AI and Your Emotional Privacy
Remember how your parents would tone down an argument so as not to “upset” the children? Perhaps you even modulate your tone when your dog is in the room, so you never have to see that fuzzy face frown. Your kids’ emotions are important, and kids are infinitely empathetic. When you are tense, they pick up on your vibes.
Well now, there’s a new kid in town: artificial intelligence (AI).
Your smart device can deduce your emotional state from the inflection, pitch, and volume of your voice. This is a wondrous breakthrough that can bring your digital assistant into your inner sanctum. By understanding how urgent your needs are, Alexa, for example, can execute your requests more efficiently.
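To make the idea concrete, here is a minimal sketch, in Python, of how an assistant might turn pitch and loudness into a rough read of urgency. The librosa calls are real, but the thresholds, the “urgent”/“calm” labels, and the decision rule are illustrative assumptions, not how Alexa or any other assistant actually works.

```python
import numpy as np
import librosa

def urgency_from_audio(samples: np.ndarray, sr: int = 16_000) -> str:
    """Label a short utterance 'urgent' or 'calm' from pitch and loudness."""
    # Per-frame fundamental frequency (pitch) estimate, in Hz.
    f0 = librosa.yin(samples, fmin=65, fmax=400, sr=sr)
    # Per-frame root-mean-square energy, a crude loudness measure.
    rms = librosa.feature.rms(y=samples)[0]

    mean_pitch = float(np.mean(f0))
    mean_energy = float(np.mean(rms))

    # Toy decision rule: raised pitch plus raised energy reads as agitation.
    if mean_pitch > 220 and mean_energy > 0.05:
        return "urgent"
    return "calm"

if __name__ == "__main__":
    sr = 16_000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    # A 300 Hz tone stands in for a raised, tense voice.
    tense_voice = 0.2 * np.sin(2 * np.pi * 300 * t)
    print(urgency_from_audio(tense_voice, sr))  # -> urgent
```

Real emotion AI replaces the two hand-picked thresholds with a model trained on labeled speech, but the raw ingredients are the same: prosodic features extracted from your voice.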
But what are the downsides of welcoming a virtual mind reader into your life? Can AI be too emotionally adept for our collective comfort? How can we create boundaries between our feelings and our objective needs? These are vital questions to address as we embark on the next chapter in our shared symbiosis with machine learning. We want to make sure that we are on the same linguistic page with our AI companions.
Ethically Speaking
Let’s face it: we rely on our smart devices. Heck, you could say we have a relationship with them. We call up social media to feel connected with actual humans, especially in the age of social distancing. We trust AI to sort through our likes and dislikes so it can spoon-feed us content that immerses us in pleasure. And we even consult dating apps to envision our picture-perfect romantic future.
Smart technology is our life partner, so maybe it’s time to get a marriage counselor.
Case in point: sexual orientation. If a member of the LGBTQ community is interested in meeting someone with whom they can connect, they might download one of the aforementioned dating apps. So, whom will they meet? And how does AI factor into this love connection?
Researchers have also built machine-learning systems that aim to recognize sexual orientation. By analyzing facial features, speech patterns, and other mannerisms, these models claimed to identify LGBTQ individuals with machine precision. The results were 70% correct, which means the AI was wrong 30% of the time. So, what happened to that data? Was it sold to advertisers? Which brands cater to the LGBTQ community? More importantly: which brands do not?
These are all very important ethical questions to ponder. Our emotions open us up to wonderful opportunities in life, but they can also open us up to prejudice. If the faulty orientation-detection program described above fell into the wrong hands, the ramifications could be disastrous. Simply by analyzing our emotional expressions, technology could be used to sort us, categorize us, and limit us. That is why ethics must be at the forefront of responsible innovation.
Bot Bias
The LGBTQ community is not the only population vulnerable to prejudice. The Black Lives Matter movement has opened our eyes to the inequality that African Americans face every day, in cities around the world. Bias is often learned, which is why we need to break the cycle of racism and teach tolerance…
But is it possible that we are also teaching bigotry?
When AI is improperly designed, it can inherit the biases of its creators. For example, one shocking study showed that facial-analysis programs rated Black faces as angrier than White faces, even when the person in the photo was smiling warmly.
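Detecting this kind of bias comes down to a simple comparison: among people who are not angry, how often does the model call them angry, and does that rate differ by group? Here is a minimal sketch of that audit; the records below are made up purely to show the calculation and are not data from the study mentioned above.

```python
from collections import defaultdict

def false_angry_rate_by_group(records):
    """For faces that are truly not angry, how often is each group labeled 'angry'?"""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, true_label, predicted in records:
        if true_label != "angry":       # consider only genuinely non-angry faces
            totals[group] += 1
            if predicted == "angry":    # ...that the model labeled angry anyway
                errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit records: (group, true expression, model's label).
sample = [
    ("Black", "happy", "angry"),
    ("Black", "happy", "happy"),
    ("White", "happy", "happy"),
    ("White", "happy", "happy"),
]
print(false_angry_rate_by_group(sample))  # {'Black': 0.5, 'White': 0.0}
```

If the two rates diverge, the model is reading the same expression differently depending on who wears it.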
How can we move past these disturbing trends and into a more understanding tomorrow? It is our FATE to figure it out.
Because You Gotta Have F.A.T.E.
As scientists, we must not simply wish for a better future; we need to implement a system to make it so. Principles must guide our hand to ensure that emotion AI does not fall victim to the same prejudices that have been passed down from human to human for centuries.
To summon our best destiny, let us consider our FATE.
Fairness – The best presentation is representation. Diversity is a gift, and we must cherish it by amplifying everyone’s voice, not just the ones that reflect our culture and perceptions. By employing more people of color, more women, and more LGBTQ individuals, tech companies can create a reflection of their consumers. When diversity is embraced, AI can be imbued with emotional responses that favor understanding rather than judgment.
Accountability – It is one thing to decree diversity, but it is another to enshrine it. Hold your tech companies accountable and make sure they reflect your values.
Transparency – If you are unable to learn how your smart device thinks, that is a red flag. Smart devices are designed to study your emotional needs, but companionship is a two-way street. Test your device. Teach it tolerance. Make sure you are not entrusting your privacy to a program that may abuse it.
Ethics – This is the crystallizing point that we must always consider. How is AI being implemented to ensure my privacy and data are protected? It is an essential question for businesses to answer as they engage in customer relations that are increasingly automated and surprisingly emotionally adept.
Advise and Consent
Smart companies employ smart technology. Instead of simply foisting a monotone recording on consumers who dial a customer service number, businesses are integrating conversational analytics into their protocols. The AI on the other end of the line can detect emotional cues, sort out the needs of the consumer, and direct their call accordingly.
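As a rough illustration of what “direct their call accordingly” can mean, here is a minimal sketch that routes a call based on the words in its transcript. The cue lists, queue names, and keyword matching are illustrative assumptions; real conversational-analytics platforms lean on trained acoustic and language models rather than a hand-made word list.

```python
# Invented cue lists for illustration only.
NEGATIVE_CUES = {"angry", "ridiculous", "cancel", "unacceptable", "frustrated"}
URGENT_CUES = {"emergency", "immediately", "outage", "fraud"}

def route_call(transcript: str) -> str:
    """Pick a support queue from a rough read of the caller's words."""
    words = set(transcript.lower().split())
    if words & URGENT_CUES:
        return "priority-queue"        # escalate time-sensitive issues
    if words & NEGATIVE_CUES:
        return "retention-specialist"  # de-escalate unhappy callers
    return "general-support"

print(route_call("My card was charged twice and this is unacceptable"))
# -> retention-specialist
```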
But this newfound convenience must not sacrifice privacy.
Users do not want their data to be misused, especially sensitive data such as emotion. That is why consent is vital to the human/AI interface. But consent is not a simple matter of yes and no. There are layers to the permission process, and businesses must understand them before implementing an AI system that could abuse people’s trust.
Rather than simply issuing a blanket statement that the call is being recorded for quality assurance, companies must provide actual assurance. When a consumer is expressing raw emotion, they may not be in a state to give meaningful consent. Smart AI must recognize heightened emotions and defuse the situation. If a caller uses a threatening voice, for example, the AI should issue a warning that such language is not tolerated by the company and that the call may be terminated.
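Here is a minimal sketch of that warning-then-termination policy, assuming an upstream model already scores each utterance for hostility on a 0-to-1 scale. The thresholds, strike count, and action names are illustrative assumptions, not any company’s actual rules.

```python
def handle_utterance(hostility: float, strikes: int) -> tuple[str, int]:
    """Return the assistant's next action and the updated strike count."""
    if hostility >= 0.9:
        return "terminate_call", strikes      # outright threats end the call
    if hostility >= 0.6:
        strikes += 1
        if strikes >= 2:
            return "terminate_call", strikes  # repeated abuse ends the call
        return "issue_warning", strikes       # one warning before terminating
    return "continue_normally", strikes

action, strikes = handle_utterance(hostility=0.7, strikes=0)
print(action, strikes)  # -> issue_warning 1
```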
Just as AI learns from us, we must learn from AI. Perhaps a “time out” is what some consumers need. Until that time, the public must be assured that their data is not being exploited. Emotion AI is truly a breakthrough, but it must not break our trust.