Dr. Nassos Katsamanis, V.P. of Engineering at Behavioral Signals, was interviewed by PRISMA SCIENCE MAGAZINE, a Greek magazine focused on science, technology, and how they affect society. He spoke on what emotion AI is, why machines should recognize and express human emotions, where these technologies are applied, and how they can impact and benefit businesses.
Dr. Katsamanis introduced Behavioral Signals – a startup focused on bridging the communication gap between humans and machines by introducing emotional intelligence, derived from speech, into conversations with AI. He explained that emotion AI is the ability of machines to recognize human emotions and respond to them accordingly. Identifying and understanding human emotions is critical for Artificial Intelligence systems to behave appropriately in any situation and integrate smoothly into all aspects of human life. Companies are now investing in emotion AI, having realized the opportunity for a new type of communication between humans and machines, one that offers the user a more natural interaction than ever before.
Emotion AI aims to substantially improve the user experience by enabling, for example, revolutionary business virtual-assistant applications or the programming of robots that care for our loved ones. Worth mentioning is the use of real data for training. This data comes either from call centers or from voice assistants, which complicates matters, since the two sources differ considerably in their emotional content. Therefore, one of the biggest challenges in the process is to find, select, and utilize data that will add value to the ML models and help improve overall performance.
What is also of high importance for Behavioral Signals, he added, is combining text and voice – ‘what’ is being said and ‘how’ it is said – as in cases where one says something but means something else.
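The idea of weighing ‘what’ is said against ‘how’ it is said can be sketched in a few lines. Everything below – the function, the score ranges, the weights, and the mismatch threshold – is an illustrative assumption for the sake of the example, not Behavioral Signals' actual method:

```python
# Hypothetical sketch: fusing a text-based sentiment score ("what")
# with a voice-based valence score ("how"), both in [-1, 1].
# Weights and threshold are invented for illustration.

def fuse_signals(text_sentiment: float, vocal_valence: float,
                 mismatch_threshold: float = 1.0) -> dict:
    """Combine the two channels and flag cases where words and tone
    disagree (e.g. a polite phrase delivered in a strained tone)."""
    mismatch = abs(text_sentiment - vocal_valence) >= mismatch_threshold
    # Weight the vocal channel slightly higher: when the channels
    # conflict, tone often reveals the speaker's actual state.
    combined = 0.4 * text_sentiment + 0.6 * vocal_valence
    return {"valence": combined, "words_tone_mismatch": mismatch}

# "Everything is fine" said in a clearly negative tone:
result = fuse_signals(text_sentiment=0.8, vocal_valence=-0.6)
```

Here the positive words and negative tone produce a mismatch flag and a slightly negative combined valence, which is the ‘says one thing, means another’ case the interview describes.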
Dr. Katsamanis also highlighted the challenges and obstacles that arise when developing such applications:
Privacy: Humans feel their emotions are private and appear to have genuine concerns about privacy violations. Protective legislation should be expanded to prevent the risks associated with AI, particularly when it comes to the collection, storage, transfer, and use of confidential health information.
Accuracy: Artificial Intelligence must determine the emotional state with high accuracy, guarding against bias and system errors, before a person is characterized as more or less emotional.
Security: It is necessary to ensure that Artificial Intelligence applications respond appropriately to users, so that they neither aggravate a user's emotional state nor accidentally facilitate a negative situation.
Responsibility: Response protocols are required for the proper management of cases that the AI technology has marked as high-risk.
The interview ended with a question on where these technologies can be applied. Emotion AI makes most people think of humanoid robots in a customer-service role, Nassos said. Indeed, some companies have added emotion recognition to personal-assistant robots so that they can have more human-like interactions. In the last two years, however, emotion AI has expanded into entirely new areas and industries to improve customer experience and save resources. For example, these technologies can be used in video games, where the game adapts to the emotional state of the player; in medical diagnosis of conditions such as depression; in intelligent call-center routing, where an angry customer can be routed to a well-trained employee; in the safety of workers with demanding jobs and high stress levels; and in education.
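The call-routing application mentioned above can be illustrated with a toy rule that maps per-call emotion scores to an agent queue. The queue names, score ranges, and thresholds are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of emotion-aware call routing.
# Scores are assumed to lie in [0, 1]; names and cutoffs are invented.

def route_call(anger_score: float, frustration_score: float) -> str:
    """Pick an agent queue from a call's emotion scores."""
    if anger_score >= 0.7:
        return "senior_agents"    # escalate clearly angry callers
    if frustration_score >= 0.5:
        return "retention_team"   # frustrated callers risk churning
    return "general_queue"

queue = route_call(anger_score=0.85, frustration_score=0.3)
```

An angry caller lands with the most experienced agents, matching the interview's example of routing an angry customer to a well-trained employee.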