
Why is it essential that machines can recognize and express human emotion?

Rana Gujral, CEO of Behavioral Signals, was interviewed on Voice First Health by Teri Fisher, founder and host of the podcast. Rana introduced Behavioral Signals, a company working to bridge the communication gap between humans and machines by bringing emotional intelligence, derived from speech, into conversations with AI. They discuss the AI technology and research behind emotion recognition, how it can improve human-to-human and human-to-machine interactions, how intent can be predicted, and what sort of KPIs businesses can target with this technology. They also discuss ethics and how the technology can be misused by bad actors.

Listen to the podcast – VFH Episode 55, or grab the highlights below.

Emotion Recognition by Voice with Rana Gujral of Behavioral Signals

Episode Key Points

At the core of Behavioral Signals’ engine is the variety of outputs it produces when an interaction is recorded. The engine identifies who spoke when (diarization) and deduces basic emotions such as anger, sadness, happiness, and frustration. Behavioral Signals also goes after specific aspects of tone change, the trend of positivity over the duration of a conversation.

The company is less hung up on what is being said and more focused on the emphasis behind the words. That tells them a lot.
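To make those outputs more concrete, below is a minimal Python sketch of the kind of data such a pipeline might produce: diarized speaker segments, a basic emotion label per segment, and a positivity score that can be aggregated into a tone-change trend. The class, field names, and scoring scheme are illustrative assumptions, not Behavioral Signals’ actual API.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Segment:
    """One diarized stretch of speech from a recorded interaction (hypothetical shape)."""
    speaker: str        # e.g. "agent" or "customer"
    start_s: float      # segment start time in seconds
    end_s: float        # segment end time in seconds
    emotion: str        # e.g. "anger", "sadness", "happiness", "frustration", "neutral"
    positivity: float   # -1.0 (very negative tone) .. +1.0 (very positive tone)

def tone_trend(segments: list[Segment]) -> float:
    """Rough tone-change trend: mean positivity of the second half of the call
    minus the first half. Positive values mean the conversation ended on a more
    positive note than it began."""
    ordered = sorted(segments, key=lambda s: s.start_s)
    mid = len(ordered) // 2
    return mean(s.positivity for s in ordered[mid:]) - mean(s.positivity for s in ordered[:mid])

# Toy example: a short call in which the customer's tone improves over time.
call = [
    Segment("customer", 0.0, 6.5, "frustration", -0.6),
    Segment("agent", 6.5, 14.0, "neutral", 0.1),
    Segment("customer", 14.0, 20.0, "neutral", 0.0),
    Segment("agent", 20.0, 27.0, "happiness", 0.5),
    Segment("customer", 27.0, 33.0, "happiness", 0.4),
]
print(f"Tone trend over the call: {tone_trend(call):+.2f}")
```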

-An example is a study done at Yale University, where researchers took a piece of content from YouTube and extracted only the audio. They deduced the emotional and behavioral signals from it and mapped them out. They then turned the video on and analyzed both the audio and the video, looking at the facial expressions and body language of the people in the video, and added that to the emotional and behavioral map they were piecing together.

-The natural expectation was that adding those additional data points, on top of the audio alone, would make the analysis more accurate, but they found that it actually became less accurate.

-That meant mapping based on audio alone scored higher than mapping that also included video. What they realized is that we as humans are fairly adept at masking our emotions through our facial expressions, but we can’t do that with our tone of voice.

-Scientists have shown that if one can accurately create an emotional behavioral map of an interaction, or decipher the cognitive state of mind of a participant, and understand the context of that interaction, they can predict what that person will do in the near future and, in some ways, predict intent.

-For example, Behavioral Signals is working with collection agencies and banks to predict whether a debt holder is going to pay their debt, simply by listening to a voice conversation. They have been able to do that with a very high level of accuracy; a simple illustration of the idea follows this list.

-They are also working with another company that is building a software platform to cater to patients with depression. That company is using Behavioral Signals’ technology to predict a propensity for suicidal behavior.
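As an illustration of how call-level emotional features could feed an intent prediction such as repayment propensity, here is a hedged sketch using a simple logistic-regression classifier. The feature set, the synthetic data, and the use of scikit-learn are all assumptions made for illustration; the episode does not describe Behavioral Signals’ actual models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: call-level features an emotion engine might emit, e.g.
# [mean positivity, fraction of segments labeled "anger", tone trend].
# The features, labels, and model choice are assumptions, not the vendor's method.
X_train = np.array([
    [ 0.4, 0.05,  0.3],   # calm call whose tone improves
    [-0.5, 0.60, -0.2],   # angry call whose tone deteriorates
    [ 0.1, 0.10,  0.1],
    [-0.3, 0.40, -0.4],
    [ 0.6, 0.00,  0.2],
    [-0.6, 0.70, -0.1],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = debt was later repaid, 0 = it was not

model = LogisticRegression().fit(X_train, y_train)

# Score a new call: estimated probability that this debt holder will repay.
new_call = np.array([[0.2, 0.15, 0.05]])
print(f"Estimated repayment probability: {model.predict_proba(new_call)[0, 1]:.2f}")
```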

#EmotionAI

