Behavioral Signals: Rana Gujral in Hackernoon on Voice and Privacy

Voice Analysis and Your Privacy

Writing in Hackernoon, Rana Gujral addresses the public’s privacy concerns about face, voice, and emotion recognition as biometric identifiers. He argues that, for emotion recognition, voice can be both more accurate and less invasive than facial recognition.

As the use of biometric identifiers such as facial scanning becomes widespread around the world for surveillance, public concern about anonymity and privacy is growing. Critics question both the purpose of this technology and its accuracy, which they argue falls well short of what companies claim.

As Gujral says: “What makes facial recognition and conversational AI unique from a privacy perspective? To start, we wear our faces everywhere. A thousand cameras could be capturing your every move and you’d never be the wiser. Your voice you control. You consciously decide to speak and can control what you say, and to some degree, how you say it.

While you can certainly be recorded without your knowledge, you have control over what is said and when. Both public and private use of facial recognition have come under fire in recent weeks, as have the revelations that tech company employees and contractors have access to recordings from voice assistants. Both methods will need to be regulated to meet basic privacy requirements, but in the long term it will be easier for people to feel they have control over who uses their voice than their face. 

At the same time, users are overwhelmingly willing to share data if it means a more personalized experience, as long as the companies with which they share that data are transparent about its use. With 38% of information conveyed by speech and tone of voice, the more personalized a voice interface becomes, the more accurate it will be. Trust and transparency will make this viable and acceptable to many consumers if implemented properly”.  

A short but thought-provoking article worth reading.