Glyn Moody is a freelance journalist who writes and speaks about privacy, surveillance, digital rights, open source, copyright, patents, and general policy issues involving digital technology. On Privacy News Online, he wrote an article about how digital technologies are increasingly being applied to customer service, and how the new generation of advanced, computer-aided systems raises important questions about privacy.
He raises some serious concerns when it comes to data storage and processing, mentioning our company along the way: “The use of NLP (natural language processing) or AI is one of the key features of these new customer service systems. The problem is that in bringing together many disparate sources of information, and applying AI and other advanced techniques, it may be possible to glean extra, possibly highly personal details that the customer had no intention of revealing. Inferences may be made, and decisions suggested, all without the customer knowing, or being able to challenge them. Another feature that a unified pool of messaging data makes possible is ‘sentiment analysis’, to provide human agents with the context they need to personalize future customer interactions. That’s the focus of another company in this new field, Behavioral Signals, which applies advanced analysis to the voices of customers.”
What to know when it comes to AI and privacy
We understand his concerns and do believe that, in some cases, data could be used in an unlawful manner. On the one hand, regulations are being put in place to protect data and clarify how companies process it; on the other, data is anonymized before it is processed.
The key factors to consider, when it comes to AI and privacy, are:
1. There are regulations, such as the GDPR, on how data can be stored and processed in order to protect the customer.
2. Companies like Behavioral Signals have to comply with security and privacy protocols, such as SOC 2, when handling data.
3. Most AI companies that process data for analysis purposes have their software installed on premises, which means the data never leaves the company that collects it, for example a customer service provider.
4. Data is anonymized (Wikipedia: a type of information sanitization whose intent is privacy protection) before it is processed, so it is difficult to know who is behind a call. De-anonymizing data takes significant effort and cost, and why would any enterprise go to that trouble for a wild goose chase? (See the sketch after this list for what this looks like in practice.)
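To make item 4 a little more concrete, here is a minimal Python sketch of one common anonymization step: replacing direct identifiers with salted hashes before any analysis happens. The field names (caller_id, phone, customer_name) and the salting scheme are our illustrative assumptions, not a description of any specific company’s pipeline.

```python
import hashlib
import os

# The salt stays with the data owner; without it, the tokens below
# cannot realistically be reversed back into real identifiers.
SALT = os.urandom(16)

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with salted hashes; drop free-text PII."""
    safe = dict(record)
    for field in ("caller_id", "phone"):
        if field in safe:
            digest = hashlib.sha256(SALT + str(safe[field]).encode()).hexdigest()
            safe[field] = digest[:16]  # opaque token with no analytic downside
    safe.pop("customer_name", None)  # a name adds nothing to voice analysis
    return safe

call = {"caller_id": "C-1042", "phone": "+1-555-0100", "duration_s": 312}
print(pseudonymize(call))
# e.g. {'caller_id': '3f2a…', 'phone': '9c1b…', 'duration_s': 312}
```

The analyst downstream sees only opaque tokens and the signal they actually need, which is exactly why re-identification is such an expensive wild goose chase.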
In general, companies that own the data, like customer service providers, are very careful when handling it. They have to comply with security protocols and state regulations, and they have their own reputation to protect. No company wants to become a newspaper headline about how its data was breached or misused. That’s why they make third parties, like Behavioral Signals, jump through hoops when it comes to the security and handling of their data.
Also, keep in mind that some of the data processed is not actually unique to an individual human. Emotions, for instance, are more universal. Here at Behavioral Signals, emotion and behavior recognition is based on how you say something, not what you actually say. Think of your wife asking whether you “…remembered to buy milk?” before you come home. There are a ton of ways she could express that question: lovingly, sarcastically, judgmentally, or just neutrally. That’s what Behavioral Signals’ AI technology ‘listens’ to: the how.
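As a rough illustration of analyzing the “how” rather than the “what”, the sketch below uses the open-source librosa library to pull a few paralinguistic features, pitch and energy contours, from an audio clip. The feature set and the bundled example clip are assumptions for demonstration; this is a generic illustration, not Behavioral Signals’ actual model.

```python
import librosa
import numpy as np

# Load a bundled example clip as a stand-in for a call recording.
y, sr = librosa.load(librosa.example("trumpet"))

# Fundamental frequency (pitch) contour via probabilistic YIN.
# Pitch movement is a big part of HOW something is said.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
)

# Short-term energy (loudness) contour.
rms = librosa.feature.rms(y=y)[0]

# Summary statistics a downstream emotion classifier might consume.
# Note: none of these encode the words that were spoken.
features = {
    "pitch_mean_hz": float(np.nanmean(f0)),   # overall vocal register
    "pitch_var": float(np.nanvar(f0)),        # monotone vs. expressive
    "energy_mean": float(rms.mean()),         # soft-spoken vs. forceful
    "voiced_fraction": float(np.mean(voiced_flag)),  # crude pacing proxy
}
print(features)
```

Whether “…remembered to buy milk?” lands as loving or sarcastic shows up in contours like these, not in the transcript, which is why this kind of analysis can work on data that reveals far less about who is speaking.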
We strongly believe consumers need to be aware of their privacy and their rights; they must be informed in order to protect themselves. So head over to Privacy News Online and read Glyn Moody’s interesting article, “When AI-enhanced customer service is on the line, so is your privacy.”