
The problem with voice assistants

Users need to find a balance in the trade-off between convenience and loss of privacy

Devangshu Datta
Last Updated: Jul 14, 2019 | 11:22 PM IST
A recent incident in Belgium highlighted the growing privacy concerns around voice-activated digital assistants. A contractor sent audio recordings of over 1,000 Google Assistant conversations to the public broadcaster, VRT. These contained medical information as well as personal data that allowed users to be identified. What’s more, 153 of these recordings should never have been made, as the “Ok Google” command was not clearly given. Google responded with defensive PR on its official blog, saying such recordings are anonymised and transcribed so that experts can analyse them and help make the virtual assistant smarter.

This problem won’t go away. Google Assistant, Apple’s Siri and Amazon’s Alexa are used by well over a billion people. They make life easier in many ways. Their very existence is a triumph for machine learning.  

But they need to be “trained” on tonnes of data. Hence, conversations are recorded. To improve functionality, human teams listen to those conversations, sharpening the assistants’ understanding of natural speech with its pauses and ungrammatical constructions.

Although these conversations are supposedly anonymised, the recording is, in itself, intrusive. As this breach showed, the conversations themselves may contain sensitive personal data that identifies users, along with their locations. It also confirmed that assistants can switch on and record in error.

This also happens with Alexa. Users can play back transcripts of Alexa conversations, and this recording error happens with some frequency. The glitch occurs when the “wake word” process goes wrong: random conversation, or some background noise, is “heard” by the machine as the wake word. In effect, a digital assistant includes a microphone that is always on and listening. The programme is supposed to start recording only when the wake word is used, but the mike itself is always on. Even where devices offer a push switch to activate it, users tend to keep the mike on by default, because always-on listening is inherent to the way these assistants function.
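
How easily that process misfires is easier to see in a toy sketch of the control flow, written here in Python. Everything in it is invented for illustration: the wake word, the confidence threshold and the scoring function are stand-ins, and a real assistant scores raw audio frames with an on-device acoustic model, not text.

import random

WAKE_WORD = "ok google"
THRESHOLD = 0.8  # illustrative cut-off, not a real product setting

def wake_word_confidence(utterance):
    # Toy stand-in for an acoustic model: returns a "confidence" that
    # the utterance contains the wake word. Similar-sounding speech can
    # score high enough to trigger a recording by mistake.
    text = utterance.lower()
    if WAKE_WORD in text:
        return random.uniform(0.80, 1.00)  # intended trigger
    if "google" in text:
        return random.uniform(0.50, 0.95)  # may false-trigger
    return random.uniform(0.00, 0.40)      # usually stays below threshold

def listen(stream):
    # The microphone loop never stops: every utterance is scored, and
    # recording is only *meant* to begin above the threshold.
    for utterance in stream:
        if wake_word_confidence(utterance) >= THRESHOLD:
            print("recording started by:", repr(utterance))

listen([
    "Ok Google, set a timer",   # intended trigger
    "I googled it yesterday",   # can false-trigger
    "background chatter",       # normally ignored
])

The point of the sketch is the loop itself: every utterance is scored, and anything that clears the threshold, intended or not, starts a recording.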

The programme can thus be started by accident, and that isn’t under the user’s control. The recordings are sent back to the companies’ servers for analysis, including human analysis. This helps improve functionality, and it is critical to developing the technology that makes assistants useful.

But this process is insecure. Contractors hired to transcribe and analyse such recordings may choose to break confidentiality. For that matter, a server hack could put such data into the wrong hands.

Anonymising can also lead to poorer functionality. One reason why some users claim Siri lags Google Assistant and Alexa in what it can do could be related to anonymising. Siri anonymises to a greater extent: users can’t play back recordings, and human analysis is done only with a random tag attached to identify specific conversations. But this also means that Siri doesn’t learn the individual quirks and tastes of specific users.
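
A minimal sketch of what that random-tag approach might look like, assuming a simple key-value record (the field names are hypothetical):

import uuid

def pseudonymise(record):
    # Keep the transcript for human review, but replace the user's
    # identity with a random tag that identifies only the conversation.
    return {
        "tag": str(uuid.uuid4()),
        "transcript": record["transcript"],
    }

original = {"user_id": "user-42", "transcript": "remind me about my doctor's appointment"}
print(pseudonymise(original))

Note the limitation the Belgian breach exposed: the tag hides who spoke, but the transcript itself may still name the speaker, an address or a medical condition.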

Amazon and Google monetise their assistants’ interactions in ways that make them reluctant to anonymise. They run big-data and metadata analysis to discover patterns in searches, online purchases, entertainment choices, appointments, medical conditions, phone calls, text messages and so on. It is even possible for assistants to accurately gauge the user’s mood and mental health from tonality.

This analysis builds more complete profiles that can be used for targeted marketing and personalised advertising. It can seem like magic when Amazon offers you a choice of TV shows, music and books that you really like, or when Google throws up ads for destinations you’re interested in visiting. That “magic” is enabled by this sort of analysis of your online usage patterns and your interactions with assistants.

That convenience comes at a cost. Users need to find a balance in the trade-off between loss of privacy and convenience. Striking it would require a far greater degree of transparency, and of user control over the data being harvested. Unfortunately, most jurisdictions don’t have legislation that enables this. The European Union’s General Data Protection Regulation (GDPR) is pioneering legislation in this area, but most nations don’t have privacy laws of this strength. In effect, that means the corporate service provider decides how much privacy it will allow.

There have been thought experiments and speculation about what the service provider should do if an assistant picks up a conversation about criminal or terrorist activity, or hears an incident of domestic violence. That’s a grey area, but it’s likely this has already happened. The vast majority of recordings would be about the mundane, but they would also yield a great deal of private information. In this version of dystopia, it isn’t only Big Brother that’s watching.

 
