It has been said that the term “eavesdropper” evolved from those who stood under the eaves of a house to surreptitiously listen to the goings-on inside. In this age of digital advancement, we now invite eavesdroppers into our homes and offices in the form of artificially intelligent digital assistants. While devices like the Google Home and Amazon Echo, and assistants like Apple’s Siri, offer great convenience and enjoyment, they come with privacy trade-offs, some less obvious than others.
When you welcome one of these devices into your home or workspace, you add a digital device that is at your beck and call because it is always “listening.” Google Home, for instance, listens to snippets of conversation to detect its “hotword,” and Amazon’s Echo begins streaming audio to the cloud “a fraction of a second” before the wake word (typically, “Alexa”) is detected. Because we summon these devices with a hotword or wake word, it should come as no surprise that our interactions are tracked and recorded by the device’s service provider. You can review and delete your history, but that comes with trade-offs as well. Google explains that deleting your interaction history will limit the personalized features of your Google Assistant. Amazon similarly warns that deleting your voice recordings “may degrade your Alexa experience.” Apple is more elusive: it has stated that it will anonymize and encrypt voice data from its forthcoming HomePod speaker, but it is less clear what Apple actually intends to do with that encrypted data.
Data generated from user interactions with artificially intelligent digital assistants is typically captured and sent to the service provider’s cloud for storage and processing. This data, which we voluntarily provide, can then be analyzed and used by the service provider to develop and strengthen artificial intelligence systems through machine learning. Data is vital to this process: machines need data to learn, the more the better, and digital assistants have the power to capture vast amounts of it. Few consider, however, what happens to the data we provide or, for that matter, even what type of data we provide.
Digital assistants obviously capture voice data from the user, which can be converted to text. Less obvious is the other information captured about the user, which is far richer than mere text. Do you engage your digital assistant at the same time every morning? Do you speak with an accent? What type of mood were you in when you asked for that Van Morrison song? Do you regularly turn down your “smart” thermostat and dim your lights at the same time each evening, except on Saturdays? What type of ambient background noise is typically present? How many people live in your home? Are there any children?
In addition to text-based information, digital assistants may capture your voice tone, inflection, volume, and behavioral patterns. Data about user interactions has the potential to be incredibly valuable. “Big data” has become a catchphrase for the industry of companies collecting, analyzing, and processing vast quantities of data. Some companies, like Soul Machines and Air New Zealand, are already working on creating machines that can detect human emotion and communicate empathetically. While this may improve customer service experiences, it may also be used to influence shopping and travel habits, shape viewing and entertainment preferences, and perhaps even predict, or manipulate, elections.
Here in the United States, we enjoy a right to be free from government intrusion into our private lives, but that right exists only when we have a reasonable expectation of privacy. What privacy expectations are reasonable when we share so much about ourselves with a digital assistant? European law is wrestling with some of these issues, but outside of the health care and financial arenas, and companies targeting children, U.S. legal doctrine is not currently well-equipped to deal with the treatment of big data and the companies that collect and use it. For now, courts will need to address these issues on a case-by-case basis.
Beyond the content of your requests, what about other information about you? Companies tend to be less clear about what happens with the non-text-based data they capture, store, and share about their users. Besides your spoken and written interactions, what other information does your service provider share? Usage habits? Calendar details? Shopping lists? These are some of the questions to ponder when using your digital assistant.
We live in the information age, the age of big data. This data can be used to enrich our lives, but it also has the potential to reveal vastly more about users than they expressly intend to share. By now, many are aware of Amazon’s fight with law enforcement over the disclosure of Echo transcripts. But how many also know that police relied on data from a smart water meter, showing abnormally high water usage, as evidence in their investigation? As companies continue to collect data about users, and as predictive models for that data are fine-tuned, courts will need to redefine the parameters of reasonableness when it comes to the expectation of privacy.
For now, users need to be aware of the privacy paradox posed by artificially intelligent digital assistants. As the saying goes, if you don’t know what the product is, you are the product. The question is ripe with respect to digital assistants: Are we the customer, or are we the product? Or are we a combination of both?
Eric Boughman is a partner with Forster Boughman & Lefkowitz in Maitland. His practice encompasses legal issues affecting businesses and entrepreneurs, including corporate governance matters, business disputes, internet, software and technology issues, privacy and asset protection.