Digital Assistants Create Privacy Paradox

By: Eric C. Boughman

Americans place a high value on privacy, dating back to the foundation of our country and the Fourth Amendment right to be secure in our “persons, houses, papers, and effects.” Interestingly, the word “privacy” is not found in the Fourth Amendment. Over time and through legal battles, however, courts have come to recognize a fundamental “zone of privacy” contained within the “penumbra” of rights protected by the Constitution.[1] Now, with advances in artificially intelligent devices and machine learning, individuals willingly sacrifice that hard-fought privacy in return for the many conveniences offered by “smart” digital assistants.
Digital assistants can be wonderfully helpful. They offer the potential to make us more efficient by managing our calendars, contacts and to-do lists. They can bring us the news and weather on verbal command and can brighten our day with music and jokes. I have an Amazon Echo at home and have considered adding a similar device to my office. But anyone considering the same should understand that this convenience comes with substantial privacy implications.[2]

A digital assistant remains faithfully at your beck and call because it is always “listening.” Apple Inc.’s Siri, Microsoft Corp.’s Cortana, and other digital assistants employ passive listening technology that keeps these devices in constant standby (listening) mode. Google Inc. explains that its Home device listens to snippets of conversations to detect a “hotword.”[3] Amazon.com Inc.’s Echo begins streaming to the cloud “a fraction of a second of audio before the wake word” is spoken.[4] When the wake word is detected, our assistants swing into action, ready to capture and process our inputs to provide the best results.
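To make the mechanics concrete, the following Python sketch models the pre-wake-word buffering described above. The frame source, detector and upload function are hypothetical stand-ins for illustration, not any vendor’s actual implementation.

```python
from collections import deque

# A minimal sketch of an always-listening loop with a pre-wake-word buffer.
FRAME_MS = 20            # duration of one simulated audio frame
PREROLL_FRAMES = 25      # ~0.5 seconds of audio retained before the wake word

def detect_wake_word(frame: bytes) -> bool:
    """Hypothetical detector; real devices run a small on-device acoustic model."""
    return frame == b"alexa"  # toy stand-in for acoustic matching

def stream_to_cloud(frames: list) -> None:
    """Hypothetical upload; a real device would transmit audio to its provider."""
    print(f"uploading {len(frames)} frames ({len(frames) * FRAME_MS} ms of audio)")

def listen(frame_source) -> None:
    # Ring buffer: the device is always "listening," but retains only the
    # most recent fraction of a second of audio locally until activated.
    preroll = deque(maxlen=PREROLL_FRAMES)
    for frame in frame_source:
        preroll.append(frame)
        if detect_wake_word(frame):
            # On activation, the buffered pre-wake audio is sent along with
            # whatever follows -- the behavior Amazon's FAQ describes.
            stream_to_cloud(list(preroll))
            break

# Simulated microphone input: ambient noise, then the wake word.
listen(iter([b"noise"] * 100 + [b"alexa"]))
```

The point of the ring buffer is the privacy tradeoff itself: nothing is detected unless everything is, at least momentarily, heard.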
Service providers typically disclose that they may track and record user interactions to improve the user experience. This allows our devices to become smarter. How? By learning more about us — our preferences, habits and routines. With some devices, you may be able to review and delete your interaction history, but that comes with tradeoffs. Apple cautions that if you delete user data associated with Siri, the “learning process will start all over again.”[5] Deleting your interaction history with Google will limit the personalized features of your Google Assistant. Amazon similarly explains that deleting your voice recordings “may degrade your Alexa experience.”
Digital assistants capture more than just voice data. Microsoft explains that Cortana is most helpful when “you let her use data from your device.” This may include web search history, calendar and contact entries, and location data. It may also include demographic data, such as your age, gender and ZIP code. You may be able to limit access to personal data (for instance, by not signing in to your Microsoft account), but, as with deleting your verbal interactions, you will lose the benefits of personalization and your experience will become more generic.
Optimum use of a digital assistant requires that you share personal data and thus give up some level of privacy. Beyond enhancing the user experience, however, how else might service providers process, analyze and share that data? What are your rights to restrict the use and dissemination of collected data? Can private parties or the federal government obtain your data through a subpoena, search warrant or court order? Could a government agency simply purchase your data from service providers or data brokers? To challenge a search under the Fourth Amendment, you must have a reasonable expectation of privacy. Is such an expectation reasonable in the presence of an always-listening digital assistant? While these devices are generally designed to record information only once a designated “wake” word is spoken, few consider the practical reality that, to detect the wake word, the device must always be “listening” for it.
What if a device is unintentionally activated? Accidental activations, through similar-sounding words or simple software glitches, create risks of unintended recordings. My daughter has a friend named “Alexia.” Can you imagine the confusion when she visits our home, where our Amazon Echo responds to the wake word “Alexa”? Even worse, researchers have already figured out how to surreptitiously trigger voice-controlled devices by transmitting high-frequency sounds inaudible to the human ear.[6] What type of data might a smart device capture without our express intent to provide it?
There are also risks in the data you voluntarily intend to share. Many service providers disclose within their terms of service or privacy policy that user data may be shared with third parties. While the stated purpose is usually to improve the user experience, it is often less clear whether such data may be used or shared for other purposes. In fairness to service providers, data may be anonymized, creating an element of “practical obscurity,” but with today’s computing power, how difficult will it be to piece together several anonymous data points to recreate the source?[7] Data broker Acxiom Corp., for one example, is said to hold an average of about 1,500 data points per person in its database.[8]
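A toy illustration of how little it can take: the Python sketch below joins two invented “anonymous” datasets on a few shared attributes (ZIP code, age and gender), which is often enough to link a record back to a named individual. All records and field names here are fabricated for illustration.

```python
# Toy re-identification: two "anonymous" datasets joined on quasi-identifiers.
# All data below is invented; the point is only that a few shared attributes
# can be enough to link an "anonymized" record back to a person.

anonymized_voice_logs = [
    {"zip": "32751", "age": 42, "gender": "F", "query": "directions to clinic"},
    {"zip": "32801", "age": 29, "gender": "M", "query": "jazz playlist"},
]

public_records = [
    {"name": "Jane Roe", "zip": "32751", "age": 42, "gender": "F"},
    {"name": "John Doe", "zip": "32801", "age": 29, "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "age", "gender")

def reidentify(logs, records):
    # Index the named records by their quasi-identifier tuple, then look up
    # each "anonymous" log entry against that index.
    index = {tuple(r[k] for k in QUASI_IDENTIFIERS): r["name"] for r in records}
    for log in logs:
        key = tuple(log[k] for k in QUASI_IDENTIFIERS)
        if key in index:
            yield index[key], log["query"]

for name, query in reidentify(anonymized_voice_logs, public_records):
    print(f"{name} asked: {query!r}")
```

With roughly 1,500 data points per person in circulation, the number of candidate quasi-identifiers available for such joins only grows.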
While digital assistants obviously capture voice data from users, which can be converted to text, less obvious is what additional data might be captured about users. Beyond text-based information, digital assistants might capture your voice tone, inflection, volume and behavioral patterns. Information about you is much richer than mere text. Details such as what time you typically interact with your digital assistant, the type of information or music you request, and how you adjust your smart lights, appliances and thermostat can tell a lot about your habits and routines. Whether your voice is male or female, old or young, or accented can reveal a lot about you. Other information, such as ambient background noise and whether children are present, might offer further glimpses into your home or office. Some companies, like Soul Machines and Air New Zealand, are already working on creating machines that can detect human emotion.[9]
Data about users’ interactions has the potential to be incredibly valuable. Beyond personalizing the experience, such data may also be used to influence shopping and travel habits, shape viewing and entertainment preferences, and perhaps even predict — or manipulate — elections and public sentiment.
Despite the power inherent in this data, the law remains unclear on how such data may be collected, used and shared.
Data generated from user interactions with digital assistants is typically captured and sent to the service provider or a partner for storage and processing. This data can then be analyzed and used to develop and strengthen artificial intelligence through machine learning techniques. Data is vital for this process. Machines need data to learn — the more data points the better — and digital assistants have the power to capture vast amounts of diverse data. Few consider, however, the richness of the various types of data we voluntarily (impliedly or expressly) provide.
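As a simplified illustration of why more data points make a “smarter” assistant, the Python sketch below predicts a user’s likely request from past interactions using a bare frequency count. The interaction history is invented, and real systems use far more sophisticated models, but the principle is the same: each interaction sharpens the picture.

```python
from collections import Counter

def train(history):
    """Build a toy preference model: a frequency count of past requests."""
    return Counter(history)

def predict(model):
    """Predict the most likely next request, if any history exists."""
    return model.most_common(1)[0][0] if model else None

# Invented interaction history; each entry is one request to the assistant.
interactions = ["weather", "jazz", "weather", "news", "weather"]

# With one data point the model knows little; with more, a habit emerges.
for n in (1, 3, 5):
    model = train(interactions[:n])
    print(f"after {n} interactions, predicted request: {predict(model)}")
```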
What privacy expectations are reasonable when we share so much about ourselves with a digital assistant? European law is wrestling with some of these issues,[10] but outside of the health care and financial arenas,[11] and companies targeting children,[12] U.S. legal doctrine is not currently well-equipped to deal with the treatment of big data and the companies that collect and use it.
One view is that data should be regulated like a commodity. Another view suggests companies should self-regulate through adoption of “responsible use” policies. Responsible use would suggest that companies use data in accordance with the stated purpose for which it is collected. It may also suggest different standards of secrecy and protection based on various types of data collected by digital assistants. For example, shopping lists, “flash news briefings,” and other interactions that involve third parties might be subject to a different privacy standard than daily interactions with a personal fitness monitor, thermostat, or synchronized calendar. Companies collecting data should consider policies designed to provide the appropriate level of privacy protection when processing, analyzing and sharing user data. For companies in the business of collecting user data, this also means that business practices should be consistent with written terms of service and privacy policies.
What about government rights? Under the third-party doctrine, Fourth Amendment privacy protections are lost when otherwise private information is freely shared.[13] In its current form, application of the third-party doctrine suggests that any communication you have with your personal digital assistant may be subject to search and compelled disclosure because it is freely shared with the service provider who may, in turn, share it with third parties. This would clearly seem to be the case with verbal interactions occurring after wake word activation, but what about recordings that may have been unintentional or, worse, surreptitious? Even with intentional interactions, can it really be said that users intend to share so much information about themselves? What if a service provider’s privacy policy fails to mention sharing or, better, states that user information will not be shared?
For attorneys and other professionals subject to client confidentiality rules, additional questions arise as to how the presence of an artificially intelligent, always-listening assistant may impact privilege concerns. Privileges, like the attorney-client privilege, generally protect the confidentiality of communications between certain professional advisers and their clients. Although held as sacrosanct by courts, the privilege of confidentiality can be lost when the substance of communications is shared with a third party. Courts routinely find that information in the hands of third parties is not protected by any privilege. Does the presence of a digital assistant — one that may listen and share information — put that privilege at risk?
To what degree might a court carve an exception to privacy or privilege protections for information recorded through digital assistants? Would it make a difference if a recording was intentional or inadvertent? Might the protection depend upon whether the information consisted of content (i.e., voice recordings) or, instead, other information about a certain interaction, such as the date, time, and length of a meeting as well as any “subject” as shown on a synced calendar?
The technology landscape is moving fast, and current legal doctrine is often ill-equipped to deal with new issues. We live in the information age — the age of big data. This data can be used to enrich our lives, but it also has the potential to reveal vastly more about users than they expressly intend. For now, users need to be aware of the privacy paradox presented by artificially intelligent digital assistants. Such devices have the potential to be extremely helpful, but their helpfulness is a product of the sometimes very personal data we, as users, provide. The more data you provide, the better your digital assistant will perform. And the more data you provide, in its various forms, the more your service provider (and potentially its partners) will know about you.
Eric Boughman is an attorney at Forster Boughman & Lefkowitz in Maitland-Orlando, Florida.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] See Griswold v. Connecticut, 381 U.S. 479, 484 (1965).
[2] See Boughman, Eric, et al., “Alexa, Do You Have Rights?”: Legal Issues Posed by Voice-Controlled Devices and the Data They Create, available at https://www.americanbar.org/publications/blt/2017/07/05_boughman.html.
[3] “Data security & privacy on Google Home,” available at https://support.google.com/googlehome/answer/7072285?hl=en.
[4] “Alexa and Alexa Device FAQs,” available at https://www.amazon.com/gp/help/customer/display.html?nodeId=201602230.
[5] “This is how we protect your privacy,” available at https://www.apple.com/privacy/approach-to-privacy/.
[6] See Zhang, Guoming, et al., “DolphinAttack: Inaudible Voice Commands,” available at https://arxiv.org/pdf/1708.09537.pdf.
[7] See U.S. Dep’t of Justice v. Reporters Comm. for Freedom of the Press, 489 U.S. 749 (1989), for a discussion of “practical obscurity.”
[8] See Singer, Natasha, “Mapping, and Sharing, the Consumer Genome,” available at http://www.nytimes.com/2012/06/17/technology/acxiom-the-quiet-giant-of-consumer-database-marketing.html.
[9] See https://www.soulmachines.com/blog/airnewzealandandsoulmachines.
[10] See EU General Data Protection Regulation, as approved April 14, 2016 (and scheduled to take effect on May 25, 2018), available at http://www.eugdpr.org/.
[11] See generally the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule, 45 CFR Part 160 and Subparts A and E of Part 164, as to medical records; and the Gramm-Leach-Bliley Act (GLBA), 15 U.S.C. §§ 6801-27, as to financial institutions.
[12] See Children’s Online Privacy Protection Act (COPPA), 15 U.S.C. §§ 6501-06, which pertains to children under age 13.
[13] See United States v. Miller, 425 U.S. 435 (1976); Smith v. Maryland, 442 U.S. 735 (1979).