Is There An Echo In Here? What You Need To Consider About Privacy Protection

By: Eric C. Boughman

If you have an Amazon Echo, try this: Say, "Alexa, tell me a joke," but do it very quickly so that you finish the request before Alexa "wakes up" (indicated on the Echo by the blue light). Did you notice that Alexa dutifully complied, seemingly catching the request before she (it?) was awake? There is a simple explanation for this: Alexa (like other artificially intelligent digital assistants) is always listening. Indeed, Alexa starts recording "a fraction of a second" before the wake word. Google Home likewise listens to snippets of conversations to detect its "hotword."
After becoming more familiar with Alexa at home, I considered adding an Echo or similar device to my law office. I imagined the added convenience of having my own artificially intelligent digital assistant in the office. She could make notes and calendar entries, add items to my checklist, tell me who I'm meeting for lunch and where, and perhaps add time entries and quickly retrieve obscure facts, all with a simple verbal command. But since smart devices like Alexa are always listening, the added convenience comes with a tradeoff – one with substantial privacy implications. How comfortable would you be knowing that transcripts of your verbal interactions are kept by many digital assistants' service providers?
What are your rights to restrict the use and dissemination of collected voice data? Can private parties or the federal government obtain this data through a subpoena, search warrant, or court order (or without one)? To challenge a search under the Fourth Amendment, you must have a reasonable expectation of privacy. Is such an expectation reasonable in the presence of a digital assistant? While these devices are generally designed to record only once a designated wake word is spoken, few consider the practical reality that, to detect the wake word, the device must always be listening for it.
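That "always listening" design can be illustrated with a toy sketch. Everything here is an illustrative assumption, not Amazon's or Google's actual implementation: audio is modeled as short text "frames," the wake word and buffer length are made up, and detection is a simple substring check. The point is structural, and it shows why a recording can begin a moment before the wake word: a short rolling buffer of recent audio is always being overwritten, and once the wake word is heard, that pre-wake buffer is swept into the recording.

```python
from collections import deque

# Hypothetical parameters for illustration only.
WAKE_WORD = "alexa"
PRE_ROLL = 2  # how many recent frames the rolling buffer retains

def process_stream(frames, wake_word=WAKE_WORD, pre_roll=PRE_ROLL):
    """Simulate an always-listening assistant.

    Every frame passes through a short rolling buffer. Nothing is
    'recorded' until the wake word is detected; at that moment the
    buffered pre-wake frames are captured too, which is why audio
    from just before the wake word can end up in the recording.
    """
    buffer = deque(maxlen=pre_roll)  # continuously overwritten
    recording = []
    awake = False
    for frame in frames:
        if awake:
            recording.append(frame)
        elif wake_word in frame.lower():
            awake = True
            recording.extend(buffer)  # pre-wake audio is swept in
            recording.append(frame)
        else:
            buffer.append(frame)  # heard, then discarded
    return recording

# Chatter before activation is discarded -- except the frames still
# sitting in the rolling buffer when the wake word arrives.
frames = ["private chat", "more chat", "tell me", "Alexa", "a joke"]
print(process_stream(frames))  # ['more chat', 'tell me', 'Alexa', 'a joke']
```

Note that even in this toy version, every frame is examined; the design choice is only about what is retained, which is exactly the distinction the privacy questions below turn on.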
What if a device is accidentally activated? In a recent client meeting, someone answered in agreement to a question, beginning with, “sure, he can do that …” On a nearby iPhone, Siri heard her name and began actively listening. Even scarier: A friend recently explained that he loves his new Samsung Galaxy phone but is annoyed that Bixby (Samsung’s AI assistant) is often triggered unintentionally and seems to have a mind of his own. Accidental activations, often through similar sounding words or simple software glitches, create risks of unintended recordings.
Additional risks are present in the data you intend to share. Your privacy expectations may be undercut by a service provider's terms of service or privacy policy for a given device. For instance, as disclosed by Alexa’s terms of use, if you access third-party services and apps through Alexa, Amazon (naturally) shares the content of your requests with those third parties. Amazon further discloses that data you provide may be stored on foreign servers. As such, U.S. Fourth Amendment protections may not apply.
Amazon handles the information received from Alexa in accordance with its privacy policy. Your interactions with Alexa, including voice recordings, are stored in the cloud. You can review and delete them, but Amazon explains that deleting them may degrade your Alexa experience. Google similarly explains that deleting your interaction history will limit the personalized features of your Google assistant. Artificially intelligent devices need data from users – the more the better – to learn and adapt. The privacy paradox is that users must, therefore, agree to sacrifice some degree of privacy to enrich the user experience.
Companies like Amazon and Apple have made headlines vigorously defending their customers' privacy. But what about the third parties to whom they subcontract services? Apple is notoriously stingy about sharing information, but both Google and Amazon acknowledge sharing information with third-party providers, generally to "improve the customer experience." Will these third parties – some perhaps overseas – defend privacy as vigorously if challenged?
Under the third-party doctrine, Fourth Amendment privacy protections are lost when otherwise private information is freely shared. In its current form, application of the third-party doctrine suggests that any communication you have with your personal digital assistant may be subject to search and compelled disclosure because it is freely shared with the service provider. This would clearly seem to be the case with verbal interactions occurring after wake-word activation, but what about recordings that may have been unintentional, or worse, surreptitious?
For attorneys, additional questions arise as to how the presence of an artificially intelligent, always-listening assistant may impact attorney-client privilege. The privilege, which protects communications between an attorney and client as confidential, is generally held sacrosanct by courts, but it can be lost when the substance of those communications is shared with a third party. Moreover, courts routinely find that information in the hands of third parties is not protected by attorney-client privilege. Does the presence of a digital assistant put that privilege at risk?
To what degree might a court carve an exception to privacy or privilege protections for information recorded through digital assistants? The technology landscape is moving fast and current legal doctrine is often ill-equipped to deal with new issues. For now, if I decide to add a "smart" digital assistant to my office, I'll be sure to unplug or deactivate it during any meetings that I wish to remain confidential.