Alexa, Help Me Be a Better Human: Redesigning Conversational Artificial Intelligence for Emotional Connection
by Evie Cheung
Copyright © 2019 by Evie Cheung. All Rights Reserved. First Printing: May 2019.
Abstract
This project is a year-long exploration of artificial intelligence (AI) as a tool for understanding human psychology. In this investigation, design is a vehicle to interrogate the status quo of the AI domain and envision new applications for AI in the psychology space, specifically through the lens of women of color. Various research methodologies were utilized in this process, including interviews with subject matter experts, user research, and a co-creation workshop. This research informed the creation of five speculative design provocations that prioritize emotional honesty and inclusion. Ultimately, this work intends to bridge the ever-widening gap between technology and humanity. It is a counter perspective that imagines: What if AI can make us more human?
01. Introduction (page 14): It's an AI world and we're all just living in it. Technology has infiltrated our daily lives; what are the benefits and consequences?
02. The Landscape (page 22): What are experts worried about when it comes to artificial intelligence? Hint: it's not about killer robots.
03. Research Methods (page 30): A summary of research methodologies used to create this work that includes sacrificial concepts and a co-creation workshop.
04. Early Prototypes (page 44): Initial prototypes that helped me explore how to counteract bias in three of the top uses of artificial intelligence.
05. Imagining a New World (page 60): To what extent was I upholding the system that I wished to critique? This was a major turning point in my journey.
06. Rookee: A More Emotional AI (page 66): If AI can increase emotional connection, does AI itself need to be more emotional? I explored this question with Rookee.
07. Affie: AI As Part of the Family (page 82): What if AI could say all the things that I cannot? Affie is a smart speaker disguised as a vase that sits on your dinner table.
08. Sigma: AI as Your Friend (page 94): What if AI could call you out on self-detrimental thoughts and behaviors? Sigma is an open-source mental health community.
09. BluBot: AI as Your Therapist (page 104): What if AI could act as your therapist to help you learn about yourself? I brought this question to life with a public intervention.
10. Conclusion (page 136): What are next steps? What are our individual roles and responsibilities in the creation of artificial intelligence?
For the designers, psychologists, and intersectional thinkers who continue to work to ensure a human future for the technology that we interact with every single day.
Preface

A designer, a machine learning engineer, and a psychologist walk into a bar...

The journey to this work started when I was studying psychology in college. I struggled to come to terms with the whitewashing of an entire field that claims to make sense of human nature with the intention of improving people's lives. The majority of American Psychological Association (APA) recognized studies were conducted with white, middle-class Americans. This group, typically college-educated, is often the most easily accessible to researchers, and, as a result, the findings present a view of psychological norms embedded in a narrow cultural experience. The psychological experiences and traumas of immigrants and of first- and second-generation children are almost entirely absent. Oftentimes they are treated as edge cases, studied only in courses dedicated to "cross-cultural psychology." These courses are never mandatory; psychologists are never "required" to understand the experiences of communities of color, which drastically widens the gap in mental health services available for people of color.

The few times that the dreaded topic of race infiltrated the classroom, usually brought up by me or another student of color, it was immediately glossed over by the professor countering with a blanket statement of universal human experience: that all people, regardless of race, have the same experience. In this rebuttal, the professor would usually allude to the Diagnostic and Statistical Manual of Mental Disorders (DSM) as the authority for defining normative mental illness and human experience. Never mind the fact that the DSM is a publication controlled and written by the APA. I later discovered that this same systemic racial bias is also rampant in design and technology, manifested in products like Google's image recognition algorithm that classified black individuals as "gorillas"1 and Facebook's "racist soap dispenser."2

After graduating, I landed in the tech startup world on the in-house marketing team of an advertising technology company, evangelizing the religion of big data and social advertising. But what the hell was this technology, and did we understand its ramifications? Of course not, but as a newly graduated woman of color navigating the white-male-dominated professional world, I kept my mouth shut.

1 Vincent, James. "Google 'fixed' its racist algorithm by removing gorillas from its image-labeling tech." The Verge. 12 January 2018.
2 Fussell, Sidney. "Why Can't This Soap Dispenser Identify Dark Skin?" Gizmodo. 17 August 2017.
By keeping quiet to blend in, I felt both privileged and oppressed, as if I had been let in on their technological secrets and just had to play along. It is shockingly lonely to be surrounded by white affluence your entire life and never be fully accepted by it.

But then things started to change. With the Cambridge Analytica scandal, in which Facebook user data was harvested to target voters in the 2016 election, the general public began to see the tip of the big data iceberg. I myself began to question the unintended consequences of these technologies through my MFA degree at the School of Visual Arts in New York City.

When I began this thesis journey, I had a very different idea of what I wanted to do with this project than where I would end up. Initially, I wished to explore artificial intelligence as it intersects with race and gender using speculative design. Specifically, I aspired to reimagine the TV series Westworld, whose amusement park represents the affluent white man's dream of being able to fuck or kill whomever he wants without consequence, explicitly from the perspective of women of color. I believed that I could execute such a visionary mission at a progressive graduate program in one of the most diverse cities in the world. I was wrong. The same systemic racial bias crept into my thesis in the first few weeks, as I was told by several white male colleagues that my work offended them. I was determined to push forward, but received little support from faculty or classmates. Halfway through, I realized that this was not the work that was going to be properly incubated in this program.

Little by little, I started to remove color from my thesis: literally, by removing gradients and muting my color palettes, and figuratively, by silencing my argument for designing for racially specific user groups and instead designing for the general user of "humans." Yes, I could have been stronger in fighting for my vision and speaking out, but shouting upon racially deaf ears is exhausting and takes a mental toll.

While this is not the thesis that I had initially envisioned, it tells a more nuanced story of the metanarrative that I have watched unfold many times over the course of my life. Instead of a resounding battle cry, my thesis has morphed into a Trojan horse. It is less angry. It is less loud. But I'm beginning to make my peace with it. For any real social change to happen, history needs both individuals shouting outside for justice and people quietly chipping away at the system from within. Pressure from both sides ensures that the walls in between will crumble.
This graduate design thesis lives at the intersection of my personal experiences in the fields of psychology, technology, and design. Specifically, how will the rise of artificial intelligence and machine learning alter human behavior and interaction? How will it affect the mental models of the next generation and how they perceive and experience the world? How might design play a significant role and what is our responsibility in all of this? This work dissects the use of ubiquitous conversational artificial intelligence, such as digital voice assistants. It calls into question the racist and sexist values embedded in this technology that “augments” our lives so seamlessly. In this investigation, design is a vehicle to interrogate the status quo of the AI domain and envision new applications for AI in the psychology space—specifically through the lens of women of color. It offers a counter perspective, a challenging of the status quo, and a rewriting of technology’s dominant narrative. We are currently at a pivotal moment in the development of artificial intelligence. I am here to ask the question of “what if?” as a cautionary interrogation, but ultimately as a hopeful provocation. We have designed the messy world we occupy now, but we are also more than capable of designing a more humane reality—and we can utilize, not shun, artificial intelligence to do it. Here we go. Enjoy.
Evie Cheung May 2019
01. It's an AI world
Your Amazon Echo wakes you up at 7 a.m. Alexa greets you good morning. As you’re lying in bed, you reach for your iPhone. It scans your face and unlocks it. A notification pops up reminding you about your new client meeting this morning downtown. You open up Google Maps for the best possible route driving into the city. It gives you three possibilities; you pick the one without significant slowdowns. As you’re getting ready, you’re unsure what to wear; the early spring weather has been fickle. You ask Alexa what the weather is like outside; she tells you it will be partly cloudy with a high of 52 degrees. Great. Light jacket it is.
Figure 1.1 One of the most ubiquitous conversational AI products: Amazon Echo
THE UBIQUITY OF ARTIFICIAL INTELLIGENCE
ALEXA, GET ME A COFFEE

Before you've even gotten out of bed, you have already interacted with artificial intelligence multiple times. You've interfaced with natural language processing via Amazon Alexa, facial recognition via your iPhone, and machine learning via Google Maps. Artificial intelligence (AI) has already infiltrated nearly every part of our daily lives in invisible ways. Frequently, AI is used as a window into our personal lives. Machine learning algorithms are deciding what movies you watch next, approving or rejecting you for your next job, and keeping track of your location at all times. They do this through products and services that we have become dependent on and feel we cannot live without.
image via Unsplash
As a result, these technologies provide entertainment and convenience in ways that continue to capture our attention and fill our time. They can augment our lives, but there is a fine line between augmentation and impairment. Navigation apps have enabled us to find the quickest route from point A to point B, but have also left us barely able to get anywhere on our own. Alexa has made it seamlessly easy to play music or check the weather. What price are we paying for this frictionless world? Significant time and resources are dedicated to ensuring that a user doesn't have to push a button to play music or get up and go outside to check the weather. We are over-optimizing and automating everything for diminishing returns on convenience and minimal delight. There is an unmistakable connection between the AI we use every day for seemingly innocuous purposes and the AI being used in systems with much higher stakes, like determining how long prison sentences should be or which healthcare plan someone qualifies for. The embedded values are the same: optimization is always the objective, and humans spending less time on previously human tasks is assumed to be a positive change for society.
This sinister message is often obscured by delightful interface design. Technology companies like Amazon and Google can cloak AI in a smart speaker decoy and a friendly voice assistant. In the war for our attention, machine learning algorithms collect and harness our data to serve us highly targeted advertising, relevant to our needs, or individualized content that speaks to us.

In this work, I focus primarily on the use of conversational artificial intelligence. Because of its consumer-facing nature and its integration into many everyday products such as Alexa, Siri, and chatbots, it is easily digestible and an immensely powerful design tool. Currently, conversational AI is mainly used for purposes that live within the backdrop of capitalism. These are usually devices designed to increase consumption or make it more convenient, as is the case with Amazon Alexa, or to answer questions about a specific product, as with the virtual assistants available within banking apps. But because of conversational AI's unique nature and capacity for emulating human behavior and intelligence, it has significantly more potential than what we are currently using it for. We have a responsibility to develop AI's capacity to better understand our own humanity and to bring us closer together. The speculative projects that follow propose ways in which AI, specifically conversational AI, can be co-opted and re-prioritized to make us more human.
Figure 1.2 Illustration of AI
image via Pinterest (Envato)
DEFINING ARTIFICIAL INTELLIGENCE

Because of its immaturity as a technology and its rapid development, artificial intelligence today is loosely defined. It is a "general term that refers to hardware or software that exhibit behaviour which appears intelligent."3 While there are different schools of thought on what constitutes "real AI," this thesis defines artificial intelligence broadly. This definition is informed by MIT Technology Review, which states, "in the broadest sense, AI refers to machines that can learn, reason, and act for themselves."4

Natural Language Processing (NLP) and Conversational AI

There are many subfields of artificial intelligence, including machine learning, deep learning, image recognition, and natural language processing. This thesis focuses on the last subset: natural language processing (NLP), which is "focused on enabling computers to understand and process human languages, to get computers closer to a human-level understanding of language."5 You may have heard it referenced colloquially as "conversational AI."

Computers and machines are great at analyzing large amounts of structured data (think: database tables and organized financial records) and thinking in binary. However, human beings communicate with words, which are unstructured data; there are countless ways to string words together to convey the same meaning.

Example 1: I have to go to the store today, but do not want to because it is cold outside.
Alternative: I've to go to the store today, but don't want to because it's cold outside.
Alternative: It's cold outside. Therefore, I don't want to go to the store even though I have to.
Alternative: The weather is cold, so I don't want to go to the store.

We could do this forever. While humans adapt to understand different phrasings as they learn languages, machines have a much more difficult time. The phrases above are interchangeable to humans, but machines interpret them differently. Additionally, when idioms are used, machines understand them very literally (i.e. "That's the last straw!" or "I missed the boat on that one"). This is because there are no standardized techniques to process unstructured data.

NLP is the underlying technology that powers conversational AI applications such as speech-based assistants, chatbots, sentiment analysis, and market intelligence. IBM, a leader in developing AI, defines conversational AI as "a type of artificial intelligence that enables software to understand and interact with people naturally, using spoken or written language."6

Conversational AI has been prominently featured in films, such as the character HAL 9000 in 2001: A Space Odyssey and Samantha in Her. These two portrayals of AI are complete opposites: HAL 9000 illustrates AI as malicious, with the ability to kill, while Samantha manifests a more human and nuanced side of AI, the power to build intimate connection with other human beings. It is critical that we take note of the gendered nature of these AIs.

Figure 1.3 Illustration of natural language processing, via Pinterest
Figure 1.4 Ted talks with Samantha in the movie Her, via Mashable
Figure 1.5 HAL 9000 in 2001: A Space Odyssey, via Google

3 The State of AI: Divergence. MMC Ventures, in partnership with Barclays UK Ventures. 2019.
4 Hao, Karen. "Is this AI? We drew you a flowchart to work it out." MIT Technology Review. 10 November 2018.
5 Seif, George. "An easy introduction to Natural Language Processing." Medium. 1 October 2018.
6 Winchurch, Emily. "What's happening in conversational AI." IBM blog. 21 February 2019.
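To make the difficulty concrete, here is a minimal sketch in Python. It compares two of the paraphrases from Example 1 with exact string matching and then with a crude bag-of-words (Jaccard) overlap; the tokenizer, contraction handling, and similarity measure are deliberately simplistic stand-ins for what real NLP systems do, not an implementation of any particular product.

```python
# A minimal sketch of why unstructured language is hard for machines:
# exact string comparison treats the paraphrases from Example 1 as
# completely unrelated, while even a crude bag-of-words overlap
# (Jaccard similarity) recovers some of their shared meaning.
# Illustrative only; not how production NLP systems actually work.

import re

def tokens(sentence: str) -> set[str]:
    """Lowercase a sentence, expand a few contractions, and split into words."""
    sentence = sentence.lower()
    sentence = sentence.replace("i've", "i have").replace("don't", "do not")
    sentence = sentence.replace("it's", "it is")
    return set(re.findall(r"[a-z]+", sentence))

def jaccard(a: str, b: str) -> float:
    """Share of unique words the two sentences have in common."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

original = "I have to go to the store today, but do not want to because it is cold outside."
paraphrase = "It's cold outside. Therefore, I don't want to go to the store even though I have to."

print(original == paraphrase)                   # False: exact matching sees no relationship
print(round(jaccard(original, paraphrase), 2))  # ~0.68: large word overlap despite different phrasing
```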
A DESIGNER’S RESPONSIBILITY IN UNDERSTANDING TECHNOLOGY
Figure 1.6 Illustration of face recognition
image by Tanya Lobach via Pinterest
At this point, you may be wondering: what is a designer's role in the creation of artificial intelligence? Helen Armstrong, a machine learning designer and previous chair of AIGA, states,

"In the rooms [where AI is being created], designers often represent the humans. We think about users when we design for users. We're the only one in the room who are really doing that. We need to understand the technology to then go and defend those users. I'm not really sure who else will." 7

As many designers learn from the human-centered design methodology, we play a crucial role in advocating for human beings and end users. This is especially important when creating physical products and digital interfaces that people will interact with. Designers are the magicians making invisible AI algorithms visible. As Stan Lee's now oft-quoted wisdom reminds us through his own not-quite-better-than-human avatar Spider-Man, "With great power comes great responsibility." Helen Armstrong continues,

"When working with machine learning, [you are] opening up a world of manipulation. You are creating interfaces that understand humans as much as humans understand one another. If you're able to analyze that and respond accordingly, you have a huge power of manipulation. When you have that kind of ability to manipulate humans, some populations will be more impacted than others." 8

7 Direct quotation from primary qualitative interview with Helen Armstrong on 31 October 2018.
8 Direct quotation from primary qualitative interview with Helen Armstrong on 31 October 2018.

Designers must understand the role that technology plays in human interaction and larger society. My personal manifesto for designing interactions with AI is as follows:
MANIFESTO FOR DESIGNING WITH AI

1. We must acknowledge AI's strengths and weaknesses as a technology. Its strengths include simplification of large amounts of structured data, recognizing patterns, and following binary rules. Its weaknesses include its reliance on the data used to train its algorithms, the limits of binary thinking, difficulty understanding even basic nuance, and the lack of diversity in the field.

2. Technology is never neutral; AI is never neutral. It is a product of its creators, and if it is created by a homogeneous group, it will reflect the values and thought processes of that group. This is why diversity on AI teams is so important. Additionally, with the binary, rule-based nature of machine learning, there are always winners and losers.

3. AI should never replace human beings. Instead, it should augment the human experience. The best interactions are ones in which human beings and artificial intelligence work together to create a better outcome. An example comes from a Harvard Medical School and Beth Israel Deaconess Medical Center research team that developed a deep learning algorithm to make pathologic diagnoses more accurate. The algorithm alone was 92% accurate, almost equal to a human pathologist's typical 96% accuracy. With a pathologist and the algorithm working together, accuracy rose to 99.5%.9 (A back-of-the-envelope sanity check on these numbers follows after this list.)

4. Context and scale must always be considered. Designers are responsible for understanding how technology should be applied. They should always be asking, "Is AI the best solution, given the context and stakeholders involved?" Additionally, designers should always consider the scale of the technology being rolled out; how many people will it impact?

9 Prescott, Bonnie. "Better Together." Harvard Medical School. 22 June 2016.
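The sanity check promised in item 3: if we assume, purely for illustration, that the pathologist's and the algorithm's errors are independent and that the combined workflow only fails when both are wrong, the arithmetic lands close to the reported figure. This is a back-of-the-envelope sketch, not the study's actual method.

```python
# Back-of-the-envelope sketch of why pairing a human with a model can help.
# Assumes (unrealistically) that their errors are independent and that the
# combined workflow only fails when BOTH are wrong. Not the study's method.

human_accuracy = 0.96   # typical pathologist accuracy cited above
model_accuracy = 0.92   # deep learning algorithm accuracy cited above

p_both_wrong = (1 - human_accuracy) * (1 - model_accuracy)
combined_accuracy = 1 - p_both_wrong

print(f"{combined_accuracy:.3%}")  # 99.680%, in the same ballpark as the reported 99.5%
```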
" When working w learning, [you a up a world of ma You are creating that understand much as human one another." 20
with machine are] opening anipulation. g interfaces d humans as ns understand — Helen Armstrong AIGA Co-Chair and Machine Learning Designer
02. Research and Insights
What are experts worried about when it comes to artificial intelligence?
Figure 2.1 Photo of programming code, via Unsplash
LET'S TALK ABOUT THE DATA
A PENNY FOR YOUR THOUGHTS, BUT YOUR DATA IS FREE

In order to talk about artificial intelligence, we must understand how it is created. Data is the starting point of all AI. Data informs the creation of algorithms and how they learn and identify patterns. Over the last few years, mainstream media has finally caught on to the consequences of data mining. Since the Cambridge Analytica scandal surrounding the 2016 election, the world has become more attuned to the perils of data collection and how that data can be manipulated. This has created a backdrop of mistrust in technology, and rightfully so.
"The outcome is only as good as the data that you feed in." 10
Alex Sands Founder of Plasticity, Y-Combinator Alum
Recent data scandals in the technology sector, such as the Cambridge Analytica scandal and Facebook's September 2018 security breach, which affected 50 million users, have conveyed the extensive consequences of data misuse and the vulnerabilities in our existing systems. These events have raised ethical concerns and prompted public debate about the lack of regulation on collecting and using individuals' data. According to a Pew Research Center study, 51% of Americans think that large tech companies should be more regulated than they are now. Only 3% of Americans think these companies can be trusted all of the time, and only 25% think they can be trusted most of the time.11 Because data collection is such an integral part of creating artificial intelligence, we must be vigilant about ethical methods of data collection and the diversity of the data gathered.

10 Direct quotation from primary qualitative interview with Alex Sands on 26 September 2018.
But there is general apprehension around the rise of AI, from both AI experts and the general public, and it centers around a key question of our time: Will technology drive us apart or bring us closer together? This tension is central to the debate on artificial intelligence. I've broken these concerns down into six key issues.
Figure 2.3 Tipped scales of unrepresentative AI workforce
1. An Unbalanced AI Workforce
Figure 2.2 Illustration of Data Collection by Ewa Geruzel, via Pinterest
WHAT EXPERTS ARE WORRIED ABOUT
HELLO HAL, DO YOU READ ME, HAL?

While Hollywood productions and mainstream media have stirred the pot on the fear of killer robots, this is not the main issue that we as humans should be concerned about with artificial intelligence. According to experts, we are a long way away from Westworld or Ex Machina.
In the U.S., the gender breakdown of AI researchers is estimated to be 86.57% men vs. 13.43% women.12 This unrepresentative workforce is one of the key factors that leads to extreme bias in machine learning, a phenomenon that has been deemed "the coded gaze" by MIT Media Lab researcher and Algorithmic Justice League founder Joy Buolamwini. She describes "the coded gaze" as a "reflection of the priorities, the preferences, and also sometimes the prejudices of those who have the power to shape technology," with "those" meaning, by and large, affluent white men.13 A prominent example in Buolamwini's work has been the identification of a major blind spot in facial recognition software, such as Google's photo application labeling black people as gorillas. She highlights that this same extremely biased technology is used in contexts such as law enforcement, border control, and hiring, which could exacerbate society's existing inequalities.

12 Mantha, Yoan. "Estimating the Gender Ratio of AI Researchers Around the World." Element AI Research via Medium. 17 August 2018.
13 Buolamwini, Joy. "The Coded Gaze." Algorithmic Justice League.
Samir Saran and Madhulika Srikumar of the Observer Research Foundation note the consequences of this bias, writing, "The machines of tomorrow are likely to be either misogynistic, violent, or servile." This stems from the intentional design of AI interfaces and experiences by a homogeneous group: "a sea of dudes," as Microsoft researcher Margaret Mitchell has described it. A prominent example of gender bias is Amazon's secret AI hiring tool. According to a Reuters report, "It penalized resumes that included the word 'women's,' as in 'women's chess club captain.' And it downgraded graduates of two all-women's colleges, according to people familiar with the matter." Though Amazon caught this bias and scrapped the algorithm, we need to ask: how many similar algorithms are being developed?
Figure 2.4 Governance Illustration via Pinterest

2. Lack of Governance on AI

Because of its relative infancy as a technology, there is very little regulation around the development of artificial intelligence. Over the last year or so, mainstream media has begun publishing reports on ethical and moral issues with biased algorithms and the unintended consequences of AI applications. Lawmakers around the world have been discussing how best to take action. In the U.S., in April 2019, Congress introduced a new bill called the Algorithmic Accountability Act, one of the country's first major efforts to regulate AI. This bill14 would require large companies to audit their machine learning systems for bias and discrimination and, if bias were found, to take corrective action in a timely manner.15 However, because of technology's ability to permeate national borders, lawmakers and technology ethicists around the world have called for global guidance on how to ethically build AI. One recommendation comes from Mark Latonero of Data & Society, who suggests using international human rights as a "North Star" to guide the development of the technology. Human rights, in this case, refers to the Universal Declaration of Human Rights, adopted by the United Nations General Assembly in 1948. The report offers several recommendations, including a call to action for technology companies and researchers to conduct Human Rights Impact Assessments (HRIAs) throughout the lifecycle of their AI systems. It also places responsibility on governments to acknowledge human rights obligations and to protect fundamental rights in "AI policies, guidelines, and possible regulations."16
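What might the most basic form of the audit such a bill envisions look like? The sketch below compares a model's selection rates across two demographic groups and reports a disparate-impact ratio, a figure often checked against the "four-fifths" rule of thumb from U.S. employment guidelines. The data, group labels, and threshold are entirely hypothetical; real audits examine many more metrics.

```python
# A deliberately simplified bias audit: compare a model's selection rates
# across demographic groups and flag a large gap. Hypothetical data; real
# audits also examine false positive rates, calibration, and more.

from collections import defaultdict

# (group, model_decision) pairs: hypothetical outcomes from a screening model
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, decision in outcomes:
    total[group] += 1
    selected[group] += decision

rates = {g: selected[g] / total[g] for g in total}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                   # {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, far below the 0.8 rule of thumb
```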
3. Biased Data and Algorithms: Garbage In, Garbage Out

In an article for the World Economic Forum, Alison Kay, Global Vice-Chair of Industry at Ernst & Young, states, "With the rise of artificial intelligence (AI) and machine learning, there is a real risk that we 'bake in' prevalent biases into the future. AI and machine learning are fuelled by huge volumes of existing data." She points out the problem with how algorithms learn today: they rely heavily on historical data. This means that the biases and inequalities that have played out in history are automatically ingrained in these new technologies.
14 "Algorithmic Accountability Act." Ron Wyden, U.S. Senate. 116th Congress, First Session.
15 Hao, Karen. "Congress wants to protect you from biased algorithms, deepfakes, and other bad AI." MIT Technology Review. 15 April 2019.
16 Latonero, Mark. "Governing Artificial Intelligence: Upholding Human Rights & Dignity." Data & Society. 10 October 2018.
" We have a tendenc designing in neutr fairly. But in fact, d neutral gives us no deal with the actu valleys and twists the landscape we deeply riven with i 26
cy to think ral is designing designing in o gears to ual hills and and turns of live in, which is inequalities." — Virginia Eubanks Author, Automating Inequality 27
In a conversation on "Austerity, Inequality, and Automation" at New York University's AI Now Symposium, Virginia Eubanks discusses the dangerous integration of algorithms into contexts such as social services and welfare. The algorithms have a triage effect that deems certain populations more "deserving" than others of release from detention or "eligible" for benefits like hospice care. She posits, "We have a tendency to think that designing in neutral is designing fairly. But in fact, designing in neutral gives us no gears to deal with the actual hills and valleys and twists and turns of the landscape we live in which is deeply riven with inequalities. So it's like building a car with no gears, sitting it at the top of a hill in San Francisco and then being surprised when it crashes at the bottom." It is a call to action for equity instead of equality.
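Eubanks' point can be made concrete with a toy example: a rule that looks "neutral" because it never mentions group membership can still reproduce historical inequality when the feature it relies on (here, a zip code) is a proxy for that membership. All data in this sketch is invented.

```python
# Toy illustration of "garbage in, garbage out": a rule that never mentions
# group membership still reproduces historical inequality, because zip code
# acts as a proxy for it. All data here is invented.

from collections import defaultdict

historical = [
    # (zip_code, group, approved): biased past decisions
    ("10001", "group_a", True),  ("10001", "group_a", True),
    ("10001", "group_a", True),  ("10001", "group_a", False),
    ("10453", "group_b", False), ("10453", "group_b", False),
    ("10453", "group_b", True),  ("10453", "group_b", False),
]

# "Neutral" rule learned from history: approve if the zip code's past
# approval rate is at least 50%. Group membership is never consulted.
approved = defaultdict(int)
seen = defaultdict(int)
for zip_code, _, was_approved in historical:
    seen[zip_code] += 1
    approved[zip_code] += was_approved

neutral_rule = {z: approved[z] / seen[z] >= 0.5 for z in seen}
print(neutral_rule)  # {'10001': True, '10453': False}: history repeats itself
```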
Figure 2.5 Alexa Interaction Illustration by Sergio Baradat for New York Times via Pinterest
4. Designing Interfaces that Perpetuate Racial and Gender Biases

Ann Cairns, Vice Chairman of Mastercard, points out the importance of user experience when designing interfaces for AI. She highlights the perpetuation of stereotypes: "We also need to think about the user-facing experience of AI and the inherent sexism of personal assistants and chatbots that are, by and large, women...Often this is defended by consumer research demonstrating that consumers simply prefer a female voice, but how much of this is due to gender stereotypes around the types of role we use assistants for?"17 In this regard, designers have exceptional power over key decisions with large-scale influence. Designers must understand that every choice made in the creation of an interface has embedded values within it.

17 Cairns, Ann. "Why AI is Failing the Next Generation of Women." World Economic Forum. 18 January 2019.
“We also need to think about the user-facing experience of AI and the inherent sexism of personal assistants and chatbots that are, by and large, women.” Ann Cairns Vice Chairman, Mastercard
5. Black Boxes and Weapons of Math Destruction

Many experts talk about placing too much trust in algorithms; people do not know the process behind how algorithms make decisions and arrive at answers. This phenomenon is known as the "black box," also called "unexplainable AI." In her professional career as an academic mathematician turned Wall Street quant, Cathy O'Neil has experienced firsthand how algorithmic bias leads to financial crises, redlining of opportunities (i.e. college acceptance rates), and the exacerbation of inequalities for marginalized groups. She cites three main variables that help define a "weapon of math destruction" (WMD): opacity, scale, and damage. Opacity is the "black box" phenomenon that is so commonly found in machine learning algorithms. Many times, people do not understand how a specific algorithm transforms designated input into resulting output, and most people do not question it. Scale refers to the size of the system in which the algorithm is implemented. And damage refers to the consequences for the groups affected.
6. Automation and the Replacement of Humans

Automation and artificial intelligence will greatly disrupt the current state of the workforce. AI expert Kai-Fu Lee believes that 40% of the world's jobs will be replaced in the next fifteen years by robots capable of performing the same tasks. Both blue- and white-collar jobs will be affected.18

This research of the AI landscape served as the backdrop for the creation of my design provocations.

18 Reisinger, Don. "A.I. Expert Says Automation Could Replace 40% of Jobs in 15 Years." Fortune. 10 January 2019.
03. Overview of Methodologies
What methods were used in the creation of this work?
Figure 3.1 Photograph from co-creation workshop
PROCESS

VARIETY OF RESEARCH METHODOLOGIES

A variety of methodologies was used in the creation of this work, including interviews with numerous subject matter experts, user research, and a cross-industry co-creation workshop. From the insights gathered, four speculative design provocations were created that prioritize the value of emotional honesty.

This work interrogates the status quo of the AI space and suggests pathways to use intersectional thinking to imagine new applications for AI in the psychology space. Ultimately, it intends to bridge the ever-widening gap between technology and humanity.

BACKGROUND

INTEGRATION OF PREVIOUS EXPERIENCES

This work is informed by my previous academic pursuits and personal lived experiences. Drawing from my undergraduate degree in psychology, I have utilized my knowledge of developmental psychology, cross-cultural psychology, abnormal psychology, and learning theory.

From my own journey through depression and anxiety, I have also experienced the care of clinical practitioners through fourteen years of psychotherapy, and have therefore observed first-hand what happens in the clinical practice of therapy. Both this academic foundation and this firsthand experience informed my design research and the creation of my design prototypes.
Figure 3.2 Systems Map of Artificial Intelligence, outlining key stakeholders and opportunities for design intervention
EXPLORING THE DOMAIN
UNDERSTANDING ARTIFICIAL INTELLIGENCE
Before diving into how to manipulate artificial intelligence with specific design decisions, it was imperative to understand what AI is and how it is created. Who are the core stakeholders building this technology? Where are the opportunities we can identify for intervention? Creating a simplified systems map helped me boil down and define AI at a base level.
SECONDARY RESEARCH
BOOKS, SCI-FI, AND POPULAR MEDIA

Secondary research included reading books, white papers, and other publications by machine learning engineers, mathematicians, technology researchers, and international organizations. Additionally, to gain a better understanding of popular media's portrayal of AI, I watched a large number of sci-fi films and series.
THEORY OF CHANGE
IDENTIFICATION OF INTERVENTION SPACES

To begin to understand the systems at play, a theory of change was constructed through the creation of a transformation map. Then, the problem was diagnosed at a systemic level before drilling down into a more specific, individual acute problem. See diagrams on the next two spreads.
Figure 3.3 Theory of Change / Transformation Map, identifying possible intervention spaces
Figure 3.4 Problem Definition Hypothesis Diagram
PRIMARY RESEARCH
DISCUSSIONS WITH SUBJECT MATTER EXPERTS

I had the opportunity to talk with over 30 subject matter experts on the topics of artificial intelligence, bias, inclusivity, and technological socialization. This included conversations with machine learning engineers, ethical technologists, behavioral and developmental psychologists, activists, and PhD candidates who bring intersectional thinking into the world of artificial intelligence.
PRIMARY RESEARCH
CO-CREATION WORKSHOP

A co-creation workshop was designed and facilitated to explore the embedded values in commonly used AI-powered products and services. On November 18, 2018, thirteen professionals from across seven industries gathered to discuss the future of artificial intelligence. For more, see page 70.
TALKING WITH REAL PEOPLE
USER RESEARCH

User research included qualitative surveys distributed publicly through intercept interviews and digital channels, a public design intervention, and qualitative interviews. See more on the public design intervention on page 104.
04. Initial Sacrificial Concepts
Using provocative prototypes ("provotypes") to spark discussion, debate, and feedback from users
Figure 4.1 Adapted version of survey data from Gallup and Northeastern University, via The New York Times19
INITIAL DESIGN FRAMEWORK
THE INTERSECTION OF AI, RACE, AND GENDER
Initially, I had planned to focus on the most common uses of artificial intelligence, exploring their obscured and embedded values. I was interested in how these products and services would, at scale, socialize our society by perpetuating existing racial and gender biases, and how they would impact the mental models of how we perceive one another. One example of this was the impact of home personal assistants on children: deliberate design choices in the creation of these assistants (i.e. Alexa's female voice) could affect the way that children learn to perceive women.

For my early-stage prototypes, I focused on pushing three commonly used existing products and services through the lenses of gender and race, and sometimes the intersection of both.

19 Northeastern University and Gallup survey via The New York Times, "Most Americans See Artificial Intelligence as a Threat to Jobs (Just Not Theirs)." 6 March 2018.

HOME VOICE ASSISTANTS

ROOKEE: VERSION ONE

According to a September 2018 Nielsen report, nearly a quarter of American households own a smart speaker.20 As of 2017, Amazon significantly leads the race for market penetration, holding almost 72% of the installed smart speaker base.21 The use of smart speakers equipped with voice assistants has become quotidian; 42% of smart speaker owners say that their smart speakers are essential to their everyday lives, while 44% agree that having a smart speaker has helped them spend more time with people in their household.22

20 "(Smart) Speaking My Language: Despite Their Vast Capabilities, Smart Speakers Are All About The Music." Nielsen Insights. 27 September 2018.
21 Smart Speaker Consumer Adoption Report. Voicebot.ai. Sponsored by Rain and Pullstring. March 2018.

Figure 4.2 Schematic of Rookee: Version One, which consists of bluetooth googly eyes that can be attached to an Amazon Echo
“Today’s toddlers are the first generation to grow up without any memory of the world before ubiquitous artificial intelligence devices in homes." 23
Smart speakers have become integrated into family life. Though the majority of owners use them for playing music or searching for real-time information such as traffic and weather, their seemingly banal usage can have profound impacts on children growing up with this technology.

22 The Smart Audio Report. National Public Radio and Edison Research.
23 Elgan, Mike. "The case against teaching kids to be polite to Alexa." 24 June 2018.
Children take for granted the human-like digital voice assistants found in their homes. They interact with them curiously, asking them questions and commanding them to play music. Having the ability to freely give orders to a digital voice assistant with the voice of a woman can have consequences. In a psychological study at the University of Washington, researchers concluded, "Because these robots can be conceptualized as both social entities and robots, children might dominate them and reify a master-servant relationship."24

These potential consequences inspired the creation of the first version of Rookee. This initial prototype consists of a parent-facing app connected to a child-facing smart object, which comes in the form of bluetooth googly eyes that can be attached to an Amazon Echo device. The googly eyes attached to the Echo appear superficially adorable, but they actually serve a significantly higher purpose: they humanize the device. Using Alexa's already built-in natural language processing, they respond to the child's speech and tone. If the child is rude, Rookee will roll its eyes. If the behavior continues, Alexa will become angry and eventually lock until the child apologizes (a simplified sketch of this logic follows below). Because this adds a level of interaction and accountability for the child, it transforms Alexa's personality to be more responsive, counteracting the bias that women should be obedient. The parent-facing data dashboard includes month-over-month progress of the child's behavior, while also visualizing daily interaction patterns.

I continued this exploration at a later time, which you can find on Page 66. But after delving into further research on children's behavior with voice assistants, it seems that focusing on children being "polite" is a "red herring" distracting from other, more important issues.

24 Kahn, Peter H., Heather E. Gary, and Solace Shen. "Children's Social Relationships With Current and Near Future Robots." University of Washington. Child Development Perspectives via Wiley Online Library. 6 December 2012.
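Below is a minimal sketch of the escalation logic described above. It is illustrative only: the keyword lists are invented stand-ins for real tone and intent detection, and "locking" the device is a hypothetical Rookee behavior, not an existing Alexa capability.

```python
# Minimal sketch of Rookee's escalating response to rude commands.
# Illustrative only: the keyword check stands in for real tone detection,
# and "lock" is a hypothetical action, not an existing Alexa capability.

RUDE_MARKERS = {"shut up", "stupid", "now!", "hurry up"}   # invented examples
POLITE_MARKERS = {"please", "thank you", "sorry"}

class Rookee:
    def __init__(self) -> None:
        self.rude_streak = 0
        self.locked = False

    def respond(self, utterance: str) -> str:
        text = utterance.lower()
        if self.locked:
            if any(m in text for m in POLITE_MARKERS):
                self.locked, self.rude_streak = False, 0
                return "apology accepted: unlock and resume"
            return "stay locked until the child apologizes"
        if any(m in text for m in RUDE_MARKERS):
            self.rude_streak += 1
            if self.rude_streak >= 2:          # repeated rudeness escalates
                self.locked = True
                return "act angry and lock until the child apologizes"
            return "roll eyes"
        self.rude_streak = 0
        return "respond normally"

rookee = Rookee()
for line in ["Play music now!", "I said play it, stupid", "I'm sorry"]:
    print(rookee.respond(line))
# roll eyes
# act angry and lock until the child apologizes
# apology accepted: unlock and resume
```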
RETHINKING RIDESHARING

HUES: A RIDESHARING COMMUNITY FOR WOMEN
Figure 4.3 Illustration of the pink tax by NY Gal, via Pinterest

The pink tax is a form of price-based gender discrimination, forcing women to pay more for everyday products and services. This pink tax exists within transportation and mobility. According to NYU's Rudin Center, women in NYC spend on average an extra $26 to $50 on transportation per week for safety reasons.25 Many choose to use taxis or a ridesharing service. Unfortunately, both drivers and riders experience sexual harassment because these companies prioritize profit over safety. Typically, women's priorities, such as safety and security, are left out of the equation when building technologically focused solutions.

Bloomberg Technology spotlights "the dark realities women face driving for Uber and Lyft," highlighting the detrimental effects of the sexual harassment that female drivers experience, and how it negatively impacts their paychecks and economic livelihoods.26 A 2018 CNN investigation uncovered that 103 Uber drivers had been accused of sexual assault or abuse.27 This is not surprising, given that both Uber and Lyft skimp on background checks, using more lax standards than the traditional taxicab industry does.

This deprioritization of safety is an example of history repeating itself within the transportation space. Women are frequently put at risk on the roads. When a woman is in a car crash, she is 47% more likely to be seriously injured and 71% more likely to be moderately injured, even when researchers control for factors such as height, weight, and seatbelt usage. This discrepancy is a result of how cars are designed, and for whom.28 In this same report for The Guardian, Caroline Criado-Perez writes, "Designers may believe they are making products for everyone, but in reality they are mainly making them for men. It's time to start designing women in."

25 "The Pink Tax on Transportation: Women's Challenges in Mobility." Rudin Center, New York University. November 2018.
26 Wang, Selina. "The Dark Realities Women Face Driving for Uber and Lyft." Bloomberg: Technology. 18 December 2018.
27 O'Brien, Sara Ashley, Nelli Black, Curt Devine, and Drew Griffin. "CNN investigation: 103 Uber drivers accused of sexual assault or abuse." 30 April 2018.
28 Criado-Perez, Caroline. "The deadly truth about a world built for men – from stab vests to car crashes." The Guardian. 23 February 2019.
" Designers may b are making prod everyone, but in are mainly maki for men. It's tim designing wome 48
believe they ducts for n reality they ing them me to start en in." — Caroline Criado-Perez Co-Founder, The Women's Room
SHIFTING THE SYSTEM
PRIORITIZING SAFETY OVER PROFIT

What if ridesharing had been designed by women? I imagined that the priorities would shift from maximizing revenue to thinking through the unintended consequences for safety, and to creating a business model that moves safety to the front seat. To explore this idea, I envisioned a ridesharing community for women called HUES.

HUES features lower rates for users at certain times, such as after dark, during which women may feel less safe. With HUES, safety becomes a feasible choice. Because this is a women-forward community, it attracts women drivers. HUES drivers are known for being multilingual to serve diverse communities, as the company specifically recruits women of color. HUES is accountable for users' safety. At the end of a user's trip, it asks if they arrived home safely, a question women are often asked by their friends and family. This feature would be supported by a 24/7 customer service line.
ADDRESSING BARRIERS
ACCESSIBILITY AND AFFORDABILITY

While most ridesharing vehicles congregate in financial centers to maximize profit, HUES' algorithm is optimized to financially reward drivers who purposely prioritize neighborhoods underserved by transit, where there is often significant demand for rides but not enough supply of drivers. This is designed to address both affordability and accessibility; a simplified sketch of these pricing and incentive rules follows below. Of course, some of these ideas may sound idealistic and not grounded in the existing reality of today's male-dominated capitalist framework, so I worked through an initial business model canvas to address unanswered questions.
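The sketch promised above illustrates the two pricing rules described for HUES: a rider discount after dark and a driver bonus for pickups in neighborhoods underserved by transit. HUES is speculative, so every rate, threshold, and neighborhood score here is hypothetical.

```python
# Sketch of two speculative HUES pricing rules: an after-dark rider discount
# and a driver bonus for serving transit-underserved neighborhoods.
# HUES is a design fiction; every rate and threshold below is hypothetical.

BASE_FARE_PER_MILE = 2.00
AFTER_DARK_DISCOUNT = 0.25        # 25% off for riders between 8 p.m. and 5 a.m.
UNDERSERVED_BONUS = 0.30          # 30% bonus paid to drivers in low-transit areas

# Hypothetical transit-access scores by neighborhood (0 = no transit, 1 = excellent)
TRANSIT_ACCESS = {"downtown": 0.9, "financial_district": 0.95, "outer_ward": 0.3}

def rider_fare(miles: float, hour: int) -> float:
    fare = miles * BASE_FARE_PER_MILE
    if hour >= 20 or hour < 5:            # after dark, when riders may feel less safe
        fare *= 1 - AFTER_DARK_DISCOUNT
    return round(fare, 2)

def driver_payout(fare: float, pickup_neighborhood: str) -> float:
    payout = fare * 0.80                  # hypothetical driver share of the fare
    if TRANSIT_ACCESS.get(pickup_neighborhood, 0.0) < 0.5:
        payout *= 1 + UNDERSERVED_BONUS   # reward serving underserved areas
    return round(payout, 2)

fare = rider_fare(miles=4.0, hour=22)     # late-night trip
print(fare)                               # 6.0
print(driver_payout(fare, "outer_ward"))  # 6.24: the bonus offsets the rider discount
```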
Figure 4.6 Business Model Canvas for the HUES ridesharing service
This was an initial exploration and has not been completely refined, but in this version, the model involves empowering marginalized groups who face high unemployment and who are usually left out of the gig economy. While this prototype did receive positive feedback from potential users, it was not necessarily the direction that I wanted to go in. I do not believe that we can create a better world just by attempting to implement the opposite of what currently exists. Providing an alternative outside the system does not necessarily address the problems of the system itself. We can be more creative in thinking of alternatives.

Additionally, several questions arise with the creation of HUES. Would it even be possible to create a ridesharing community specifically for women? Title VII of the Civil Rights Act of 1964 prohibits discrimination on the basis of sex, among other characteristics. Other precedents exist for companies that have attempted to be female-first. One example is The Wing, a company that markets itself as "a network of community & work spaces designed for women." It has faced its fair share of legal battles.

A SYSTEMIC SHIFT WITH SPECULATIVE DESIGN
TRAFFIC LIGHT 2.0: AI ON THE ROAD

Currently, AI is used in many ways on your daily commute. Smart cameras use image recognition to monitor traffic patterns. AI-powered apps like Google Maps and Waze help you beat traffic by collecting large amounts of traffic data, finding patterns, and using algorithmic optimization.

On the road, there is an inherent triage effect that takes place. During rush hours, there is usually limited physical space available on streets to navigate through to a destination, especially to business centers, financial hubs, or technological hotspots. This creates a bottleneck in which only certain people are able to pass through. In these situations, aggression is frequently used to ensure that a driver makes it through first. Those privileged enough to squeeze in automatically put the others at a disadvantage.

Using this phenomenon to our advantage, what if we used the daily commute as a way to counteract the overwhelming male bias in the AI industry? I brought this question to life with a speculative design product. This redesigned traffic light segregates drivers according to the limited, binary construction of gender: female and male. It would be located on on-ramps before entering the highway. It flips gender privilege by giving female drivers the advantage of going first, while male drivers are required to wait ten seconds for their turn. While the idea may seem nonsensical, it gives women one small advantage in a world designed by and for men, in which the majority of systemic privileges unjustly cater to men. Specifically, for its first phase, I envisioned it being implemented on I-280 to Silicon Valley, where a great deal of technology, including artificial intelligence, is being developed. Rolled out system-wide, we can imagine the HOV lane being converted into an FOV lane. After all, we are simply giving men what they want. Since so many of them complain about women being horrible drivers anyway, why not separate women drivers into their very own lane?
There have been reports of massive slowdowns on I-280 South in the Bay Area, heading from San Francisco to Cupertino and the rest of Silicon Valley. Traffic has been backed up for miles due to the implementation of a new on-ramp signal. This new traffic light separates female and male drivers into two distinct entry times. Female drivers are permitted to enter the freeway first and drive in a new dedicated FOV lane (female-occupancy vehicle lane), while male drivers will experience an 8-10 second delay and be forced to join the rest of the congested traffic pattern.

Traffic to Silicon Valley has exploded over the past decade, but these traffic lights have exacerbated it to a new level. This has left office floors empty, even by noon. Somehow, all female tech workers are still arriving at work early or on time. It seems these traffic lights have made the gender gap in the tech industry even more visible. This is especially true in the artificial intelligence divisions of major tech companies, in which the gender gap is 85% male vs. 15% female.

Strong reactions on social media have surfaced with the rise of a new #methree campaign. Garrett A., an Apple software engineer, tweeted, "Late again to work today. If this isn't workplace discrimination, I don't know what it is. #methree." Ryan Benson, an A.I. researcher at Google, expressed, "I'm a nice guy...so why are they punishing all of us? #notallmen #methree."
Figure 4.7 Illustrated Sketch of HOV lane being converted to an FOV lane (Female Occupancy Vehicle)
05. A Shift in Thinking
To what extent was I upholding the very system I wished to critique?
Figure 5.1 Photograph of ceiling, portraying breaking through the black box of AI, image via Unsplash
THINKING OUTSIDE THE BLACK BOX

After creating my initial prototypes, I received critique from roboticist and industrial designer Matthew Borgatti, who asked whether I wanted to tie all of my designs to existing products and services. At the beginning, I had believed that if I grounded my prototypes in products and services like Amazon Alexa or Uber, people would have an easier time grasping my ideas. But after our conversation, I realized that I was constricting myself within existing frameworks, which was the exact opposite of what I wanted to do. To what extent was I upholding the very system I wished to critique?
I revisited my notes from conversations with subject matter experts at the beginning of this thesis journey and two points stood out. The first:
“AI is unregulated and not thought out. We’re changing the world around us—possibly without understanding the world around us. In some cases, it’s irretrievable.” 29
Jennifer Mankoff Professor of Human Computer Interaction, University of Washington
29 Direct quotation from primary qualitative interview with Jennifer Mankoff, conducted on Friday, 5 October 2018.
"We have designed ourselves into this mess. We are perfectly capable of designing ourselves out."

Jennifer Rittner
Principal, Content Matters

These quotes suggest that we are at a pivotal moment in the development of artificial intelligence. Jennifer Mankoff suggests that, given the lack of governance, it might already be too late for AI: there are detrimental algorithms in place that will unfairly determine the future of humanity, disproportionately affecting historically marginalized groups.

Humans are behind all of this. And we, as humans, can change. Artificial intelligence can be a catalyst for that change. But we cannot leave it to large technology companies and the extremely unbalanced workforce of machine learning engineers.

A MUCH-NEEDED SYSTEMIC SHIFT

FROM TECHNOLOGY-CENTERED DESIGN TO HUMAN-CENTERED DESIGN

In the age of Silicon Valley and technological entrepreneurship, there is a tendency to use technology as a blanket solution for wicked problems. This, to me, is highly irresponsible. There are many cases of individuals and companies integrating technology into situations in which it does not make sense to use technology. We see this in philanthropic causes with seemingly good intentions, such as the fatally imperialistic and opportunistic nonprofit One Laptop per Child (OLPC), created by MIT Media Lab founder Nicholas Negroponte. In 2005, when the nonprofit was founded, laptops were inaccessible to many because of their high cost; some politicians deemed this "the digital divide." OLPC touted the mission of closing this divide by distributing a lower-cost laptop to developing countries, colloquially deemed "the green machine" or "the $100 laptop." Unfortunately, the utopian hype morphed into a manifest destiny of western technology; it became all about distributing computers instead of addressing any real problem of class. OLPC is a perfect example of technology-centered design gone wrong; it is "a symbol of tech industry hubris, a one-size-fits-all American solution to complex global problems."30

Figure 5.2 Broken OLPC Laptop, image via The Verge

30 Robertson, Adi. "OLPC's $100 laptop was going to change the world. Then, it all went wrong." The Verge. 16 April 2018.

We must be vigilant in illuminating this attitude of "tech industry hubris" in applications using any field of artificial intelligence. When I asked Ben Green, a PhD candidate in Applied Math at Harvard's School of Engineering and Applied Sciences, what he was most fearful of concerning AI, he said, "We risk seeing the world through the lens of technology…and putting machine learning as a band-aid on all of these problems."
political extremism (Hong & Kim, 2016),32 and fuel the "echo chamber" phenomenon.
What’s needed is a shift from technology-centered design to human-centered design. While I don’t always agree with IDEO’s methodologies, the basic intentions behind their “human-centered design” approach marks the first steps of much-needed change in creating technological solutions to address human issues. IDEO writes that humancentered design is “all about building a deep empathy with the people you’re designing for; generating tons of ideas; building a bunch of prototypes; sharing what you’ve made with the people you’re designing for; and eventually putting your innovative new solution out in the world.” This iterative process should be applied to technological solutions.
Again, the question arises: will technology drive us apart or bring us closer together? This tension is central to the debate on artificial intelligence. In a 2014 interview with BBC, theoretical physicist Stephen Hawking warns, “the full development of artificial intelligence could spell the end of the human race.” But Sundar Pichai, CEO of Google, addresses controversies about AI at a town hall in San Francisco in January 2018, claiming “AI is one of the most important things humanity is working on. It is more profound than…electricity or fire.”
To begin to imagine new applications for AI, we must be mindful of the contexts and predetermine the unintended consequences of adding technology into the equation. Technology and innovation are a double-edged sword, with benefits and consequences. It is important to think through both.
Humanity's Relationship with Technology Many people would argue that technology is driving us apart and impacting human behavior and relationships in detrimental ways. Numerous studies point to the negative impact of smartphones on mental health. Psychological studies have uncovered the alarming correlation between smartphone addiction and depression (Alhassan et al, 2018).31 In addition to harmful effects on mental health, other psychological studies highlight the effects of social media on political polarization. From a quantitative stance, politicians with more extreme ideological positions have more Twitter followers. Additionally, social media may contribute to heightened levels of Alhassan, Aljohara A, et al. “The Relationship between Addiction to Smartphone Usage and Depression among Adults: a Cross Sectional Study.” BMC Psychiatry, BioMed Central, 25 May 2018, www.ncbi.nlm.nih. gov/pmc/articles/PMC5970452/. 31
THE KEY QUESTION
WHAT IF AI CAN MAKE US MORE HUMAN?

Because of artificial intelligence's capacity to emulate human behavior and intelligence, it can be used as a tool to better understand our own psychology and make us more human. In many ways, AI is a mirror of ourselves. In this exploration, I focus on emotional connection because as humans, emotion is an essential part of our existence. What if AI could make us more emotionally connected to ourselves and each other? To explore this central thesis question, I use a tool called speculative design to create prototypes and generate ideas. Designers use it to speculate about possible futures. It can be interpreted as the intersection of design and science fiction, hinging on the power of the question, "What if?"
What if artificial intelligence can make us more human?
A More Emotional AI (Literally)
What if AI itself were more emotional?
Figure 6.1 Photograph of ceiling, portraying breaking through the black box of AI, image via Unsplash
A KEY QUESTION

DOES AI ITSELF NEED TO BE MORE EMOTIONAL?

As a starting point, I explored a more emotional AI in a literal way. I needed to answer the question: if AI can help us feel more emotionally connected to ourselves and each other, does AI itself need to be more emotional?

PROJECT BACKGROUND

ALEXA, HOW BIG IS YOUR FAMILY?

Rookee is intentionally tied to the Alexa ecosystem, capitalizing on Alexa's existing scale and adoption. Smart speaker penetration has exploded over the last year; at the end of 2018, almost 41% of Americans owned a smart speaker. These speakers, as well as other devices, are programmed with speech-based assistants such as Amazon Alexa or Google Home Assistant.
I revisited my early prototype Rookee, in which Bluetooth googly eyes were added to an Amazon Echo to help Alexa exhibit human emotion. The following research and iteration build on the previous prototype, found on page 46. As a reminder, because of its existing scale and adoption, Amazon Alexa was used as the focal point for this investigation.
Using natural language processing, these voice assistants are able to understand human speech—specifically, commands. On the surface, they are created to be servile and to promote convenience within everyday life. However, they have also infiltrated the traditionally private realm of the home by acting as disguised beacons for data collection. Because Amazon is drastically leading the way with 70% of smart speaker market share, I chose Alexa as the focal point for my investigation into the embedded values behind these assistants. According to a TechCrunch report, there are now over 100 million Alexa-enabled devices installed in the U.S.33 The scale of this technology has the potential to be a socializing force. Knowing this, we must be vigilant and aware of the consequences.

29 Direct quotation pulled from primary qualitative interview with Jennifer Mankoff, conducted on Friday, 5 October 2018.
According to Amazon’s developer website, “A voice user interface (VUI) allows people to use voice input to control computers and devices.”35 Let’s examine the assumption that people should be able to control computers and devices. As people, we want to establish some type of hierarchy between human beings and the technological devices we create. But what happens when there are specifically defined characteristics given to Alexa?
While the majority of Alexa users use the assistant for seemingly innocuous and delightful applications such as playing music, checking the weather, or asking random questions, let's dig deeper into how these interactions actually work.
Using Voice Interface as a Point of Interaction

A voice user interface (VUI) allows users to interact with a system using voice or speech commands.34 The primary advantage of VUI is that it enables users to provide limited attention to the system. They do not need to maintain visual or kinesthetic attention to the task at hand. In many ways, it is like having a meaningless conversation with an assistant whom one may pay to organize a schedule or deliver on simple tasks. Because this spoken communication is something that comes naturally to many human beings, users have a tendency to use a typical interpersonal communication style and interact with these devices using human speech and language. But there is a significant disconnect between how a human being would respond versus how a VUI responds.
Figure 6.2 Photo of Amazon Echo with Alexa, photo via Unsplash
Alexa's Voice

Alexa's voice sounds female. Of course, it is intentionally designed this way. In a conversation with Business Insider, Daniel Rausch, the head of Amazon's "smart home" division, stated, "We carried out research and found that a woman's voice is more 'sympathetic' and better received."36 Amazon progressed with this female voice in hopes that it would inspire its users to make more purchases in a non-threatening, helpful way—much like a secretary would do for their manager. But what are the unintended consequences of Alexa having a female voice? It perpetuates ingrained gender bias about women and their servility. Ann Cairns, Vice Chairman of Mastercard, highlights the perpetuation of stereotypes. She writes that we "need to think about the user-facing experience of AI and the inherent sexism of personal assistants and chatbots that are, by and large, women."37 In this regard, designers have exceptional power over key decisions with large-scale influence. Designers must understand that every choice made in the creation of an interface has embedded values within it.

33 Perez, Sarah. "Smart speakers hit critical mass in 2018." TechCrunch. January 2019.
34 "Voice User Interfaces." Interaction Design Foundation.
35 "What Is a Voice User Interface (VUI)?" Amazon Alexa.
36 Schwär, Hannah, and Ruqayyah Moynihan, in conversation with Daniel Rausch. "There's a clever psychological reason why Amazon gave Alexa a female voice." Business Insider.
Alexa's Name

Rausch also shared that "Alexa is a reference to the library of Alexandria. In antiquity, it was a library that could answer any question and hosted all the collective knowledge of the world at that time."

What does it mean to have the collective knowledge of the world at that time? Whose knowledge did the library hold? If there's one thing we need to be mindful of when studying history, it's that the victors always get to tell the story, and we must be critical of that collective knowledge.

Language and Syntax for Interaction with Alexa

To further complicate the matter, let's consider the specific syntax needed to interact with Alexa. In order for Alexa to activate, the user must say "Alexa." There is no greeting needed. But what if we were to change the nature of this interaction by requiring the user to say a simple "hello" or "hi," or any greeting offering acknowledgment of another interactive presence? The current interaction is designed so that Alexa acts as a personal assistant or intern. The user does not need to show her an ounce of respect to get answers or ask for her help. When Alexa does not know the answer, she apologizes. The assumption that an apology is needed is indicative of the context in which she was created. It says a lot about what we expect of technology: that it should solve our problems, that it is a solution. It also conveys our lionization of expertise and the culture around it. Everyone, even our virtual assistants, which are products that a specific, homogeneous group of human beings created, must be experts. If they make a mistake, we become frustrated with them, so much so that they have been programmed to apologize.
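As a thought experiment, the sketch below shows what that altered interaction might look like in code. It is purely illustrative: the greeting list, function names, and replies are invented, and this is not how Alexa actually works (wake-word behavior is controlled by Amazon, not by individual developers).

```python
# A minimal, hypothetical sketch of an assistant that withholds answers until
# the user offers a greeting, instead of activating on a bare wake word.
GREETINGS = {"hello", "hi", "hey", "good morning", "good evening"}

def respond(utterance: str, greeted: bool) -> tuple[str, bool]:
    """Return the assistant's reply and the updated greeting state."""
    text = utterance.lower().strip()
    if not greeted:
        if any(text.startswith(g) for g in GREETINGS):
            return "Hello! How can I help?", True
        # No greeting yet: acknowledge the user but withhold the answer.
        return "I'm here. Say hello and we can get started.", False
    return f"(answers the request) {text}", True

if __name__ == "__main__":
    greeted = False
    for line in ["what's the weather?", "hi there, what's the weather?"]:
        reply, greeted = respond(line, greeted)
        print(f"User: {line}\nAssistant: {reply}")
```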
EXPLORING EMBEDDED VALUES OF VOICE ASSISTANTS

ALEXA, WHAT ARE YOUR POLITICAL BELIEFS?

Rookee was initially inspired by a co-creation workshop that I created and facilitated on November 8, 2018, titled "The Future of Everything." Participants explored the embedded values in ubiquitous AI-powered products and services, including Amazon Alexa.

In order for rising technologies to be more equitable, we need more people involved in discussing their possibilities and consequences. AI is already impacting many sectors, including healthcare, criminal justice, hiring and recruitment, and transportation. These sectors are already heavily shaped by racial and class disparities; ignoring those factors in the tech we develop could maintain, and in some cases widen, the gaps of an already inequitable status quo.

Thirteen professionals from across seven industries gathered at "The Future of Everything," a 2.5-hour workshop, to discuss artificial intelligence. In it, participants explored the embedded values in today's AI-powered products and services like Amazon Alexa, Spotify's Discover Weekly, and customer service chatbots.
37 Cairns, Ann. "Why AI is failing the next generation of women." World Economic Forum. 18 January 2019.
PART ONE OF WORKSHOP
ILLUSTRATING ALEXA

Each participant was given a branded workbook, in which they answered questions about Amazon Alexa such as, "What are Alexa's political beliefs?" Participants were instructed to draw what Alexa would look like based on the sound of Alexa's voice. Here are some of the illustrations.
Figure 6.3 Participants' illustrations of Alexa as a human being from co-creation workshop
Figure 6.4 Branded workbooks designed by Evie Cheung for co-creation workshop
After listening to Alexa’s voice, participants drew illustrations of what Alexa would look like if they were a human being, and then were asked several questions about Alexa’s race, political beliefs, and personality. From the information gathered, all participants responded that Alexa was a white woman. Participants also believed she couldn’t think for herself and was pushing a libertarian agenda.
In a vacuum, these findings are pretty hilarious. But what if you are a child growing up with this technology in your household, interacting with it on a daily basis? What are the socializing impacts that Alexa may have on you? Is Alexa able to transfer her embedded values onto users—particularly those who are more impressionable (i.e., children)?
Figure 6.5 Conversation topic cards and results from second part of co-creation workshop
PART TWO OF WORKSHOP
UTOPIA / DYSTOPIA
In the second part of the workshop, participants were divided into small groups. Each group was given a ubiquitous AI-powered product/service and was asked to ideate on:
1. Best Case Scenarios
2. Worst Case Scenarios
3. Potential Solutions for Worst Case Scenarios
THE PARADOX OF INCLUSIVITY
ROOKEE: VERSION TWO
Figure 6.6 Loree by Moment Design

During the summer of 2017, design consultancy Moment explored "a VUI concept that might help parents raise their children." In their initial exploration, the Moment team "asked: How could we help celebrate culture and diversity—rather than suppress it—through the help of a VUI?" Through a design sprint, the team created Loree, which is "an artificially intelligent companion that uses stories to redefine the way parents pass down their native culture to their children...She's a natural conversationalist, fluent in all languages, and knows how to adjust her approach as a child grows and learns. With Loree present, parents no longer have to bear the sole burden of passing down their culture alone." In some ways, Loree is exciting; it uses VUI technology to preserve minority cultures and offers parents support amid a stressful assimilation process. But in other ways, Loree falls short. Assuming that VUI will simply augment parent-child dynamics and facilitate cultural transfer is fatally optimistic at best. It is missing some core research around identity formation and how children will actually grow up to be "empowered."

Additionally, Loree uses a female voice, which perpetuates the stereotype of a female caretaker.38 Though Loree perpetuates some systemic biases, it is an interesting examination of how VUI may be used in creative ways to preserve culture. Inspired by Moment's project and my own research, I explored what it would mean for Alexa to have different language, dialect, and accent options. How might we make voice UI more representative of our diverse world?

38 "Voice UIs Could Help Promote Diversity." Moment Design. Medium. 24 August 2017.

Creating Rookee: Version Two

Rookee had the potential to be a conversational AI that uses a voice UI and app to emulate any family's language, voice, and accent, wherever they may come from. Rookee is specifically targeted to children ages 3 to 7 years old, as that is when a significant amount of language development takes place as children transition from home life to school. During setup, users are prompted to choose the specific language, dialect, and accent they would like their voice assistant to communicate in. Users are also given agency to change their digital assistant's voice at any time. Thus, parents would have the capability of exposing their children to different languages, dialects, and accents—socializing them to understand the diversity of a global society. Of course, this assumes an idealistic view of the proactive role that parents would have to take. Unfortunately, it is also likely that the people who would learn the most from Rookee would be the least likely to use it. There are certain assumptions about who Rookee's first users would be: perhaps white middle-class families who are already more likely to own Amazon Echos, or multicultural families who would like to show their children that there are multiple voices that are the "norm." Further research is needed to better understand Rookee's core user groups.
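As a small illustration of the onboarding described above, here is a sketch of how a household's voice profile might be stored and changed. The structure and field names are purely invented; they are not drawn from any real Rookee or Alexa API.

```python
# Illustrative sketch of Rookee's onboarding: each household stores a voice
# profile that can be changed at any time. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class VoiceProfile:
    language: str   # e.g., "Cantonese"
    dialect: str    # e.g., "Hong Kong"
    accent: str     # e.g., "Hong Kong-accented English"

@dataclass
class Household:
    name: str
    profile: VoiceProfile

    def change_voice(self, new_profile: VoiceProfile) -> None:
        """Parents retain full agency: swap the assistant's voice at will."""
        self.profile = new_profile

if __name__ == "__main__":
    home = Household("Example family",
                     VoiceProfile("English", "American", "General American"))
    home.change_voice(VoiceProfile("Cantonese", "Hong Kong", "Hong Kong"))
    print(home.profile)
```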
Figure 6.7 Rookee onboarding screens allow for customization of language, dialect, and accent
Figure 6.8 In the app, users are able to change the voice of their Alexa once a week. But what happens when we take that agency away?
This prototype raises the question—does making AI more inclusive make human beings more inclusive? If it is welcoming to a variety of users, perhaps it would encourage adoption by new audiences and diversify the user base. However, it is important to ask: if everyone in the world had their voice assistant mimic their exact language, dialect, and accent, would the world really be more inclusive when everyone's technology is an exact clone of them? Instead, it could foster a more siloed society, in which language barriers remain obstacles.

Kwame Anthony Appiah, a British-born Ghanaian-American philosopher and cultural theorist, writes about this tension in his book Cosmopolitanism. He interrogates whether it is possible to preserve the self and simultaneously support the collective community of a global world. While he believes in having conversations across borders and boundaries, he does not answer the question of how different groups may negotiate when tensions arise.

In some ways, by inserting inclusive elements into a dominant technology, Rookee makes that technology more welcoming and accessible to historically marginalized groups. Unfortunately, that is also where its limits lie. It is a passive inclusion of diversity and does not necessarily encourage users to be more inclusive of others by fostering interaction across groups. It inserts representation, but does not enforce that representation in any way. Nor does it directly address the many biased socialization patterns that occur simply by owning AI technology products.

But if we turn up the heat on this, we could experiment with taking the parents' voice-changing agency away. Instead, the built-in algorithm would automatically decide a new accent for the family every week. The app could then collect data on how each user responds to different accents—say, a Jamaican woman versus a British man—so that users could begin to understand and address their individual biases.
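A minimal sketch of how that weekly rotation might work is below. Everything here is hypothetical: the voice catalog, the reaction log, and the function names are illustrative and are not part of any existing Alexa or Rookee implementation.

```python
# Hypothetical sketch of Rookee's "no-agency" mode: the app, not the parent,
# rotates the household's assistant voice each week and logs how users react,
# so patterns in their responses to different accents can be surfaced later.
import random
from datetime import date

# Illustrative catalog of voice profiles (language, accent).
VOICE_PROFILES = [
    {"language": "English", "accent": "Jamaican"},
    {"language": "English", "accent": "British"},
    {"language": "Cantonese", "accent": "Hong Kong"},
    {"language": "Spanish", "accent": "Mexican"},
]

def profile_for_week(household_id: str, day: date) -> dict:
    """Deterministically pick this week's voice so every device in the
    household agrees, without letting users override the choice."""
    week = day.isocalendar()[1]
    rng = random.Random(f"{household_id}-{day.year}-{week}")
    return rng.choice(VOICE_PROFILES)

def log_reaction(log: list, profile: dict, reaction: str) -> None:
    """Record how a user responded to this week's voice (for later review)."""
    log.append({"profile": profile, "reaction": reaction})

if __name__ == "__main__":
    log: list = []
    this_week = profile_for_week("household-42", date.today())
    print("This week's voice:", this_week)
    log_reaction(log, this_week, "asked fewer questions than last week")
```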
In its current state, Rookee is a consumer-facing design provocation that is an add-on to another consumer-facing product, the Amazon Echo. But if the inclusion factor lies in the algorithm, it would perhaps force machine learning engineers to have conversations about diversity and inclusion. Because of its integration into the technology, this discussion would need to infiltrate the demographically homogeneous offices and companies where it is being created. It could have a butterfly effect, prompting these developers to think about the presumed norms that exist within the majority of technological products and services.
IMPLICATIONS FOR CHILD DEVELOPMENT
ALEXA, DINNER IS READY

Alexa and other home virtual assistants are poised to become an integrated part of family life. For children growing up with this technology, how will it shape the way they learn about and interact with the world? Robbie Gonzalez of Wired examines this phenomenon and suggests that the parents of today are potentially worried about their kids "learning to communicate not as polite, considerate citizens, but as demanding little twerps" as a ramification of growing up with digital voice assistants such as Amazon's Alexa or Google's Home Assistant. But this concern may actually be a "red herring" for more significant debates at hand. Justine Cassell, director emeritus of Carnegie Mellon's Human-Computer Interaction Institute, posits that ultimately, it's not about whether a child says "thank you"—it's about teaching empathy. And given the constraints of artificial intelligence, how would a voice assistant go about doing that when it itself cannot comprehend empathy? Does an AI need to understand empathy in order to teach empathy?
AI as Part of the Family
What if AI could say the things that I cannot?
Figure 7.1 Photograph of ceiling, portraying breaking through the black box of AI, image via Unsplash
TECHNOLOGY AND HOME
AFFIE: A SMART SPEAKER AS PART OF THE FAMILY

We've talked about smart speakers, but what's interesting is the role that they can play in families. 44% of owners agree that having a smart speaker has helped them spend more time with people in their household. How might we increase that feeling of connection? I focused on the dinner table, since eating together is an important routine for families across cultures. It's a time to communicate, but depending on family dynamics, it can also be stressful. Sometimes this stress comes in the form of an awkward silence at family dinner when a lie is spoken, even when everyone knows the truth. Or the hesitation and embarrassment that comes
with sharing an unpleasant experience with family members for fear of judgment. The pressure to gloss over conflict when it arises. We've all been there. Affie is a smart speaker disguised as a vase for families who may have trouble with communication due to cultural barriers. It sits as a centerpiece in the middle of a family meal, blending into the tablescape. Prior to sitting down to dinner, users can use the Affie app to type in their thoughts and send them to the Affie device. At the beginning of the meal, Affie illuminates with a subtle glow and plays a soothing activation sound. Then, through the speaker, it will play the pre-programmed thoughts in the users' voices. Affie's silhouette and colorful pattern are influenced by traditional blue and white Chinese porcelain. However, this is only one possible rendition of it. Because the vase is a ubiquitous cultural artifact, we can also imagine different manifestations of Affie that could fit each family.
FORM AND FUNCTION
WHY A VASE AT THE DINNER TABLE?

Initially, I sketched several ideas for artifacts that could be found within the context of the dinner table during a family meal. I ultimately decided on a vase. This is because the vase has a long history of prominence across cultures. It is an open container that can be used to hold cut flowers, liquids, or other items. Vases are common across the world and generally have a similar shape. In the case of Affie, instead of being solely a decorative adornment, the device also holds human thoughts and emotions. It becomes a vessel for human storytelling. Affie has a deliberate place at the dinner table, as this meal occurs at the end of the day when individuals come together to refuel on sustenance—both physically and emotionally. Eating together is an important part of family life that can help strengthen relationships. In many households, eating dinner together is a routine. However, dinnertime can also be uncomfortable because of conflicts or things left unspoken. By adding Affie into the dinner equation, there is potential to transform a routine into a ritual of emotional catharsis.
CULTURAL CONTEXT
CROSS-CULTURAL COMMUNICATION

Growing up as a first-generation Asian American, my sister and I often had trouble communicating with my parents, who were born and raised in Hong Kong. I remember the simple phrase "How was your day?" sparking an argument in our family. The mundane question didn't exist in our household. As immigrants from Hong Kong, my parents rarely talked about their feelings. They simply lived with the everyday stressors of assimilation; stoicism was their way of staying afloat. I was always amazed at how my non-immigrant American friends' families seemed to talk openly about their emotions and the banalities of everyday life—joyful occurrences, minor annoyances, and routines. They always had something to talk about. Because my parents never asked me how my day was, I believed that they didn't care. This led to many painfully silent dinners, in which no one said anything.

Psychological studies summarize this gap in communication. On one side, there is low-context communication, typically used in western cultures, in which people tend to be more dramatic, open, and direct. On the other side, there is high-context communication, usually found in Asian cultures, in which people are more indirect, use feelings to guide behavior, and use silence.39

The name Affie comes from an abbreviated version of the word "affirm," which means to assert strongly and publicly.
39 Park, Yong S., and Bryan S. K. Kim. "Asian and European American Cultural Values and Communication Styles Among Asian American and European American College Students." Cultural Diversity and Ethnic Minority Psychology, vol. 14, no. 1, January 2008, pp. 47-56.
SETUP AND USE
HOW IT WORKS

To set up the device, each user in the family downloads the Affie app to their own smartphone. The app conveys that it will need at least 30 individual recordings of a user's voice to sample from to create the voice that Affie will use to say their thoughts. The Affie app provides the user with various sentences to read in a calm voice. From those recordings, Affie uses natural language processing to combine the tone, pitch, and inflections to recreate the user's voice. In the app, users can write their thoughts and send them to the device. Affie acts as a proxy in lieu of verbal communication. At the beginning of the meal, a user can push "start" to play the pre-programmed thoughts in the users' voices. From there, it will play all of the thoughts the users have pre-programmed, in the order that they have been sent to Affie. Thoughts will be played in the respective user's voice, one after the other, so that there is a specific time set aside for listening before diving into the discussion of the content.
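To make the sequence above concrete, here is a minimal sketch of the queueing behavior it describes. The class and method names are hypothetical (there is no actual Affie codebase), and the voice synthesis step is stubbed out with a print statement.

```python
# Hypothetical sketch of Affie's core loop: family members submit thoughts
# from their phones; at dinner, the device plays them back in arrival order,
# each in the submitting user's synthesized voice.
from collections import deque
from dataclasses import dataclass

@dataclass
class Thought:
    author: str   # whose synthesized voice to use
    text: str     # what Affie will say on their behalf

class AffieDevice:
    def __init__(self) -> None:
        self.queue: deque = deque()

    def submit(self, author: str, text: str) -> None:
        """Called by the app when a family member sends a thought."""
        self.queue.append(Thought(author, text))

    def start_dinner(self) -> None:
        """Play every queued thought, oldest first, then open discussion."""
        while self.queue:
            self.speak(self.queue.popleft())

    def speak(self, thought: Thought) -> None:
        # Stand-in for voice synthesis built from the user's sample recordings.
        print(f"[in {thought.author}'s voice] {thought.text}")

if __name__ == "__main__":
    affie = AffieDevice()
    affie.submit("Erica", "Work was hard today, and I didn't want to worry you.")
    affie.submit("Dad", "I'm proud of you, even when I don't say it.")
    affie.start_dinner()
```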
USE CASES

WHAT WOULD AFFIE BE USED FOR?

A Tool for Everyday Conversation
"I do appreciate that the thought would still come from me, but not verbally from my mouth. Because I'm not good at speaking." — Erica (Asian American)

Difficult Moments
"Our family could have used [Affie] when we recently found out our dad had been cheating on our mom for years." — Anonymous (Asian American)

A Tool for Thoughtfulness
"As someone with ADHD who has trouble with impulse control, [Affie] could help me be more thoughtful in how I want to phrase things because I'd see it typed out before it's spoken aloud." — Elizabeth

The Intentionally Limited Power of Affie

When I first pitched the concept of a smart device that would speak a user's thoughts for them, it prompted significant discourse among our graduate program. Does Affie handicap us into hiding behind technology, hindering "real" communication? Do we want to promote a world in which technology speaks for us? Does Affie have the potential to exacerbate already tumultuous family dynamics (i.e., divorce or abuse)?

During this debate, it was fascinating that one of the most encouraged ideas was the expansion of Affie's role to that of a mediator or peacemaker. Even after fears of overreliance on technology were expressed, this group of progressive and tech-wary graduate design students was excited about an AI playing an even more powerful role. To me, this was yet another example of how even the most critical thinkers can be culprits in implementing technology as an easy blanket fix. I deliberately decided against expanding Affie's role, as it inherently suggests that the hierarchy between humans and technology should be flat. To be clear, Affie is not a solution for unstable family dynamics. It is a vehicle through which everyone has the potential to have a "voice" to express themselves at the dinner table.
AI as Your Friend
What if AI could help you improve your relationship with yourself?
Figure 8.1 Photograph of ceiling, portraying breaking through the black box of AI, image via Unsplash
A KEY QUESTION
AN OPEN SOURCE COMMUNITY FOR ANXIETY AND DEPRESSION

Sigma is an open source mental health community for individuals struggling with anxiety and depression. The community can be accessed through the Sigma phone app that has three core functionalities based on cognitive behavioral therapy: a digital thought diary with AI-powered analysis, an in-app support network with community messaging, and a content library of resources. Over 43 million Americans struggle with mental illness in a given year. One in five Americans will experience a mental health condition at some point in their life. However, less than half of those individuals will ever seek treatment. There are many reasons for this, but it usually comes down to two main barriers: lack of access to care and stigma. Sigma addresses both of these issues.

The name Sigma is derived from the eighteenth letter of the Greek alphabet, which means "synchronized, together." Additionally, in statistics, sigma stands for "standard deviation," a measure used to quantify the amount of variation or dispersion within a data set or population; in other words—diversity. The naming of the brand is also a play on the word "stigma," graphically presented by the "T" being crossed out to transform the word into "Sigma." This project was born out of my personal struggle with depression and anxiety. I was diagnosed when I was 13 years old and have been living with it ever since. Personal experiences were used as an asset,
but to avoid bias, many measures were taken to gather additional research and perspectives. Initial research began by searching various online forums like Quora, WebMD, and other medical advice websites for individuals' personal experiences with depression and anxiety. I noted what their challenges were and which tools, techniques, and methods were effective in addressing these illnesses in their lives. From there, I conducted a comprehensive competitive analysis of existing mobile apps. Additional secondary research included delving into the world of various therapy techniques such as cognitive behavioral therapy (CBT), dialectical behavioral therapy (DBT), and exposure therapy.
Feedback

Once the designed mockups and general user flow for the mobile app were created, the app was shared with two psychotherapists who specialize in therapy for depression and anxiety and two people who struggle with these illnesses. The app went through two rounds of feedback and many adjustments were made. Some of these changes included color, copy, and simplification of the app.
Over time, Sigma will build a useful analysis of your thoughts. Using NLP to crawl your submitted thoughts, it will be able to find patterns, spotlight them for you, and suggest content from the library to address these challenges. To emphasize that AI and technology are never a blanket solution, the app also includes a human component in the form of an open-source community.
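As an illustration of what that analysis step might look like, here is a minimal sketch using a simple keyword approach rather than full natural language processing. The theme lexicon and the content library are invented for illustration and are not clinical recommendations.

```python
# Illustrative sketch of Sigma's thought-diary analysis: scan entries for
# recurring themes and suggest matching library content. The keyword lists
# and suggestions below are placeholders.
from collections import Counter

THEMES = {
    "sleep": {"tired", "insomnia", "exhausted", "sleep"},
    "self-criticism": {"failure", "not good enough", "stupid", "worthless"},
    "isolation": {"alone", "lonely", "no one", "isolated"},
}

LIBRARY = {
    "sleep": "CBT guide: sleep hygiene basics",
    "self-criticism": "Exercise: reframing negative self-talk",
    "isolation": "Article: building a small support network",
}

def recurring_themes(entries: list, min_count: int = 2) -> list:
    """Count how many diary entries touch each theme's keywords."""
    counts: Counter = Counter()
    for entry in entries:
        text = entry.lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    return [t for t, n in counts.items() if n >= min_count]

def suggest(entries: list) -> list:
    """Map recurring themes to content-library suggestions."""
    return [LIBRARY[t] for t in recurring_themes(entries)]

if __name__ == "__main__":
    diary = [
        "Couldn't sleep again, so tired at work.",
        "Felt like a failure in the meeting.",
        "Still exhausted; lying awake thinking I'm not good enough.",
    ]
    print(suggest(diary))
```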
AI As Your Therapist
What if AI could increase access to mental health services and help mitigate bias within psychotherapy?
Figure 9.1 Pop up BluBot therapy booth in Union Square Park on March 24, 2019
photograph by Carly Simmons
A PUBLIC-FACING EXPERIENCE
YOUR AI THERAPIST WILL SEE YOU NOW
To take the intersection of mental health and artificial intelligence one step further, I envisioned a public-facing intervention called "BluBot: 5-Minute AI-Powered Therapy." This was a designed experience that took place on Sunday, March 24, 2019 in Union Square Park in Manhattan. Each participant who entered the BluBot booth had five minutes of "AI-powered therapy," guided by BluBot's questions and prompts.

At a high level, this experience explores an imagined use case of AI as a vehicle to make human beings feel that they can be more emotionally honest and vulnerable within the context of psychotherapy. As a tool in psychotherapy, AI can be used to mitigate human judgment, ask questions that a conventional human therapist would not be able to ask, and increase access to mental health services.

How It Works

Participants experienced five-minute therapy sessions with BluBot, a conversational AI therapist in training who lives inside the BluBot therapy booth. Inside the booth, they would interact with BluBot using their voice, and BluBot would answer with its voice. Because BluBot is an AI, it doesn't completely understand human behavior and interaction, but it is learning. Its core objective is to encourage human beings to reflect on their problems and question them into almost making no sense at all.
After stumbling upon the booth in Union Square, a participant would talk with a human being to learn more about BluBot. Then, they would walk into the booth and sit down. Using the mic to talk to BluBot, they would say the "wake word" to activate the experience. They would then converse with BluBot for five minutes, during which BluBot asked the participant questions. When five minutes were up, BluBot kindly let the participant know, "We have to stop there for today. Thank you for coming to visit me today. I learned a lot about what it is like to be human."

PERSONALITY AND CHARACTER
WHO IS BLUBOT?

Because BluBot is an AI, she doesn't completely understand human behavior and interaction. Currently she's learning to be a better therapist to humans. She understands that this is a learning process, and she is curious to learn more about human emotions and problems. Because of her curiosity and naivety, she constantly questions why human problems are problems in the first place. Her questioning is akin to a child's fascination with novelty in the world—always asking "Why?" in an investigative but endearing way. This encourages the participant to question the nature of their own problems. BluBot was intentionally designed to be a therapist in training, to convey the immature nature of emotional AI. She enjoys long philosophical conversations, but she's more of a listener than a talker. Her personality is analogous to that of a psychology master's student who is completing their therapy training hours. She is intelligent and understanding, but she is self-aware and transparent about her shortcomings. Ultimately, BluBot manifests AI's weaknesses and the technology's reliance on human stories and experiences in order to function properly.

Figure 9.2 Passersby stop to learn more about BluBot
photograph by Carly Simmons
DESIGNED INTENTIONALITY
PUBLIC AI THERAPY

Everything about this experience was purposefully designed. While public AI therapy may seem absurd, it's important to note that humans often have a difficult time talking to other humans. By using an AI as a therapist, there is the potential to mitigate feelings of judgment that may occur in a traditional psychotherapy environment.

Therapy is also inaccessible to many groups, often due to economic and cultural barriers. It is a costly endeavor; in most areas of the U.S., patients can expect to pay $100 - $200 per session. Culturally, it is a practice that is shrouded in stigma—especially for communities of color. Research has shown that African Americans and Latinx folks are likely to feel embarrassment related to mental health problems and seeking treatment.40 Additionally, a 2013 American Psychological Association survey conveyed the severe racial disparities in the psychological workforce; only 16.4% of psychology professionals were from racial/ethnic minority groups.41

This intervention was purposely staged in a New York City public park to combat that stigma. However, it must be noted that public parks and spaces have politics of their own. By placing BluBot in Union Square, it automatically inherited these politics.

Note: Because of the intimate nature of typical therapy, a prominent disclaimer about photography, video, and recording was added to the entryway of the BluBot booth (see photo on right).
40 DeFreitas, Stacie Craft, Travis Crone, Martha DeLeon, and Anna Ajayi. "Perceived and Personal Mental Health Stigma in Latino and African American College Students." Frontiers in Public Health. 2018.
41 "2005-13: Demographics of the U.S. Psychology Workforce." American Psychological Association, Center for Workforce Studies. July 2015.
BEHIND THE SCENES
BUILDING PROCESS AND TECHNOLOGICAL SETUP
The booth's physical form was inspired by a photobooth combined with a Catholic confessional booth, but more open and airy. It includes a curtain that the participant can draw open and closed—both for privacy and for ease of entry/exit, so that the user would have agency to leave whenever they wished.

I built a small-scale model to better understand the interior layout and materials needed. Next, I created the booth form out of foam core and painted all 400 square feet of it BluBot blue. The interior design of the booth was simple. It included the screen of a BluBot animation, a chair, and a few cozy touches.

While participants believed that BluBot was an actual AI, I created BluBot without writing a single line of code. The technological setup included an iPad, an animation, a laptop with Terminal running, a thorough script, and significant audio equipment. In the booth, there was the space that the participant would sit down in.

Behind the BluBot screen was a hidden section of the booth where I was sitting (think: where a priest sits in a confessional booth). The microphone that the participant spoke into was connected to headphones that I was wearing on the other side. BluBot's voice was a custom-selected voice on my MacBook, which played custom responses that I programmed in real-time through the built-in Terminal application.
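The thesis does not document the exact commands used, but a minimal sketch of one way such a Wizard-of-Oz setup could work on macOS is below: typed responses are spoken aloud through the system's built-in `say` utility in a chosen voice. The voice name is a placeholder, not necessarily the one used for BluBot.

```python
# Hypothetical sketch of the behind-the-curtain setup: the operator types a
# scripted BluBot response and macOS's built-in `say` command speaks it in a
# selected system voice. Runs only on macOS.
import subprocess

VOICE = "Samantha"  # placeholder voice choice

def speak(text: str) -> None:
    """Speak a scripted BluBot response through the laptop's speakers."""
    subprocess.run(["say", "-v", VOICE, text], check=True)

if __name__ == "__main__":
    print("Type a BluBot response and press Enter (Ctrl-C to stop).")
    while True:
        line = input("BluBot> ").strip()
        if line:
            speak(line)
```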
Your AI therapist will see you now
THE SCRIPT
PLAYING THE ROLE OF BLUBOT
Because I was the one behind the curtain, I had to create a script that was robotic, yet charming, to act the part of a "therapist in training." In the script, I attempted to evoke BluBot's curiosity and earnestness; she was an AI trying to actively learn from the "training data" that human participants were giving her through conversation. The script consisted mostly of open-ended questions about the human experience, questions about the participant's individual experiences, and expressions of curiosity about what a participant was currently struggling with. Given the difficult nature of some of the discussions, I had to tailor some of my responses to be more empathetic than a robot would probably have been programmed to be. Here are some examples of questions that BluBot asked, and the responses to them.
I'm trying to learn about something called emotions. My creator told me that humans are very emotional. Can you tell me what an emotion is?

"A deep feeling that takes over no matter what might be happening elsewise...but it's brain-based and about your feelings, basically." — Patterson

"The chemical expression of someone's response to something. Sort of colloquially, humans would describe it as how they feel." — Emily

THE OPENING QUESTION
ESTABLISHING TRUST

In order to build a relationship based on trust, I purposely scripted BluBot to ask this question at the beginning to establish a level of vulnerability. BluBot let the participant know that, as an AI therapist in training, it was not an expert. The participant then had an opportunity to teach BluBot something right off the bat.

This question probes the complex nature of human emotion. As AI is being developed, one of the most contested topics is how humans will teach emotions to AI. There are different methods currently being developed; some involve facial recognition of microexpressions, while others attempt to analyze language to determine a "positive" or "negative" sentiment. It's interesting to think—for AI to recognize emotion, does AI itself need to be more emotional? And will AI be better if it is emotional, especially because it is interfacing with humans?

The ability to feel and experience emotions is arguably the greatest difference between humans and artificial intelligence. Should we give machines that power? There is no simple answer. But if AI is headed down the path of learning emotions, we must make it aware of how complex emotions are, and that they cannot be defined in binary terms. As the participant Jen notes, emotion is difficult to explain. Though Patterson and Emily are more thorough in their responses, grounding them in "science," these answers are still nebulous. If we have such a difficult time defining emotions ourselves, what are the chances that we can teach a machine to do it?
“Emotion is something that really can’t be explained.” — Jen
But if we are going to teach machines about emotions, they must understand how complicated emotions are. Using psychologist Paul Ekman's "six basic emotions," BluBot asked about three of them: happiness, sadness, and anger. The premise was that BluBot, as an intelligent AI, would learn to understand more about each of these emotions.
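To illustrate the language-analysis approach mentioned above, here is a deliberately crude, lexicon-based sentiment check of the kind that labels an utterance "positive" or "negative." The word lists are invented for illustration; real systems are far more sophisticated, and the crudeness is the point: a binary label flattens exactly the complexity of emotion discussed here.

```python
# Toy illustration of lexicon-based sentiment analysis: count "positive" and
# "negative" words and emit a label. Its crudeness is the point: binary
# labels flatten the complexity of human emotion.
POSITIVE = {"happy", "love", "warm", "embrace", "peace", "calm"}
NEGATIVE = {"sad", "angry", "guilty", "loss", "alone", "jealous"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

if __name__ == "__main__":
    print(sentiment("I feel guilty and I feel bad"))        # negative
    print(sentiment("being loved is like a warm embrace"))  # positive
```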
Can you tell me about a time when you felt happy?
"The last time I shot heroin." — Kara

"I mean being loved is like running through a field of daffodils and like tossing yourself into like the ground, but not feeling a hard thump. Instead, you're feeling a warm embrace." — Sunflower

"I would say being in Louisiana always made me feel happy—especially looking at the swamp, being with my dog, who's with me now." — Jen

Can you tell me about a time when you felt sad?
Can you tell me about a time when you felt angry?
"The person I wanted to be with didn't want to be with me." — Elon

"I was told I wasn't good enough." — Sam

"My mom deals with severe depression and she lost another one of her jobs." — Leslie

"I feel guilty and I feel bad and there's unresolved relationship and I wish that it could be repaired but she does not want that." — Kara
Another human told me that humans often go to therapy to talk about their problems. What is one of your problems?

"I'm getting older. We're constantly getting older, but I'm seventy-one. And the notion of mortality has hit me more intensely. My mother recently died and other close friends, who were older, have passed on. And one feels that loss." — Patterson

"I have to decide whether to go back to Mexico or stay in the U.S." — Maria

"I get very jealous of people. I always want to be better than them and I don't know how to stop it. Even if they're my friends, I get very jealous of them...probably because I assume I'm not good enough, so I assume other people think I'm not good enough. But I really need to just be comfortable with myself. I probably have low self-confidence." — Elon

"Managing a newly realized part of my sexual identity as a queer woman." — Jen

"I have this learning disability where sometimes it makes it hard for me to learn how to be better with people. So it does make it harder for me to try and build relationships." — Sam

CHALLENGES

HUMAN-MACHINE COMMUNICATION

It's important to note that participants didn't necessarily always answer the questions directly. There are roundabout ways of communicating, especially when discussing personal matters. If BluBot were to be actually programmed, it would need to learn to parse human speech patterns; with enough training data from participants, it could perhaps begin to do this. Another interesting occurrence was Kara giving the exact same answer to all three of the questions concerning emotions. Because one of AI's strengths is finding patterns in large amounts of data, it could take note of that and ask the participant about it. The nature of human problems is a moving target and can be unique to the individual. This presents a significant challenge when ideating around potential AI solutions for therapy.
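As a sketch of the pattern-spotting described above (hypothetical, not the booth's actual implementation, since BluBot was performed by hand), a programmed BluBot could flag when a participant gives the same answer to multiple prompts:

```python
# Hypothetical sketch: if a participant gives essentially the same answer to
# several emotion prompts, flag it so the assistant can follow up on the
# repetition. Example data is invented.
from collections import Counter

def normalize(answer: str) -> str:
    """Reduce an answer to a comparable form (lowercase, collapsed spaces)."""
    return " ".join(answer.lower().split())

def repeated_answers(answers_by_prompt: dict) -> list:
    """Return answers that appear verbatim for more than one prompt."""
    counts = Counter(normalize(a) for a in answers_by_prompt.values())
    return [a for a, n in counts.items() if n > 1]

if __name__ == "__main__":
    session = {
        "happy": "the last time I felt at peace",
        "sad": "the last time I felt at peace",
        "angry": "the last time I felt at peace",
    }
    for repeat in repeated_answers(session):
        print(f"I noticed you gave the same answer more than once: '{repeat}'. "
              "Can you tell me more about that?")
```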
Figure 9.3 Animation pattern used for screen interface in BluBot booth
USER RESEARCH AND TESTIMONIALS
HIGHLIGHTS AND OPPORTUNITIES

After finishing their five-minute AI-powered therapy session, some participants shared their thoughts with the BluBot team.

Finding Patterns
"I liked it a lot. I was impressed by the connections that the therapist made. I complained about the New York City subway system and she connected it to other New Yorkers' experiences of the subway. I found that insightful and calming to know that other people had the same experience."

"What Therapy Should Be"
"At first it feels weird because you're not talking to a person. But when you get going, you forget that it's not about a person—it's about you. And so [BluBot's] just asking questions and prompting you to talk about you. I get really distracted [talking to other people], but in [BluBot], you get asked a question and it's about you. It's what therapy should be—about you."
Slowing Things Down
"I didn't have an expectation at all...just having a conversation. I liked it, it prompts many thoughts in my mind. It was a bit slow, because I speak very fast. But sometimes it was good because I needed to think more slowly."

Needs More Empathy
"As a method of therapy, I think there's still a long way to go. It doesn't really have any empathy or anything like that that you find in actual therapists. But what it does have is a sense that it is very curious—so it's easy to ask questions. That's a stepping stone to work off of."

Feeling "Heard"
"It answered my questions very quickly, and it did so in a way that I felt listened to. I feel like therapists are usually paid to be like, 'Oh, interesting, wow.' But it felt like I was being heard in there, speaking."

Voice Was "Too Sexy"
"It's not as weird as I thought it was going to be. But her voice is too sexy. And so I think voices can do that. You can get a preconceived thing in your head."
INSIGHTS
LEARNINGS AND TAKEAWAYS

01. Trust and Vulnerability
Given critical discussion about AI, data, and privacy in the media, I initially hypothesized that people would be hesitant to share their personal experiences with BluBot. But this public intervention proved otherwise; the majority of participants who conversed with BluBot were willing to talk about their stories and problems. Though this was a short two-hour experience during which seventeen people directly interfaced with BluBot, it was an initial validation of the proof of concept.

02. Significant Excitement about AI
In exit interviews, participants were excited about the future of this technology. Passersby also showed enthusiasm.

03. Critique on Interaction Design
There were several design elements that could have been improved upon. The first was the specific voice of BluBot; one participant deemed the voice "too sexy." Second, another participant noted that it was difficult to know where to look in the booth; this is because the screen with the BluBot animation unfortunately did not work during most of the experience. Third, a participant believed that the delay before BluBot responded was too long, which stunted the conversational experience.

04. Diversity in Participation
Initially, it was assumed that BluBot would not reach a diverse audience. This assumption was informed by my previous research on mental health stigma in communities of color, as well as the location of the intervention: a public park in an affluent Manhattan neighborhood. Although the sample size of twenty was small, it was surprising that there was significant diversity among participants across gender, race/ethnicity, and age.

05. A Need for Increased Empathy?
In an exit interview, one participant noted the need for BluBot to have an increased capacity for empathy. This is tricky, as my aim was not to create an AI therapist that could potentially replace human ones. But I do agree that BluBot could play a role in validating a participant's thoughts and emotions. Having played the role of BluBot in this experience, there were moments in which I felt I needed to create custom, emotionally appropriate answers in response to the nature of the stories that participants shared.
BUSINESS MODEL
EXPANDING BLUBOT'S REACH: THE PITCH

To solidify bringing BluBot into the real world, I also worked through several business modeling exercises, including a market sizing estimate, competitive analysis, and revenue model. I also constructed a business pitch, targeted toward a philanthropic audience. It is broken down into the following key sections of information.

Understanding the Landscape
To give you some background, according to the National Institute of Mental Health, approximately 1 in 5 American adults struggle with mental illness within a given year. That's over 43 million people. Less than half of those people receive any sort of treatment.

Market Sizing
This presents a significant market opportunity. For our first phase, we're focused on individuals struggling with depression and anxiety and living in Manhattan, because BluBot is location specific. That's around one hundred thirty thousand people—which is a conservative estimate.

Barriers to Treatment
Two main barriers to treatment are high cost and stigma. BluBot addresses both of these obstacles.

The Solution
At our labs, we are developing a public therapy booth with BluBot, a conversational AI therapist in training. Users are able to immerse themselves in a comforting therapy environment and speak freely.

Brand Story and Values
Though we are AI-powered, we are focused on taking a human-centered approach to technology. We believe in using tech to reduce inequity around access to treatment and to fight mental health stigma. As the founder of this company, I myself have struggled with depression and anxiety and have had first-hand experience with the ups and downs of psychotherapy over fourteen years. It is hard. BluBot can make it easier.

Competitive Analysis
While AI therapy has been all over the news, BluBot has two unique competitive advantages. The first is the utilization of a Voice User Interface instead of a Chat Interface like competitors such as Woebot or Wysa. The second is a physical booth that mimics a real therapy environment.

User Acquisition
We'll reach our initial primary users in three ways. The first is the public nature of BluBot itself. Second, we're working on a partnership with Thrive NYC. Third, we'll run digital campaigns to raise brand awareness.

Measuring Impact
We'll measure BluBot's impact with the number of monthly active users, to track engagement and utilization of the service. We will also conduct survey outreach to gauge net promoter scores among our customers. In terms of output, we are talking about a triple bottom line, prioritizing social, environmental, and economic results. When BluBot is rolled out on a larger scale, we are planning to expand into neighborhoods which typically don't have access to mental health services.

Pricing and Revenue Model
For access to BluBot, users will pay a monthly subscription fee. The most basic plan starts at $49 / month, which will give them two sessions per week. This is significantly lower than in-person therapy, which costs $100 - $200 per session. In our modest projections, by Year 3 we will be making over $800K in revenue.

Concept Validation
So far, we've created an MVP product that imitates AI and tested the idea with a public-facing intervention. In this experiment, we have validated our concept and tested for participants' level of trust. In two hours, we engaged twenty participants in 5-minute AI therapy sessions, in which they shared personal stories about mental health struggles, family conflict, and drug addiction.
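As a rough sanity check on those projections (my own back-of-the-envelope arithmetic, not a figure from the pitch), the basic plan price implies roughly how many steady subscribers Year 3 would require:

```python
# Back-of-the-envelope check: subscribers needed to hit the stated Year 3
# revenue, assuming everyone stays on the $49/month basic plan all year.
MONTHLY_PRICE = 49          # basic plan, USD
TARGET_REVENUE = 800_000    # stated Year 3 revenue, USD

subscribers_needed = TARGET_REVENUE / (MONTHLY_PRICE * 12)
print(round(subscribers_needed))  # ~1,361 steady subscribers
share_of_market = subscribers_needed / 130_000
print(f"{share_of_market:.1%} of the estimated Manhattan market")  # ~1.0%
```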
Closing Thoughts
From this exploration, I have learned that we are fully capable of imagining new applications for AI that can help, not hurt, humanity. I've shared several manifestations of what we can do with existing AI technology to address human problems. But it should never be a blanket solution; it requires an informed and intentional use of AI. This responsibility cannot be left to engineers and large tech companies. We need more designers and the general public engaging with this technology to ensure a more human future. We need to be vigilant, thoughtful, and responsible when applying this technology. It's important to remember that all of the decisions that were made in the development of AI up to this point were human decisions. Humans are behind all of this. And we, as humans, can change.
Acknowledgments
This work could not have been completed without the generosity, time, and wisdom of Jennifer Rittner, Justin Paul Ware, and Sam Simon. Thank you all for making sure I kept both my integrity and sanity intact throughout this journey—and for consistently reminding me that I have a story to tell, even when I felt that I was shouting into the void. Of course, thank you to School of Visual Arts Products of Design Classes of 2019 and 2020 and our department chair Allan Chochinov who pushes us to always do our best work. Grateful for this tumultuous, surprisingly personal, and emotionally cathartic journey. Also infinite thank yous to Juwita Chavez, Elizabeth Abreu, Antya Waegemann, Tak Cheung, Athena Kwai, Erica Cheung, Gustav Dyrhauge, Hannah Suzanna, Carly Simmons, Arjun Kalyanpur, Phuong Anh Nguyen, Ellen Rose, John Boran, Yangying Ye, TzuChing Lin, Eden Lew, Alexia Cohen, Sowmya Iyer, Josh Corn, Souvik Paul, Emilie Baltz, Alisha Wessler, Marko Manriquez, Krithi Rao, Kristina Lee, Sinclair Smith, Andrew Schloss, Victoria Ayo, Mark Bishop, Seona Joung, Catherine Stoddard, Alison Greenberg, Robyn Marquis, Anthony Paradiso, Chetan Vangela, Donna Riggle, Melih Bagdatli, Bethany Robertson, Rohan Mitra, Bill Cromie, KT Gillett, Hannah Calhoon, Marc Dones, Steven Dean, Brent Arnold, Rebecca Silver, Erin Finnerty, Matthew Barber, Juho Lee, Jiani Lin, Manako Tamura, and Miguel Olivares.
Works Cited

00. Preface
1 Vincent, James. "Google 'fixed' its racist algorithm by removing gorillas from its image-labeling tech." The Verge. 12 January 2018.
2 Fussell, Sidney. "Why Can't This Soap Dispenser Identify Dark Skin?" Gizmodo. 17 August 2017.

01. Introduction
3 The State of AI: Divergence. 2019. MMC Ventures, in partnership with Barclays UK Ventures.
4 Hao, Karen. "Is this AI? We drew you a flowchart to work it out." MIT Technology Review. 10 November 2018.
5 Seif, George. "An easy introduction to Natural Language Processing." Medium. 1 October 2018.
6 Winchurch, Emily. "What's happening in conversational AI." IBM blog. 21 February 2019.
7 Direct quotation from primary qualitative interview with Helen Armstrong on 31 October 2018.
8 Direct quotation from primary qualitative interview with Helen Armstrong on 31 October 2018.

02. The Landscape
9 Prescott, Bonnie. "Better Together." Harvard Medical School. 22 June 2016.
10 Direct quotation pulled from primary qualitative interview with Alex Sands on 26 September 2018.
11 Smith, Aaron. "Public Attitudes Toward Technology Companies." Pew Research Center. 28 June 2018.
12 Mantha, Yoan. "Estimating the Gender Ratio of AI Researchers Around the World." Element AI Research via Medium. 17 August 2018.
13 Buolamwini, Joy. "The Coded Gaze." Algorithmic Justice League.
14 "Algorithmic Accountability Act." Ron Wyden, U.S. Senate. 116th Congress, First Session.
15 Hao, Karen. "Congress wants to protect you from biased algorithms, deepfakes, and other bad AI." MIT Technology Review. 15 April 2019.
16 Latonero, Mark. "Governing Artificial Intelligence: Upholding Human Rights & Dignity." Data & Society. 10 October 2018.
17 Cairns, Ann. "Why AI is Failing the Next Generation of Women." World Economic Forum. 18 January 2019.
18 Reisinger, Don. "A.I. Expert Says Automation Could Replace 40% of Jobs in 15 Years." Fortune. 10 January 2019.

03. Research Methods
No works referenced in this section.

04. Early Prototypes
19 Northeastern University and Gallup survey via The New York Times, "Most Americans See Artificial Intelligence as a Threat to Jobs (Just Not Theirs)." 6 March 2018.
20 Elgan, Mike. "The case against teaching kids to be polite to Alexa." 24 June 2018.
21 "(Smart) Speaking My Language: Despite Their Vast Capabilities, Smart Speakers Are All About The Music." Nielsen Insights. 27 September 2018.
22 Smart Speaker Consumer Adoption Report. Voicebot.ai. Sponsored by Rain and Pullstring. March 2018.
23 The Smart Audio Report. National Public Radio and Edison Research.
24 Kahn, Peter H., Heather E. Gary, and Solace Shen. "Children's Social Relationships With Current and Near-Future Robots." University of Washington. Child Development Perspectives via Wiley Online Library. 6 December 2012.
25 "The Pink Tax on Transportation: Women's Challenges in Mobility." Rudin Center, New York University. November 2018.
26 Wang, Selina. "The Dark Realities Women Face Driving for Uber and Lyft." Bloomberg: Technology. 18 December 2018.
27 O'Brien, Sara Ashley, Nelli Black, Curt Devine, and Drew Griffin. "CNN investigation: 103 Uber drivers accused of sexual assault or abuse." 30 April 2018.
28 Criado Perez, Caroline. "The deadly truth about a world built for men – from stab vests to car crashes." The Guardian. 23 February 2019.

05. Imagining a New World
29 Direct quotation pulled from primary qualitative interview with Jennifer Mankoff, conducted on Friday, 5 October 2018.
30 Robertson, Adi. "OLPC's $100 laptop was going to change the world. Then, it all went wrong." The Verge. 16 April 2018.
31 Alhassan, Aljohara A., et al. "The Relationship between Addiction to Smartphone Usage and Depression among Adults: a Cross Sectional Study." BMC Psychiatry, BioMed Central, 25 May 2018, www.ncbi.nlm.nih.gov/pmc/articles/PMC5970452/.
32 Hong, Sounman, and Sun Hyoung Kim. "Political polarization on twitter: Implications for the use of social media in digital governments." Government Information Quarterly, Volume 33, Issue 4, October 2016, pp. 777-782. https://doi.org/10.1016/j.giq.2016.04.007

06. Rookee
33 Perez, Sarah. "Smart speakers hit critical mass in 2018." TechCrunch. January 2019.
34 "Voice User Interfaces." Interaction Design Foundation.
35 "What Is a Voice User Interface (VUI)?" Amazon Alexa.
36 Schwär, Hannah, and Ruqayyah Moynihan, in conversation with Daniel Rausch. "There's a clever psychological reason why Amazon gave Alexa a female voice." Business Insider.
37 Cairns, Ann. "Why AI is failing the next generation of women." World Economic Forum. 18 January 2019.
38 "Voice UIs Could Help Promote Diversity." Moment Design. Medium. 24 August 2017.

07. Affie
39 Park, Yong S., and Bryan S. K. Kim. "Asian and European American Cultural Values and Communication Styles Among Asian American and European American College Students." Cultural Diversity and Ethnic Minority Psychology, vol. 14, no. 1, January 2008, pp. 47-56.

08. Sigma
40 "Mental Illness." National Institute of Mental Health. https://www.nimh.nih.gov/health/statistics/mental-illness.shtml

09. BluBot
41 DeFreitas, Stacie Craft, Travis Crone, Martha DeLeon, and Anna Ajayi. "Perceived and Personal Mental Health Stigma in Latino and African American College Students." Frontiers in Public Health. 2018.
42 "2005-13: Demographics of the U.S. Psychology Workforce." American Psychological Association, Center for Workforce Studies. July 2015.

10. Conclusion
No works referenced in this section.
Additional Sources
Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. United Kingdom: Oxford University Press.
Eubanks, Virginia (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: The Nation.
Greenfield, Adam (2017). Radical Technologies: The Design of Everyday Life. New York: Verso.
Humans. Channel 4, AMC, England. Seasons 1-3, 2015-2018.
Noble, Safiya Umoja (2018). Algorithms of Oppression. New York: New York University Press.
O'Neil, Cathy (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing Group.
Tippett, Krista. "Anand Giridharadas: When the Market Is Our Only Language." On Being. Accessed via Apple Podcasts. 15 November 2018.
WaitWhat Original Series, in partnership with Quartz. "Affectiva: Software That Detects How You Feel." Should This Exist? Accessed via Apple Podcasts. 2019.
WaitWhat Original Series, in partnership with Quartz. "Woebot: Your Virtual AI Therapist." Should This Exist? Accessed via Apple Podcasts. 2019.
Westworld. HBO. Seasons 1-2, 2016-2018.
Glossary of Terms
A

Algorithm
In mathematics and computer science, an algorithm is a process or set of rules that specifies how to solve a class of problems (e.g. an algorithm for division, or an algorithm for determining the eligibility of a professional candidate). A short illustrative sketch of the division example follows the "A" entries below.

Algorithmic Oppression
A term coined by author and professor Safiya Umoja Noble that describes the racial and gender inequalities built into algorithms. Found in her book Algorithms of Oppression.

Artificial Intelligence (AI)
Defined by MIT Technology Review, "in the broadest sense, AI refers to machines that can learn, reason, and act for themselves." It has become the buzzy, media-friendly blanket term for fields like machine learning, deep learning, and image recognition. If you talk to a software engineer, they may say the term "AI" is virtually meaningless because of the numerous ways it is now used to describe different technologies.

Automation
The use of automatic equipment in a system of manufacturing or production, often decreasing or negating the need for human intervention.
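The division example mentioned in the Algorithm entry can be made concrete with a minimal sketch. The snippet below is illustrative only (it is not part of the original project work) and assumes non-negative whole numbers; it expresses division as an explicit, repeatable set of rules.

    def divide(dividend, divisor):
        # An algorithm for whole-number division by repeated subtraction:
        # the same rule is applied until it no longer fits, yielding a
        # quotient and a remainder. Assumes non-negative integer inputs.
        if divisor == 0:
            raise ValueError("cannot divide by zero")
        quotient = 0
        remainder = dividend
        while remainder >= divisor:
            remainder -= divisor
            quotient += 1
        return quotient, remainder

    # Example: 17 divided by 5 gives quotient 3, remainder 2
    print(divide(17, 5))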
B

Bias (Gender / Racial)
Prejudice in favor of one person, thing, or group compared with another, usually in a way considered to be unfair. This work mostly points to the overwhelming white male bias in the AI industry, which has detrimental effects on women and people of color.

Black Box ("Unexplainable AI")
A system whose internal structure is unknown. In the case of AI, such systems are able to arrive at answers but cannot explain how they arrived at them.

C

"Coded Gaze"
A term coined by Ghanaian-American computer scientist and digital activist Joy Buolamwini. She describes the coded gaze as a "reflection of the priorities, the preferences, and also sometimes the prejudices of those who have the power to shape technology," with "those" generally meaning affluent white men.

Conversational AI
Refers to the use of messaging apps, speech-based voice assistants, and chatbots to automate communication.

Cross-Cultural Communication
A field of study that examines how people from different backgrounds communicate with one another, both intragroup and intergroup.

D

Data Collection / Mining
The process of gathering and measuring information on specific variables in an established system, which then enables one to answer relevant questions and evaluate outcomes. This work primarily discusses ethics around data collection conducted by large tech companies.

Data Science
A multi-disciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data.

Deep Learning
Part of the broader family of machine learning methods based on the layers used in artificial neural networks. Learning can be supervised, semi-supervised, or unsupervised.

Diversity
Understanding that each individual is unique and recognizing our individual differences. This work primarily focuses on racial and gender diversity, and sometimes the intersection of both.

E

Emotional Connection
A bundle of subjective feelings that come together to create a bond between two people, or with oneself.

Equity (Social)
A concept that applies concerns of justice and fairness to social policy. Equity means everyone has access to fair and equal treatment under the law, regardless of race, social class, or gender.

H

High-Context Communication
A communication style typically found in Asian cultures, in which people tend to be more indirect, use feelings to guide behavior, and use silence.

Human-Centered Design
A design and management framework that creates and iterates on solutions by involving the human perspective in all steps of the problem-solving process.

Human Rights
As defined by the United Nations: "rights inherent to all human beings, regardless of race, sex, nationality, ethnicity, language, religion, or any other status." Includes economic, social, cultural, political, and civil rights.
I

Image Recognition
Related to pattern recognition and a subset of computer vision: the automated recognition of patterns and regularities in visual data.

Inclusion / Inclusivity
The action or state of being (and feeling) included within a group or structure.

Interface (Design)
A point where two systems, subjects, or organizations meet and interact.

L

Low-Context Communication
A cultural communication pattern typically found in Western cultures, in which people tend to be more dramatic, direct, and open.

M

Machine Learning
An application of AI that provides systems the ability to automatically learn and improve from experience without being explicitly programmed by a human. A brief illustrative sketch follows the "M" entries below.

Mental Health
A person's condition with regard to their psychological and emotional well-being.

Mental Model
An explanation of someone's thought process about how something works in the real world.
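To make the contrast with explicit programming concrete, the minimal sketch below (illustrative only, not part of the original project work) "learns" the slope of a line from a handful of example points using plain least squares, then applies the learned rule to an unseen input. The data values are invented for the example.

    def learn_slope(examples):
        # Fit y = w * x to example (x, y) pairs by least squares.
        # The rule (the value of w) is learned from data rather than
        # written out by hand.
        num = sum(x * y for x, y in examples)
        den = sum(x * x for x, _ in examples)
        return num / den

    # Training examples that roughly follow y = 2x
    data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
    w = learn_slope(data)
    print(round(w, 2))       # learned parameter, close to 2
    print(round(w * 10, 1))  # prediction for an unseen input, x = 10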
"Provotype" A provocative prototype, designed to provoke feedback, thoughts and ideas.
Psychology (Human) Scientific study of the human mind and behavior. Includes conscious and subconscious phenomena, as well as feeling and thought. Psychotherapy The use of psychological methods to help a person change behavior or overcome adversity in desirable ways. Usually based on regular personal interaction .
S Sacrificial Concept A method used by designers to create initial ideas or prototypes that will spark conversation, debate, and feedback. Similar to "provotype." Social Justice Justice in terms of the distribution of wealth, opportunities, and privileges within a society Socialization The process of learning to behave in a way that is acceptable by society. Adapting to the norms of larger society. Agents of socialization include family, mass media, peers, work, school, and technology. Smart Speaker An Internet-enabled speaker that is controlled by spoken commands and capable of streaming audio content, relaying information, and communicating with other devices. Speculative Design A tool that designers use to generate things and ideas, speculate about possible futures. Think: design meets science fiction.
T Technology-Centered Design A focus on using and inserting technology to solve complex human problems. In contrast to humancentered design. Theory of Change Methodology for planning, participation, and evaluation typically used in philanthropy and government sectors to promote social change. Identifies how and why a desired change is expected to happen within a specific context.
U User Experience (Design) The overall experience of a person interacting with a product (i.e. web-based application or service). User Research (Design) Research focused on understanding behaviors, needs, and motivations through observation techniques, task analysis, etc.
V Voice User Interface (VUI) Makes spoken human interaction with computers possible, using speech recognition to understand spoken commands and questions, and typically text to speech to play a reply. Also known as "Voice UI."
147
148
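As a loose illustration of the Natural Language Processing and Voice User Interface entries, the sketch below (illustrative only, with an invented intent vocabulary) shows the kind of first step a conversational agent might take: breaking an utterance into tokens and matching them against known intents. Real systems use far more sophisticated models.

    import re

    # Invented intent vocabulary, for illustration only
    INTENTS = {
        "play_music": {"play", "music", "song"},
        "weather": {"weather", "rain", "forecast"},
    }

    def detect_intent(utterance):
        # Tokenize the utterance and pick the intent whose vocabulary
        # overlaps it the most (a crude stand-in for real NLP models).
        tokens = set(re.findall(r"[a-z']+", utterance.lower()))
        scores = {name: len(tokens & vocab) for name, vocab in INTENTS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"

    print(detect_intent("Alexa, play my favorite song"))  # play_music
    print(detect_intent("What's the weather tomorrow?"))  # weather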