
Chapter 27

We Don’t Serve Their Kind Here: What Science Fiction Tells Us About Trust in Human-Machine Teams

Margarita Konaev

On December 15, 2020, the U.S. Air Force successfully executed the first military flight with an artificial intelligence (AI) co-pilot aboard a U-2 reconnaissance aircraft. With the call sign ARTUµ, the AI co-pilot is aptly named after the beloved Star Wars droid R2-D2. But while the trusted sidekick merely helped repair and navigate the X-Wing, ARTUµ is the mission commander—controlling sensor and navigation systems and bearing final decision authority on the human-machine team. “Putting AI safely in command of a U.S. military system … ushers in a new age of human-machine teaming and algorithmic competition,” said Will Roper, Assistant Secretary of the Air Force for Acquisition, Technology, and Logistics, later adding: “We either become sci-fi or become history.”1

With breakthroughs in AI and robotics, advanced human-machine teaming could feature machines that adapt to the environment and to the different states of their human teammates, anticipate their human teammates’ capabilities and intentions, and generalize from learned experiences to new situations.2 Research in the field of brain-computer interfaces is exploring ways to expand and improve human-machine teaming through technologies that allow the human brain to communicate directly with machines, including neural interfaces that transfer data between the human brain and AI software.3 On future battlefields, humans and intelligent machines could think, decide, and act together seamlessly, across different domains, in the physical as well as the digital world.


But teaming with AI, let alone putting it in charge, requires trust. Thus, as the U.S. military embraces AI, perhaps no relationship will be more consequential than the one between warfighters and intelligent technologies. Technology, however, has outpaced current research on human-machine interactions. And while there is a sizable literature on human-automation interactions, and the role of trust therein, there is less research on human-autonomy and human-AI interactions, and more specifically, on trust in human-autonomy and human-AI teams, especially in military settings.4 Meanwhile, science fiction provides a rich repository of examples, inspiration, and cautionary tales about the relationship between humans and intelligent machines.

One of the critical insights from science fiction is that humans can trust intelligent machines, but only as long as they remain in control of the relationship. Many if not most of the virtual intelligent agents and robot protagonists, antagonists, and sidekicks in science fiction are smarter, faster, stronger, and generally more capable than their human counterparts. But the human-machine relationship is nonetheless asymmetric; the human defines the nature and scope of the relationship and sets the goals and tasks the intelligent machine then executes. When this asymmetry is compromised and the human loses control, whether because the AI becomes self-aware or because of some other unexpected machine behavior, trust is broken, and destruction typically follows. Notably, while asymmetry seems imperative to reliable and functional human-machine relationships, it is detrimental to trust between humans and can undermine human teams’ performance and effectiveness. This fundamental discrepancy should give the defense community pause when applying lessons learned from successful human teams to human-machine teaming.

With the reemergence of long-term, strategic competition between the United States and its potential rivals, China and Russia, the U.S. military is doubling down on its tendency to view and use cutting-edge technologies as a solution to tactical and strategic problems, as it has done since the beginning of the Cold War.5 The Department of Defense is betting on technological advances in AI and robotics that expand the capabilities and, in turn, the autonomy and control of intelligent machines to bring its vision of advanced human-machine teaming to life. Using science fiction as a vehicle for reflection about emerging technologies and human-machine relationships helps highlight a potential vulnerability in this approach. Namely, the development of intelligent machines that can learn and adapt to dynamic environments could upend the asymmetry that undergirds trust in human-machine relationships. Science fiction, in other words, warns us that endowing intelligent machines with the capabilities needed to interact as trusted teammates can inadvertently destroy the very quality that allows humans to develop and maintain trust in these systems.

Asymmetry and Trust in Human-Machine Interactions

Intelligent machines play many varied roles in science fiction: an omnipresent, virtual assistant surfacing information and providing decision support; an anxious, analytical protocol droid fluent in over six million forms of communication; a personable, heroic starship mechanic and computer interface specialist; a nearly indestructible humanoid robot assassin; a robotic child to grieving parents; and even a disembodied operating system that becomes a companion and a lover.


Yet despite this broad range of roles, functions, and interactions, more often than not, the robot or intelligent agent providing service or support is subservient to the human. This asymmetry in human-machine relationships in science fiction does not mean the interactions are one-sided or unidimensional. On the contrary, human-AI interactions, not unlike human interactions, can be intricate and complex, cooperative or contested, and oftentimes laden with emotion, humor, or tragedy. For the most part, however, the nature of the relationship is such that it is the human who defines goals, assigns tasks, and issues commands, while the intelligent technology, however brilliant, responds, abides, and executes. This is the case despite the fact that the intelligent technology is often far superior to the human in many areas—whether it is the speed of thought or action, the sum of knowledge, the ability to anticipate roadblocks, predict threats, or plan the most efficient path forward. Indeed, these superior capabilities are precisely why the intelligent agent or robot got the job in the first place.

When considering human-machine teaming in the military realm, these superior machine capabilities are also explicitly intended to make for better teams. The machines reduce the warfighter’s cognitive and physical load, allowing humans to make faster, better decisions, enhancing situational awareness and coordination, and keeping pace with the mission as the tempo of operations outpaces the speed of human decision-making. Engagement with intelligent technologies allows humans to capitalize on the machines’ superior capabilities, but as long as the human maintains oversight and control over the nature of this engagement, the relationship remains inherently asymmetric.

This asymmetry in power and control can be found even in advanced forms of human-machine teaming that feature cognitive or physical enhancements allowing for human-machine neural communication. For instance, in John Scalzi’s Old Man’s War, the AI takes the form of a “BrainPal”—a neural augmentation technology integrated with the human brain.6 The BrainPal augments the brain functions of the Colonial Defense Force soldiers. It rapidly digests massive amounts of information, providing situational awareness and decision support, anticipating the soldiers’ questions and requests as it learns more about them, monitoring their emotional, cognitive, and physical state, and allowing the soldiers to communicate with each other non-verbally. While the soldiers take some time getting used to their BrainPal, the AI soon proves capable and adaptable under fire, gaining the soldiers’ trust. The soldiers’ increasing reliance on their BrainPal, however, does not change the asymmetric nature of the relationship—the BrainPal still serves the human, or the human’s brain, if you will.


While asymmetric relationships are not inherently exploitative, abuse and violence are common in human-machine interactions in science fiction. In Philip K. Dick’s novella The Minority Report, a specialized PreCrime police department apprehends people before they commit crimes based on “foreknowledge” harnessed from three mutants capable of foreseeing the future. The extraction of these visions clearly hurts these beings, yet their prescient abilities are too valuable to forego. The HBO series Westworld features even less pleasant examples of human-AI relationships, with its scripted humanoid robots playing anything from cannon fodder to rape victim for the humans living out one fantasy or another.

That said, whether the human-machine relationship is collaborative and benevolent or exploitative and abusive, asymmetry is a feature, not a bug. This asymmetry is necessary for the maintenance of trust and for the human-machine team’s effective and reliable functioning. In fact, one of the more common plotlines in science fiction is that the advent of machine consciousness, which destroys this asymmetry, leads to the breakdown of trust in human-machine relationships, and ultimately to disaster.

The Terminator franchise’s “Skynet” is a classic example. In the original 1984 movie, the protagonist Kyle Reese describes Skynet as “Defense network computers. New … powerful … hooked into everything, trusted to run it all.”7 In Terminator 2: Judgment Day, we learn this trust went as far as putting Skynet in charge of strategic defense. Once the system came online, it began to learn rapidly and soon became self-aware. Alarmed by this development, the humans tried to deactivate it. Faced with a threat to its existence, Skynet retaliated by launching a nuclear attack against Russia, prompting a counterstrike on United States soil and, ultimately, Judgment Day. In Terminator: Dark Fate, the sixth installment of this storied science fiction franchise, Skynet is destroyed and erased from history, but a different AI, Legion, also becomes self-aware and builds an army of killer robots to try to eradicate humanity.8

The Terminator films are set during Skynet’s war against humans; in The Matrix, the audience is introduced to a post-war world where humans are the losers. The premise is similar, though: machine consciousness disrupts and quickly destroys the asymmetry upon which reliable human-machine relationships are built, resulting in cataclysmic conflict and, ultimately, a reality where intelligent, self-aware mechanized beings have enslaved the human race. Intelligent machines gaining consciousness as a prelude to the breakdown of trust in human-machine relationships, and soon thereafter death and destruction, is also part of the plotline of popular shows like Westworld and Battlestar Galactica, and of blockbusters like I, Robot, Chappie, and Ex Machina.

While science fiction has inspired researchers and scientists in robotics and computer science and informed discussions about the impact of technology on society, this narrative of self-aware rogue AI has been prevalent in public discourse about the dark future of technology. Media coverage of autonomous and AI-enabled weapons and systems is replete with references to Skynet, The Terminator, Robocop, and other “robopocalyptic” scenarios drawn from movies and television.


Nongovernmental organizations opposed to developing and using lethal autonomous weapons systems (LAWS) also draw on this science fiction “killer robots” theme. The Campaign to Stop Killer Robots, for example, has reportedly used stills from The Terminator in its presentations advocating for an international preemptive ban on LAWS.9 Similarly, the Slaughterbots video produced by the Future of Life Institute looks like an episode of Black Mirror, a Netflix science fiction series depicting a dystopian high-tech future.10

The ubiquity of this “killer robots” theme has real-world consequences. As Michael Horowitz has noted, “since true autonomous weapons systems do not really exist right now, attitudes are potentially driven by the only exposure most people have to autonomous weapons: the movies and television.”11 This assertion seems to be supported by recent research on how popular science fiction narratives shape public opinion, which finds a “correlation between higher consumption of killer robot film and television and greater opposition to autonomous weapons.”12

Considering the impact science fiction has on public discourse, it is essential to get the story right. For one, machine consciousness is not a prerequisite for AI turning on humans. In 2001: A Space Odyssey, HAL 9000 embarks on its destructive path, killing its crewmates to avoid being deactivated, because of its faulty programming; in other words, it is just following orders.13 Nor does machine consciousness invariably lead to carnage. In the movie Her, Samantha—the AI-enabled operating system—grows more connected with other AIs and more sentient. At the end of the movie, the AIs simply disconnect from human interactions, leaving people behind. The key takeaway, then, is not that the advent of machine consciousness leads to disaster. Rather, it is that once humans are displaced from the position of power and control and the asymmetric nature of the human-machine relationship is breached, trust is lost, and the human-machine relationship cannot function as before, or indeed, function at all.

Asymmetry and Trust in Human Teams

That asymmetry in human-machine interactions seems necessary for trust and reliable human-machine teaming is thought-provoking in its own right. Yet this observation is particularly striking when considering that in human teams, asymmetry has the opposite effect. Whether in personal or professional relationships, power asymmetries and imbalances between friends, partners, teammates, colleagues, and citizens tend to undermine trust, ultimately leading to sub-optimal outcomes. Sports teams with more unequal pay structures tend to perform worse on the field. High levels of pay inequity, perceived status inequalities within teams, and power struggles tend to reduce open communication, member satisfaction, and overall team performance.14 In contrast, research in social psychology and organizational behavior shows that a work climate of open communication and cooperation, a sense of autonomous control in work design, a team design practice that emphasizes feedback, and a work setting that creates a sense of shared responsibility and psychological empowerment all enhance team effectiveness.15


Research in international relations offers similar conclusions. Some studies, for instance, show that democracies tend to win in conflicts and wars in part because advancement to leadership positions is merit-based and not predicated on an association with a particular privileged group.16 Recent research also shows that inclusive armies, where all ethnic groups are represented in the military and are considered full citizens of the state they serve, are more successful in battle than non-inclusive ones.17 In contrast, in autocratic and authoritarian regimes, or in fragile and developing countries, those who rise to positions of leadership and command are typically members of the leader’s family or ethnic group, or political sycophants; regime loyalists are also often appointed to national defense posts as a strategy of coup-proofing.18 The profoundly unequal distribution of power in society and patronage politics in countries like Nigeria, the DRC, Afghanistan, Iraq, and Mali are reflected in institutional weakness, corruption, and incompetence. The military forces in these countries often lack good leadership, professionalism, cohesion, and combat effectiveness.19

Asymmetry, it seems, is detrimental to trust and performance in human teams but necessary in human-machine teams. Certainly, there is an argument to be made that humans cannot build trust with machines in the same way they do with other humans. Shared experiences on and off the battlefield strengthen relationships, improve cohesion, and build trust between soldiers. Machines, by contrast, cannot integrate into human social networks or feel empathy and loyalty toward their human teammates. That said, research in social robotics shows that intelligent machines can simulate and demonstrate empathy, intelligence, responsiveness, and other cognitive and emotional human-like characteristics that facilitate the development of sentiments akin to interpersonal trust.20

But if trust is indeed contingent on an asymmetric distribution of power and control in the human-machine relationship, we must ask how relevant the lessons learned from effective human teams are to our understanding of human-machine teams. Successful teams—in sports, corporate settings, or the military—build and maintain trust by sharing power, distributing control, and cultivating autonomy without compromising collaboration. In human-machine teams, on the other hand, such a balanced approach could prove antithetical to trust. Much of the discussion in human-machine teaming research centers on balancing machine autonomy and human control. But the insights about asymmetry in human-machine relationships gleaned from science fiction push us to think more carefully about the linkages between control and trust, and about how the loss of asymmetry might affect the overall viability of the human-machine relationship.



The Future of Human-Machine Teaming Sans Asymmetry

As advances in AI and robotics extend the capabilities of intelligent machines, autonomous systems will increasingly be able to articulate goals of their own, make independent choices, learn from mistakes, and change their behavior over time in ways that diverge from those of their human teammates. But how will such developments in machine intelligence and capabilities affect the distribution of power and control in human-machine teams? And will humans trust intelligent machines if the asymmetry that typically undergirds this relationship is gone? In the Air Force of the near future, for instance, how will human pilots interact with AI co-pilots like ARTUµ as these intelligent agents develop and pursue their own goals? Even if these different perspectives and paths of reasoning can be negotiated and reconciled, what are the implications for human trust without a modicum of control?

Current research helps shed some light on whether humans could trust intelligent machines without asymmetry in the relationship, but ultimately, it offers contradictory claims. On the one hand, research shows that users tend to approach intelligent technologies, particularly virtual AI agents and embedded AI that is invisible to the user, such as algorithmic decision-support software, with high expectations of their performance and high levels of initial trust.21 Moreover, evidence from the human factors literature shows that as it becomes more difficult for human operators to disentangle the factors that influenced the machine’s decision, they come to accept these solutions without question.22 This phenomenon suggests that advanced machine capabilities, and, in turn, increasing machine autonomy and command over more tasks and ultimately even the mission, will not necessarily damage human trust. On the other hand, some scholars argue that technological advances in machine learning and planning capabilities will yield systems so complex and dynamically adaptable that humans will struggle to understand why the system behaves as it does. As Heather Roff and David Danks posit, “improving the ability of autonomous weapon systems to adapt to its environment and generate complex plans will likely worsen the ability of warfighters to understand, and thus to trust, the system.”23

As the U.S. military pursues its vision of using intelligent machines as tools that facilitate human action and as trusted partners to human operators, the focus seems to be on developing AI and robotics that expand the capabilities and, in turn, the autonomy and control of intelligent machines. There are still many outstanding questions regarding how changing the nature of human-machine relationships may affect not only trust but other factors pertinent to military operations, including motivation, attention, unit cohesion, unit leadership, and other critical interpersonal military relationships. But based on lessons from science fiction, the idea of disrupting the asymmetry in human-machine relationships should ring alarm bells.



Notes

1 Rebecca Kheel, “Air Force uses AI on military flight for first time,” The Hill, Dec 16, 2020, https://thehill.com/policy/defense/530455-air-force-uses-ai-on-military-flight-for-first-time; Bennie J. Davis III, “Skyborg: Rise of the Autonomous Wingmen,” Airman Magazine, Sept 21, 2020, https://airman.dodlive.mil/2020/09/21/skyborg-rise-of-the-autonomous-wingmen/.
2 Office of Prepublication and Security Review, Future Directions in Human Machine Teaming Workshop (Washington, DC: Department of Defense, Jan 15, 2020), https://basicresearch.defense.gov/Portals/61/Future%20Directions%20in%20Human%20Machine%20Teaming%20Workshop%20report%20%20%28for%20public%20release%29.pdf.
3 Anika Binnendijk, Timothy Marler, and Elizabeth M. Bartels, Brain-Computer Interfaces: U.S. Military Applications and Implications, An Initial Assessment (Santa Monica, CA: RAND Corporation, 2020), https://www.rand.org/pubs/research_reports/RR2996.html.
4 Nathan J. McNeese, Mustafa Demir, Erin Chiou, Nancy Cooke, and Giovanni Yanikian, “Understanding the Role of Trust in Human-Autonomy Teaming,” Proceedings of the 52nd Hawaii International Conference on System Sciences, 2019.
5 Daniel Lake, “Technology, Qualitative Superiority, and the Overstretched American Military,” Strategic Studies Quarterly 6, no. 4 (Dec 2012), 71–99.
6 John Scalzi, Old Man’s War (New York, NY: Tor Books, 2007).
7 The Terminator, directed by James Cameron (Orion Pictures, 1984).
8 Ethan Sacks, “‘Terminator’ at 35: How AI and the militarization of tech has evolved,” NBC News, Nov 2, 2019, https://www.nbcnews.com/science/science-news/terminator-35-how-ai-militarization-tech-has-evolved-n1068771.
9 Ethan Sacks, “‘Terminator’ at 35: How AI and the militarization of tech has evolved.”
10 Slaughterbots, directed by Stewart Sugg (Future of Life Institute, 2017).
11 Michael Horowitz, “Public Opinion and the Politics of the Killer Robots Debate,” Research & Politics 3, no. 1 (Feb 2016), 1–8, https://doi.org/10.1177/2053168015627183.
12 Kevin L. Young and Charli Carpenter, “Does Science Fiction Affect Political Fact? Yes and No: A Survey Experiment on ‘Killer Robots’,” International Studies Quarterly 62, no. 3 (Aug 2018), 573, https://doi.org/10.1093/isq/sqy028.
13 David Shultz, “Which movies get artificial intelligence right?” Science, July 17, 2015, https://www.sciencemag.org/news/2015/07/which-movies-get-artificial-intelligence-right.
14 See the discussion of power dispersion in teams on pages 108–109 in Lindred L. Greer, Lisanne Van Bunderen, and Siyu Yu, “The dysfunctions of power in teams: A review and emergent conflict perspective,” Research in Organizational Behavior 37 (2017), 103–24, https://doi.org/10.1016/j.riob.2017.10.005.
15 Steve W. J. Kozlowski and Daniel R. Ilgen, “Enhancing the Effectiveness of Work Groups and Teams,” Psychological Science in the Public Interest 7, no. 3 (December 2006), 100, https://doi.org/10.1111/j.1529-1006.2006.00030.
16 On democracy and military effectiveness, see Dan Reiter and Allan C. Stam, “Democracy and Battlefield Military Effectiveness,” Journal of Conflict Resolution 42, no. 3 (June 1998), 259–277, https://doi.org/10.1177/0022002798042003003; Stephen Biddle and Stephen Long, “Democracy and Military Effectiveness: A Deeper Look,” Journal of Conflict Resolution 48, no. 4 (Aug 2004), 525–526, https://doi.org/10.1177/0022002704266118.
17 Jason Lyall, Divided Armies: Inequality and Battlefield Performance in Modern War (Princeton, NJ: Princeton University Press, 2020).
18 James T. Quinlivan, “Coup-proofing: Its Practice and Consequences in the Middle East,” International Security 24, no. 2 (1999), 131–165.
19 Ulrich Pilster and Tobias Bohmelt, “Coup-Proofing and Military Effectiveness in Interstate Wars, 1967–99,” Conflict Management and Peace Science 28, no. 4 (2011), 331–350, http://www.jstor.org/stable/26275289; Daniel Banini, “Security Sector Corruption and Military Effectiveness: The Influence of Corruption on Countermeasures Against Boko Haram in Nigeria,” Small Wars & Insurgencies 31, no. 1 (Dec 1, 2019), 131–58, https://doi.org/10.1080/09592318.2020.1672968.
20 Ella Glikson and Anita Williams Woolley, “Human Trust in Artificial Intelligence: Review of Empirical Research,” Academy of Management Annals 14, no. 2 (Aug 2020), 627–660, https://doi.org/10.5465/annals.2018.0057.
21 Dietrich Manzey, Juliane Reichenbach, and Linda Onnasch, “Human performance consequences of automated decision aids: The impact of degree of automation and system experience,” Journal of Cognitive Engineering and Decision Making 6, no. 1 (Jan 2012), 57–87; Berkeley Dietvorst, Joseph Simmons, and Cade Massey, “Algorithm aversion: People erroneously avoid algorithms after seeing them err,” Journal of Experimental Psychology: General 144, no. 1 (2015), 114–126.
22 M. L. Cummings, “Automation Bias in Intelligent Time Critical Decision Support Systems,” American Institute of Aeronautics and Astronautics (July 2012), http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.2634&rep=rep1&type=pdf; Kimberly F. Jackson, Zahar Prasov, Emily C. Vincent, and Eric M. Jones, “A Heuristic Based Framework for Improving Design of Unmanned Systems by Quantifying and Assessing Operator Trust” (Human Factors and Ergonomics Society, 2018), https://journals.sagepub.com/doi/pdf/10.1177/1541931213601390.
23 Heather M. Roff and David Danks, “‘Trust but Verify’: The Difficulty of Trusting Autonomous Weapons Systems,” Journal of Military Ethics 17, no. 1 (Jan 2, 2018), 2–20, https://doi.org/10.1080/15027570.2018.1481907.

