SISR Artificial Intelligence - Winter 2019


Winter 2019 | University of Chicago
A Production of The Triple Helix

The Science in Society Review


THE TRIPLE HELIX A global forum for science in society

Work with tomorrow's leaders
Our international operations unite talented undergraduates with a drive for excellence at over 25 top universities around the world.

Imagine your readership
Bring fresh perspectives and your own analysis to our academic journal, The Science in Society Review, which publishes International Features across all of our chapters.

Reach our global audience
Our print journals and online blog showcase the latest in scientific breakthroughs and policy developments through editorials, brief reports, and original research.

Catalyze change and shape the future
Our publications and events engage students, faculty, public leaders, and the community in discussion and debate about the most pressing and complex issues that our world is facing today.

All of the students involved in The Triple Helix understand that the fast pace of scientific innovation only further underscores the importance of examining the ethical, economic, social, and legal implications of new ideas and technologies—only then can we completely understand how they will change our everyday lives, and perhaps even the norms of our society. Come join us!

TRIPLE HELIX CHAPTERS

North America Chapters: Arizona State University; Brown University; Carnegie Mellon University; Cornell University; Georgia Institute of Technology; George Washington University; Georgetown University; The Harker School; Harvard University; Johns Hopkins University; The Ohio State University; University of California, Berkeley; University of California, Davis; University of California, San Diego; University of Chicago; Yale University

Europe Chapters: Cambridge University; Aristotle University

Asia Chapter: National University of Singapore

Australia Chapter: University of Melbourne


TABLE OF CONTENTS

ARTIFICIAL INTELLIGENCE 101 ........................... Maggie Bader ........... 6
INQUIRY: BILL HUTCHISON ............................... Margot Carlson ........ 10
A.I. IN REPORTING ..................................... Alexa Perlmutter ...... 14
INQUIRY: DR. JAMES EVANS .............................. Neha Lingareddy ....... 17
SORRY, WEBMD .......................................... Ellie L. Frank ........ 20
INQUIRY: DR. MARC DOWNIE .............................. Charlotte Soehner ..... 23
THE QUANTUM MYSTIQUE .................................. Rory Frydman .......... 26
INQUIRY: RAYID GHANI .................................. Yueran Qi ............. 31
THE FUTURE OF WARFARE: POLICY ......................... Charlotte Pierce Scott  34
THE FUTURE OF WARFARE: THREAT ......................... Joshua O'Neil ......... 36
INQUIRY: DR. NICOLETTE BRUNER ......................... Annabella Archacki .... 39
ARTIFICIAL MORALITY ................................... Benjamin Lyo .......... 43


STAFF AT UCHICAGO

President: Nila Ray
Vice President: Edward Zhou
Editors-in-Chief: Elizabeth Crowdus, Rachel Gleyzer
Managing Editors: Sydney Jenkins, Abby Weymouth, Sharon Zeng
Associate Editors: Praveen Balaji, Katherine Boggs, Ruichen Christine Cao, Jordan Cooper, Anya Dunaif, Karen Ji, Caroline Kim, Jui Malwankar, Linus Park, Thalmilini Pathmarajah, Nivedina Sarma, Phoebe Seltzer, Jessica Xia
Production Staff: Ariel Goldszmidt, Ariel Pan

Message from Chapter Leadership

Dear Reader,

It is with great excitement that we bring to you the 2019 Winter Issue of The Science in Society Review. A new year has introduced new directions to consider as some of the most pressing scientific issues and newest innovations are on the rise in society. Here at The Triple Helix, we understand the need to investigate these questions in an integrative manner. In this vein, our writers, aided by a strong support system of undergraduate editors and the executive board team, strive to incorporate the perspectives of multiple fields in their articles. The Triple Helix at UChicago continues to proudly uphold our mission of exploring the interdisciplinary nature of the sciences and how they shape our world through the work we present to you. We are honored to encourage our future leaders in their rigorous exploration of the key challenges in society today.

It is our hope that the articles presented herein will stimulate and challenge you to join our dialogue. And so, I leave you with this: How do you see science in society?

Nila Ray
President, The Triple Helix UChicago
uchicago.president@thetriplehelix.org


© 2019, The Triple Helix, Inc. All rights reserved.


Message from the Editors

Dear Reader,

Artificial intelligence, or A.I., a human-made technology with the potential to mimic and exceed human cerebral capacity, provides a rich landscape on which to recontextualize age-old philosophical questions about life. French philosopher and mathematician René Descartes famously proclaimed, "I think, therefore I am." If an A.I. can think, does that mean it exists in the same way that we humans do? This complication necessitates philosophical action, whether it means redefining life or maintaining the current definition and accepting A.I.s into the fold of living beings. From the morality of sentient robots to quantum physics and free will, inherent in this volume is an investigation of what A.I. can teach humans about our place in the world.

Sincerely,

Elizabeth Crowdus and Rachel Gleyzer
Editors-in-Chief, The Science in Society Review
uchicago.print@thetriplehelix.org



ARTIFICIAL INTELLIGENCE 101
The Past, Present, and Future of A.I.

Maggie Bader

Artificial intelligence, or A.I., is a phrase that many in industrial society have encountered, yet few can define. For some, the pursuit of artificial intelligence is nothing more than "summoning the demon."1 A.I. stirs optimism in others, with one journalist claiming, "we will be to robots what dogs are to humans, and I'm rooting for the robots."1 Like any scientific field, A.I. carries a complicated history, encompasses a broad range of forms and functions, and stirs up ethical dilemmas as far-reaching as its potential. A.I. is responsible for recommending future purchases online, understanding speech through virtual assistants like Apple's Siri, recognizing the contents of a photograph, and detecting credit card fraud. Given the increasingly central role of A.I. in a wide range of fields, students preparing to enter areas of work beyond computer science may nonetheless encounter A.I. in their future careers. As a result, even those who have never written a single line of code could benefit from learning more about A.I.

What is A.I.?

A.I. denotes computer systems capable of performing tasks that typically fall within the domain of human intelligence, such as visual perception, speech recognition, and decision-making. Research in this field is vast and varied, but it can generally be reduced to pinpointing a task with which A.I. could aid humans, devising an A.I. system to perform it, and then working to implement that system where it is needed. The two main conceptual branches of A.I. are narrow and general. Narrow A.I.—systems designed or trained to carry out one specific task—comprises the A.I. of today. General A.I. does not yet exist, though one can see the concept portrayed in movies like The Terminator or 2001: A Space Odyssey. This type of A.I. would imitate the adaptable intelligence of humans, capable of learning a wide array of tasks. A.I. experts are divided over the question of whether general A.I. will become reality and, if so, when. A 2013 survey conducted among four groups of experts reported a 50% chance of creating a general A.I. by 2050, and a 95% chance by 2075.2 This survey further predicted that "superintelligence," or an intelligence well beyond human capacity, could arrive thirty years after general A.I. Nonetheless, other A.I. experts criticize these predictions as far too hopeful, especially given the limited understanding of the human brain at present. Pessimists predict that general A.I. remains centuries away.

One crucial component of A.I.—evolutionary computation—involves using an A.I. to develop another A.I. Researchers use evolutionary methods to test many candidate solutions, keeping, mutating, and recombining the best performers in order to determine an effective way to solve a problem, and they incorporate this method into the code of the A.I. they are developing.
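To make that evolutionary idea concrete, the sketch below evolves a population of bit strings toward a deliberately trivial, made-up goal (having as many 1s as possible). The encoding, fitness function, and parameters are invented for illustration; real evolutionary computation applies the same select-mutate-recombine loop to far richer candidates, such as neural-network designs.

```python
import random

# A minimal evolutionary-computation sketch: evolve bit strings toward a
# simple, invented goal (maximize the number of 1s). Real systems evolve far
# richer structures than bit strings.

GENOME_LENGTH = 20
POPULATION_SIZE = 30
GENERATIONS = 40
MUTATION_RATE = 0.05

def fitness(genome):
    """Toy objective: count the 1s. A real fitness function would score how
    well a candidate A.I. performs its task."""
    return sum(genome)

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(parent_a, parent_b):
    """Splice two parents together at a random point."""
    point = random.randrange(1, GENOME_LENGTH)
    return parent_a[:point] + parent_b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POPULATION_SIZE // 2]
    # Refill the population with mutated offspring of random parent pairs.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best candidate:", best, "fitness:", fitness(best))
```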



A second area of A.I. is machine learning, which underlies the biggest breakthroughs in A.I. research at present. Machine learning involves feeding a large amount of data into a computer system, which the system then uses to learn how to perform a specific task. This process relies upon neural networks: interconnected layers of algorithms that feed data into one another. Adjusting how heavily the data flowing between layers is weighted trains the system to carry out specific tasks. One subset of machine learning is deep learning, which involves more expansive neural networks and greater amounts of data. Deep learning underlies the present leap in progress on teaching computers speech recognition.

Credit: Stack Commerce

A third and final area of A.I. is that of expert systems. These systems first emerged in the 1950s, gaining entry into industrial fields in the 1960s and 1970s. Expert systems simulate the judgment and behavior of a human who has expert-level knowledge and experience in a specific field. In these systems, someone enters information to create the "knowledge base" and then programs an "inference engine" of rules for applying this knowledge base to a particular situation. Expert systems are increasingly important in aiding people with decision-making. They can provide permanent storage of knowledge from potentially unlimited expert sources and consistently compare options given the proper input. The applications of expert systems include analysis of chemical structures, medical diagnosis, and natural language processing. Current expert systems sometimes include machine learning components, which allow them to improve their performance with experience.
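As a rough illustration of that knowledge-base-plus-inference-engine structure, the toy program below encodes a few invented if-then rules about a car that will not start and chains through them to reach a "diagnosis." It is a sketch of the structure only, not of any real expert system.

```python
# A minimal sketch of an expert system: a "knowledge base" of if-then rules and
# a tiny "inference engine" that applies them to known facts. The rules and
# facts are invented for illustration; real expert systems encode far larger,
# expert-authored rule sets.

KNOWLEDGE_BASE = [
    # (conditions that must all be true, conclusion to add)
    ({"engine_cranks", "engine_does_not_start"}, "suspect_fuel_or_spark"),
    ({"suspect_fuel_or_spark", "fuel_gauge_empty"}, "diagnosis_out_of_fuel"),
    ({"suspect_fuel_or_spark", "spark_plugs_worn"}, "diagnosis_replace_plugs"),
]

def infer(facts):
    """Forward-chaining inference: keep firing rules until nothing new is learned."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in KNOWLEDGE_BASE:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

observed = {"engine_cranks", "engine_does_not_start", "fuel_gauge_empty"}
derived = infer(observed)
print([fact for fact in derived if fact.startswith("diagnosis_")])
# -> ['diagnosis_out_of_fuel']
```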



The History of A.I.

Computer scientist John McCarthy first coined the term "artificial intelligence" in 1956. However, the history of A.I. begins several years before this event. In 1945, engineer Vannevar Bush published an essay called "As We May Think" proposing a system that aids the development of people's knowledge.3 Five years later, English mathematician Alan Turing published a paper called "Computing Machinery and Intelligence."4 The paper opened by simply asking, "Can machines think?" Turing then proposed a way to answer such a question. He argued that if, to a human observer, a computer's output is indistinguishable from that of an intelligent human, then this computer can think. This evaluation is now known as the "Turing test." To this day, the types of goals researchers pursue, and the types of programs they create, are often formed with this test in mind. The Turing test not only provided a launching point for the field of A.I., but also a focal point around which the field has been able to develop. Researchers still debate whether it is humanly possible to produce a machine that will pass the test, and the question of when—or if—such a goal will be reached remains open-ended.

The field of A.I. experienced rapid growth during the first few decades after its inception. This period saw the early forms of search algorithms and machine learning algorithms. People began integrating statistical analysis into their understanding of the world. However, most of the breakthroughs during the first fifty years were not noticeable in public life. Rather than building sentient robots like those portrayed in science fiction films, A.I. was employed in lower-profile processes like analyzing economic patterns and purchase histories. These functions led to a growing distrust of A.I., as these systems were revolutionary problem-solvers that took over several areas of human activity. Furthermore, no A.I. was able to pass the Turing test even after decades of work, causing frustration.8 In the early decades of A.I., its unglamorous functions, inability to pass the Turing test, and the rise of the A.I. effect led to a mounting sense of disappointment and disregard for A.I. Beginning in 1974, the field entered a steep decline in research known as the first "A.I. Winter," which lasted until 1980. The second major winter occurred from 1987 to 1993. To continue receiving funds during A.I. winters, some groups simply rebranded their research as "machine learning," "informatics," or "pattern recognition."5

The Future of A.I.

At present, the field of A.I. is making leaps forward in research at a more rapid rate than ever before. Nonetheless, there remains one obstacle that has yet to be overcome: after sixty years, no A.I. system has passed the Turing test. However, a limited form of the test—which involves temporarily convincing a non-suspicious human that a computer is a human—has been passable for some time.5 To encourage further research toward this goal, the Loebner Prize and the Turing Test Competition offer a $100,000 reward to any system capable of passing the full Turing test.6

Many people have proposed alternate versions of the Turing test. Most such alternatives narrow the scope of the test or shift the scope to a different area of study. One such variation was proposed by Nicholas Negroponte, co-founder of the MIT Media Lab. Unlike Turing, Negroponte defined a “thinking” computer as one that can collaborate with a human rather than simply pass an interrogation.7 This test would explore whether or not the machine could help the human reach his/her goals in the same way another human could.5 The variations on the Turing test illustrate a common area of debate in A.I.—“artificial” is easily definable, but “intelligence” is not.


A.I. has continued to surge forward in the current era. In 2009, Google showed that a self-driving Toyota Prius could complete more than 10 journeys of 100 miles each, an achievement that could lead to the mass production of driverless cars in the future. In 2011, the computer system IBM Watson used natural language processing and analytics to win Jeopardy. In 2016, Google produced an A.I. that defeated a grandmaster in the Chinese strategy game of Go. A.I. is also experiencing rapid growth in its applications to speech and language recognition, facial recognition, and healthcare.



In Oxford University's Future of Humanity Institute's survey of hundreds of A.I. experts, predictions for the future of A.I. included A.I. writing human-like essays by 2026, driving trucks by 2027, surpassing human abilities in retail by 2031, writing a best-seller by 2049, and doing a surgeon's work by 2053.2

The varied forms and functions of A.I., in addition to recent achievements in the field, inevitably give rise to concerns about the future of work. For instance, societies must grapple with the possibility of A.I. systems replacing some areas of modern human labor and altering the structure of the modern workforce. Some argue that A.I. will strengthen the workforce rather than replace it.8 Others point out that A.I. can eliminate routine and repetitive tasks from human jobs, making them more efficient instead.

Another major concern with A.I. is that it will likely soon be able to create realistic photographs and replicate a specific human voice. This creates the potential for increasing distrust of the news and media, nonconsensual placement of someone's image into videos, such as splicing celebrities' faces into pornography, as well as distrust of visual or auditory evidence in legal cases. Furthermore, progress in facial recognition sparks questions about surveillance and privacy. Former F.B.I. director James Comey recommended that U.S. citizens cover their laptop cameras for this reason.9 Discussions about regulations on facial recognition software have arisen in response to this concern, and the state of Washington passed a law in 2017 banning companies from collecting biometric data on state residents without informing them and detailing how the data would be used.9

Beyond concerns over the current capabilities of A.I., society must continue to consider the benefits and the concerns of a future in which general A.I. has been developed. Despite a plethora of fictional experiments and conversations, a consensus has yet to be reached. What is clear, however, is that the future of A.I. is as exciting as it is unknown. A.I. carries striking potential that can generate strong optimism and amazement, but it still produces concern and fear that is equally intense. Ultimately, in the words of artificial intelligence theorist Eliezer Yudkowsky, "the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."1 While a general understanding of A.I. is important to have in the modern age, one must realize that this brief overview only skims the surface of an active and rapidly growing field. ■

Maggie Bader is a third-year double-majoring in Math and Comparative Human Development. Her academic interests lie in abstract math, adolescent development and psychology, secondary education, and math education pedagogy. If you can't tell, she loves math and STEM education. When not in class, Maggie teaches 8th grade math, is a 3rd grade classroom aide, tutors high school calculus, and sings with the a cappella group the Ransom Notes. Maggie enjoys running along the lake, watching Disney Channel movies, and exploring Chicago!

References

1. Marr, B. "28 Best Quotes About Artificial Intelligence." Forbes (2017). https://www.forbes.com/sites/bernardmarr/2017/07/25/28-best-quotes-about-artificial-intelligence/
2. Heath, N. "What is AI? Everything you need to know about Artificial Intelligence." ZDNet (2018).
3. Bush, Vannevar. "As We May Think." The Atlantic (1945). https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
4. Turing, Alan. "Computing Machinery and Intelligence." Mind (1950). https://www.csee.umbc.edu/courses/471/papers/turing.pdf
5. Brand, S. The Media Lab (1988).
6. Loebner Prize. http://www.loebner.net/Prizef/loebner-prize.html
7. Smith, C. "The History of Artificial Intelligence." University of Washington (2006).
8. Denecken, S. "How Artificial Intelligence Will Help Elevate the Human Workforce." Forbes (2018).
9. Acohido, B.V. "New Boom in Facial Recognition Tech Prompts Privacy Alarms." Threatpost (2018).



INQUIRY: BILL HUTCHISON
Department of English Language and Literature

Margot Carlson

Hutchison with his robot collection. Credit: Bill Hutchison

"The first and greatest cruelty we've enacted against artificial intelligence was to call it 'artificial' at all. There's marginalization built into the very name." So mused Bill Hutchison to me one breezy morning in October as we chatted over coffee in the Smart Cafe. Hutchison, an earnest and passionate PhD candidate in the University of Chicago's English department, walked me through the questions guiding his dissertation on human-machine relations, which he hopes to complete this academic year. Fundamental to his research is the question, "What gets to count as what kind of life?"

One can trace Hutchison's interest in non-human life back to the eight years he spent working at an animal shelter in his thirties. He studied animal welfare while completing his undergraduate degree at the University of New Mexico and working at the Humane Society. His investment in animal ethics developed in college, where he began considering the relationship between animals and industry.



As a student in the University of Chicago's Master of Arts Program in the Humanities, Hutchison studied animals in Victorian science, and early in his PhD studies, he looked at the connection between surveillance pigeons used in the Second World War and today's military drones, which he argues are highly-militarized, automated birds designed to serve the same purpose the pigeon once did. In investigating the link between the biological and the technological, Hutchison turned his attention to the existence and ethics of non-normative life forms.

As Hutchison pointed out to me in the Smart Cafe, we humans have a history of marginalizing certain entities because we don't consider them to be alive enough, or because we don't see a good enough reason to recognize their personhood. Only later do we realize the detriment of such a perspective. Smiling into his coffee, Hutchison offered the example of soil microbes: while for centuries we believed that the dirt was inanimate, we now know that microorganisms in the soil are crucial to our understanding of Earth. In all likelihood, Hutchison explained, we could be missing the vitality present in plenty of other objects. In Hutchison's opinion, it is better to recognize the possibility of life in the non-living than it is to do the opposite: to ignore or miss the presence of life in something important. In short, Hutchison's work is motivated by the concern that we are not counting everything as "alive" that deserves to be considered as such.

As a result, Hutchison feels a strong affection for the work of Alan Turing. Turing argued that, if a computer in conversation with a human can convince the human that it is not a machine, then it has achieved artificial intelligence. The Turing test proposes that human perception is an adequate standard for discerning the intelligence and livingness of non-biological beings. In other words, if we believe that a computer or an android is alive in the same way that a human is, then that computer is alive. Our perception is enough to make it so. While the Turing test is outdated by the standards of contemporary A.I. research, its principles still ring true for Hutchison. There is something profound in human beings' ability to see and empathize with machines in both fiction and in real life. The compassion we feel for an anthropomorphized military robot (like Boston Dynamics' dog-like robot Spot) and the warmth we feel toward the fictional A.I. androids peppered throughout our sci-fi narratives are not insignificant emotions.

Boston Dynamics' Spot. Credit: Boston Dynamics



Just as perfected "artificial intelligence" might as well be called "intelligence," perhaps the simulation of life can be just as valuable as the real thing. Using this hypothesis as his starting point, Hutchison's dissertation interrogates the dichotomous way that we experience mechanical beings. Either we view them as practically-people, or as cold, lifeless, and threatening. This polarity is evident in science fiction cinema: while audiences feel immense warmth for Forbidden Planet's Robby the Robot, Star Wars' C-3PO, and Star Trek's Data, they fear the apathetic calculation of the T-800 of The Terminator and HAL 9000 of 2001: A Space Odyssey.

Yet, even when we describe machines as monstrous, we do so in human ways. For example, Hutchison has been researching how the rhetoric surrounding laboring machines in factories and industrial settings resembles the language often used to dehumanize and marginalize immigrants: both machine and human are criticized for "stealing jobs" and "invading" the workforce. This rhetoric imbues industrial machines with something akin to autonomy, while rendering that autonomy malicious and imposing. Laboring machines have interests, but their interests do not align with ours. In positioning laboring machines alongside marginalized humans in the workforce, this "othering" rhetoric—which dehumanizes machines and humans alike—implies that machines are human-esque to begin with.

In the course of his research into science fiction, Hutchison has encountered a new kind of android figure, which he calls the "corporate robot." While most fictional A.I. machines are "thoughtful tools"—robots with souls who are produced by corporations, such as the droids in WALL-E—there has emerged alongside them a robot that is likewise built by corporations but is driven by specifically corporate interests. The "corporate robot" is the entity that embodies the corporation itself, and so, it has no concern for humanity's well-being. Hutchison sees examples of these figures first emerging in the 1979 science fiction film Alien through the character of Ash, and more recently in the franchise's prequels in the character of David. Although the corporate robot looks like a human, it does not think like one; rather, it has the kind of self-preservation and self-reproduction drive that a corporation has. The corporate robot's goal is the maximization and exploitation of available resources, and it views biological life as a kind of raw resource to be put to maximum use.

WALL-E. Credit: Pixabay (image), Disney (character)

The figure of the "corporate robot" indicates a relatively new kind of human anxiety that has emerged amidst late-stage capitalism. Alongside our ingrained fear of the "Other," we now fear the growing power and autonomy of large corporations, whose recent classification as "legal persons" has disturbing and alienating implications. The monstrous entity of the corporation is a real and imposing threat in a way that the human "Other" never has been.

In recent months, Hutchison has begun to reconsider his original hypothesis—that we can connect with technology the same way that we connect with each other.



Hutchison has started to notice that his in-person conversations with his friends and colleagues are far more stimulating than similar conversations that take place in virtual space. Recently, while visiting New Mexico, Hutchison found himself surrounded by the landscape in which he grew up, away from visible signs of technology, and he found that this experience did indeed feel unique and important in a way that our technologically-dense world does not. Sitting with me over coffee, Hutchison furrowed his brow and explained to me his new speculation: while technology broadens our ability to connect with one another, the types of connections that we share over technology lack depth. The virtual intimacy that we can achieve with and through technology is remarkable, but at the end of the day we cannot escape the "fact of our own embodiment," as Hutchison put it. This is not to say that we must decide which mode of interpersonal communication is "better"; rather, Hutchison thinks that finding a way to embrace all the various modes of connection we have is necessary to unify our fractured attention and connection. Ultimately, we are embodied beings, and therefore, when we interact with each other directly, in our own bodies, we are experiencing a crucial expression of human connection—the kind that imbues us with the most energy and spirit, without abandoning the useful reality of virtual connection.

This revelation—that embodied interactions seem to produce something precious and different, something unattainable through technologically-mediated and -oriented interactions—has upset the foundations of Hutchison's dissertation. While he remains compelled by the life that we find inherent in the nonbiological, Hutchison cannot ignore the unique warmth of embodied human connection. Hutchison is not yet certain how he will unite these two apparently conflicting ideas. However, as our conversation in the Smart Cafe came to a close, he admitted that he hopes to turn the project into a book for popular audiences once he completes his dissertation. We can look forward to finding his thoughts in bookstores in a few years, when I'm sure the intimate interactions humans share with and through machinery will have only increased. ■

Margot Carlson is a fourth-year at the University of Chicago double-majoring in Cinema & Media Studies and Gender & Sexuality Studies and minoring in Human Rights. She is big into the sci-fi and horror genres and wants to write about them a lot. She thinks robots are going to take over the world and she's looking forward to that (although she expects she'll regret her optimism later). Readers can find Margot biking around campus listening to podcasts, singing or studying with her a cappella group The Ransom Notes, and burning her tongue on coffee in one of many cafes on campus.



A.I. IN REPORTING
Journalism in the 21st Century

Alexa Perlmutter

In 2013, Harvard University's Nieman Journalism Lab released a prediction that journalism in 2014 would be "scooped by a reporter who knows how to program."1 This quote is from Scott Klein, senior editor of news applications at ProPublica, who went on to argue for the increasing importance of programming and data analysis in investigative reporting. The new Nieman prediction for 2018 is that journalism will be "Scooped By A.I.," a declaration made by John Keefe, a developer in the Quartz Bot Studio who specializes in the role of computer software in journalism. Keefe writes, "I'm not talking about computer-generated stories about earthquakes, earnings reports, or sports scores. These will be stories on your beat, written by humans who understand how to use machine learning to aid their reporting."2

Credit: I4J

As these successive predictions imply, journalism is changing as technology changes, and the use of machines and artificial intelligence in journalism is only increasing. While current A.I. tools have made the journalism of the 21st century distinct from the journalism of the past, these technologies cannot displace the traditional news media altogether. Yet they are undoubtedly irreplaceable tools that have dramatically improved the world of journalism, not only by enhancing efficiency and accuracy, but also by giving reporters new opportunities to speak truth to power.

Automated journalism, or algorithmic journalism, wherein articles are generated through computer programs, is one basic way that news outlets are employing new technology. The New York Times, BBC News Labs, Reuters, The Washington Post, Yahoo! Sports, the Associated Press, and The Guardian are all currently using automated journalism in their newsrooms. Many of these outlets use Natural Language Processing (NLP) tools, which "read" at mechanical speeds and can compile summaries of texts adhering to a specific formula, tone, or even political stance. These tools, combined with marketing technologies, can tailor news directly to a targeted audience.3 The Associated Press is currently using an NLP tool called Wordsmith, by Automated Insights, which converts big data into comprehensible articles. Specifically, this technology is being used to make sense of quarterly financial reports released by public companies, the analyses of which are huge and time-consuming undertakings for any human. Before adopting Wordsmith to auto-summarize reports, the AP was producing only 300 financial summaries per quarter. Now, it is able to publish up to 4,400. It took approximately two months of inputting data and tweaking results to perfect the system, but now Wordsmith can take data from spreadsheets and produce formulaic sentences, such as, "Amazon.com Inc. on Thursday reported first-quarter net income of $513 million, after reporting a loss in the same period a year earlier."4 Indeed, it is not immediately obvious that this sentence was written by a computer.
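Wordsmith itself is proprietary, but the basic move, turning structured fields of data into a formulaic sentence, can be sketched in a few lines. The company name, figures, and field names below are invented for the example.

```python
# A crude sketch of template-based story generation: structured figures in,
# formulaic sentence out. This is not Wordsmith's actual implementation, just
# the general idea; the company and numbers below are invented.

def earnings_sentence(company, day, net_income, prior_net_income):
    """Turn one row of earnings data into a formulaic news sentence."""
    direction = ("after reporting a loss in the same period a year earlier"
                 if prior_net_income < 0
                 else "up from a year earlier" if net_income > prior_net_income
                 else "down from a year earlier")
    return (f"{company} on {day} reported first-quarter net income of "
            f"${net_income / 1e6:.0f} million, {direction}.")

row = {"company": "Example Retail Inc.", "day": "Thursday",
       "net_income": 513_000_000, "prior_net_income": -57_000_000}
print(earnings_sentence(**row))
```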



Drawing on current language processing technologies like Wordsmith, The New York Times' website hosts an online quiz titled "Did a Human or a Computer Write This?," meant to test our ability to distinguish human-crafted sentences from computer-generated ones.5 Sentences in the quiz are not just excerpts from news articles, but also from more creative genres, including poetry. The quiz is challenging, and it suggests that much of what we read could be generated through NLP without our ever giving it a second thought.

Back at the AP, Wordsmith has freed up the equivalent of three employees who no longer have to sort through data manually.4 Technologies like Wordsmith are currently being used to cover financial reports, sporting events like the 2016 Olympics, and election results. Of course, this technology gives companies like the AP the ability to downsize their staff substantially, and while, perhaps, staff reduction lies in the future of many large news outlets, the AP has maintained that its displaced reporters are now freed to work on more substantial projects.

Data journalism. Credit: jwyg (Flickr)

In addition to automated journalism, the AP has also begun using NewsWhip, a media analysis tool that offers "complete coverage of the most engaging publishers, writers, and influencers on any topic across web and social platforms."6 These twenty-first-century marketing tools are used widely by journalism outlets such as The Washington Post, Condé Nast, and TIME, in addition to the AP. This technology tracks news stories and audience engagement and gives updates to reporters.6 This form of A.I. engages with media to offer reporters new data not only about world news, but also about its reception, so that they can make informed decisions about the best ways to deliver that news to the public.

Keefe's hypothesis for 2018 goes beyond simple automated journalism and marketing. His theory suggests a limitless future of A.I. in journalism that can give the public new windows into the world. He cites, for example, a machine learning classifier that is used to predict whether a tweet sent out from Donald Trump's Twitter account was actually written by Trump himself. The classifier works by using Trump's past tweets to create a buzzword database, then analyzes new tweets to determine whether they were written by him or by White House staff. According to staff at The Atlantic, the accuracy of the algorithm is high, though already out-of-date, as President Trump's tweets have become more and more distinct from those of Presidential Candidate Trump.7 Even so, this computer-generated data has given the public a new and more accurate way to evaluate our current president.
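The Atlantic has not published its classifier here, but the "buzzword database" idea can be sketched as a simple word-frequency comparison: learn which words each writer favors from labeled examples, then score a new tweet against both profiles. The example tweets below are invented, and a real system would train on thousands of labeled tweets with far more careful features.

```python
import math
from collections import Counter

# A toy version of the buzzword-database idea: score a new tweet by how much
# its words resemble one author's past tweets versus another's. This is a
# simplified Naive-Bayes-style sketch, not The Atlantic's classifier, and the
# example tweets are invented.

author_tweets = ["crooked media very unfair sad", "total witch hunt believe me"]
staff_tweets = ["read the full statement at the link below",
                "today we announced a new policy initiative"]

def word_counts(tweets):
    return Counter(word for tweet in tweets for word in tweet.split())

def log_score(tweet, counts, total):
    # Add-one smoothing so unseen words do not zero out the score.
    vocab = 1000  # assumed vocabulary size for smoothing
    return sum(math.log((counts[word] + 1) / (total + vocab)) for word in tweet.split())

author_counts, staff_counts = word_counts(author_tweets), word_counts(staff_tweets)
author_total, staff_total = sum(author_counts.values()), sum(staff_counts.values())

new_tweet = "very unfair witch hunt"
is_author = (log_score(new_tweet, author_counts, author_total)
             > log_score(new_tweet, staff_counts, staff_total))
print("looks like the author himself" if is_author else "looks like staff")
```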



Similarly, reporters in Georgia built computer programs to go through hospital records, collecting more than 100,000 disciplinary documents related to potential sexual misconduct by doctors. Machine learning code, similar to the tweet classifier, then used keywords to produce a prediction as to whether documents were related to real incidents of physical or sexual misconduct. Investigative reporters then went on to read all of the relevant documents before publishing their findings. By using A.I., these reporters narrowed their search from 100,000 potentially related documents to 6,000 relevant ones. With this information, reporters exposed that doctors were continuing to practice after having been involved in sexual misconduct with patients.8 Countless similar examples exist of journalists making use of new data technologies for investigation and discovery.

But, according to "Artificial Intelligence: Practice and Implications for Journalism," a 2017 report published by the Columbia School of Journalism, the Brown Institute for Media Innovation, and the Tow Center for Digital Journalism, no data can be completely neutral and free of bias. Even choosing a dataset for a computer to analyze injects subjectivity into the study. In that way, giving A.I. any data could compound ethical problems that human journalists might be more likely to see if they were to do the analysis by hand. However, as of now, that problem is for the most part curtailed because journalistic A.I. tools are classified as "supervised learning," meaning that humans, not the algorithms, are directing how situations should unfold; thus, there is no technologically-created intent.9

The "Artificial Intelligence" report also notes that A.I. tools like Wordsmith "can help journalists tell new kinds of stories that were previously too resource-impractical or technically out of reach," but warns that the "knowledge gap and communication gap between technologists designing A.I. and journalists using it" could be problematic.10 In the case of Wordsmith, AP reporters worked together with Automated Insights for the full two-month onboarding period to monitor the new technology.11 This collaboration is necessary for all new technologies in the journalism sphere and is essential to ensuring that the A.I. meant to improve the quality of journalism does not end up diminishing it.

At least for now, A.I. in the newsroom is heavily supervised and used primarily to aid reporters with data collection and delivery. Its most important role lies in its ability to make big data palatable for reporters and accessible to readers. For daily news outlets as well as investigative ones, NLP and coding technologies ensure that reporters can continue to fulfill the public's insatiable desire for real-time updates without sacrificing accuracy, allowing reporters to responsibly undertake even the toughest of journalistic projects. As A.I. improves in the coming years, discussions must continue about the best and most ethical ways to continue using these technologies. ■

Alexa Perlmutter is a second-year student at the University of Chicago. She is an English major, a writer for The Chicago Maroon, and active in UChicago's Institute of Politics.

References

1. Klein, S. "Scooped by code." Predictions for Journalism 2014: A Nieman Lab Series (2013). http://www.niemanlab.org/2013/12/scooped-by-code/
2. Keefe, J. "Scooped by A.I." Predictions for Journalism 2018: A Nieman Lab Series (2017). http://www.niemanlab.org/2017/12/scooped-by-A.I./
3. Hansen, Mark, et al. Artificial Intelligence: Practice and Implications for Journalism. Tow Center for Digital Journalism, Columbia University, New York (2017), p. 12. https://doi.org/10.7916/D8X92PRD
4. Faggella, D. "News Organization Leverages A.I. to Generate Automated Narratives from Big Data." Automated Insights and Associated Press (2018). https://www.techemergence.com/case-studies/Automated-Insights/news-organization-leverages-A.I.-generate-automated-narratives-big-data/
5. "Did a Human or a Computer Write This?" The New York Times (2015). https://www.nytimes.com/interactive/2015/03/08/opinion/sunday/algorithm-human-quiz.html
6. NewsWhip. https://www.newswhip.com
7. McGill, A. "A Bot That Can Tell When It's Really Donald Trump Tweeting." The Atlantic (2017). https://www.theatlantic.com/politics/archive/2017/03/abot-that-detects-when-donald-trump-is-tweeting/521127/
8. "How the Doctors and Sex Abuse Project Came About." The Atlanta Journal-Constitution (2017). http://doctors.ajc.com/about_this_investigation/
9. Hansen, Mark, et al. Artificial Intelligence: Practice and Implications for Journalism (2017), p. 17.
10. Hansen, Mark, et al. Artificial Intelligence: Practice and Implications for Journalism (2017), p. 2.
11. Faggella, D. "News Organization Leverages A.I. to Generate Automated Narratives from Big Data" (2018).
12. Hansen, Mark, et al. Artificial Intelligence: Practice and Implications for Journalism (2017), p. 12.



INQUIRY: DR. JAMES EVANS
Department of Sociology

Neha Lingareddy

Credit: Stanford University

The social sciences have long been concerned with understanding human behavior and relationships. In the wake of developments in artificial intelligence, social science researchers have begun to use machine learning techniques to analyze data at large scale. Dr. James Evans of the Sociology Department is one such researcher, leveraging machine learning techniques to understand and represent the complexities of knowledge. Dr. Evans talked to me over Skype from China, where he had been presenting his research at the e-commerce company Alibaba. He directs The Knowledge Lab at the University of Chicago, a group which studies "the dynamics that shape human understanding, investigation and certainty."1 The formation of knowledge that the group investigates is a complex system built on others' collective certainty about things, and it belongs not just to humans, but also to nature and animals. Dr. Evans' research broadly covers areas of knowing: attention, intuition, innovation, certainty, and human understanding. Recently, he has been focusing on the effect of social and technical institutions on these systems of knowing.



While Dr. Evans was in graduate school studying sociology, he became interested in applying machine learning techniques to his research. The technique he emphasized most as he talked about his research is an unsupervised one: drawing insights from unlabeled data using autoencoders. Autoencoders are a type of artificial neural network, a biologically-inspired model consisting of input, hidden, and output layers.2 Such a network allows a computer to "learn" by simplifying its input; autoencoders, in particular, work by condensing their input data through the correlations they discover as they learn. To explain autoencoders, Dr. Evans gave the example of predicting how people perceive the gender of a noun. If we think of a matrix with nouns as the rows and the perceived gender features of those nouns as the columns, then an autoencoder can condense large amounts of data about the nouns into a compact representation based on common features found by the computer, and use that representation to make predictions about the perceived gender of the nouns. Dr. Evans explained that autoencoders and other machine learning techniques are "tools which allow researchers to uncover more while knowing less": researchers now need to know less about the specifics of the data they are analyzing, but are still able to uncover relationships and insights.

Autoencoder. Credit: Curiosily
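A minimal version of that condensing step can be sketched on made-up data. The example below stands in for the noun-by-feature matrix Dr. Evans described; rather than training a network step by step, it uses the fact that a purely linear autoencoder ends up compressing data onto its dominant singular directions, which can be computed directly with an SVD. It is a sketch of the idea, not the Knowledge Lab's actual code, and it assumes the numpy library.

```python
import numpy as np

# A sketch of the autoencoder idea on an invented "noun x perceived-feature"
# matrix. A purely linear autoencoder compresses data onto its dominant
# singular directions, so we compute that solution directly with an SVD
# instead of iterative training.

rng = np.random.default_rng(0)
n_nouns, n_features, n_hidden = 100, 20, 3

# Fake data built to have hidden low-rank structure (correlations to discover).
X = rng.normal(size=(n_nouns, n_hidden)) @ rng.normal(size=(n_hidden, n_features))

_, _, Vt = np.linalg.svd(X, full_matrices=False)
encoder = Vt[:n_hidden].T        # 20 features -> 3 condensed codes
decoder = Vt[:n_hidden]          # 3 codes -> 20 reconstructed features

codes = X @ encoder              # each noun is now summarized by 3 numbers
reconstruction = codes @ decoder
print("reconstruction error:", float(np.mean((reconstruction - X) ** 2)))
# The error is near zero: three numbers per noun preserve the whole matrix,
# because the condensed codes capture the correlations among the 20 features.
```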



The Knowledge Lab comprises an academically diverse group working on a range of projects on systems of knowledge. As the Director of this lab, Dr. Evans has three main aims for the direction of its research. First, the lab looks for rich new ways to understand knowledge through data. This involves data mining, and using and developing machine learning techniques to understand knowledge as a complex system. Second, the lab intends to represent knowledge, specifically biases and knowledge system dynamics, on a big scale. Third, the lab hopes to translate those unearthed biases and dynamics into statistical information to help A.I.s learn better. Dr. Evans thinks that "if we can't do better by taking into account these biases, we are not sure if we really understood them in the first place." In this way, the Knowledge Lab's work contributes not just to sociology, but also to technology.

The Knowledge Lab is currently wrapping up a study on how people with different opinions work together. Dr. Evans' team began their investigation by assessing the likelihood that certain people edit a political Wikipedia page. The lab used edit histories on political pages to build a model of political leanings. To do so, the lab used semantic embeddings, which are mappings from words to vectors, along with autoencoders. This model allowed the researchers to understand and predict how people's political leanings could cause them to edit pages and collaborate with others. Beyond analyzing edit histories to build this model, the lab also looked at the Wiki "talk page" discussions, and they are currently surveying and interviewing people about how they talk about issues and why they edit pages, to cap off the investigation. The Knowledge Lab has done several other studies uncovering various systems of knowledge, such as studying idea generation, representations of knowledge, and the influence of tradition on knowledge generation, all by using and improving machine learning methods.

In the future, Dr. Evans sees improvements in A.I. affecting his research by opening doors to new insights. He is particularly excited about how computer scientists are starting to apply old ideas from geometry in ways that help refine the unsupervised methods in machine learning. These developments will mean that researchers can more easily simplify large amounts of data and make sense of what it means, and he believes that "knowing the answer to that is so powerful and exciting right now," as there is potential for these insights to influence how the world makes decisions and shapes its future.

This use of machine learning techniques has created interesting intersections among fields in the social sciences, one of the many ways that technology has been blurring the lines across academia. According to Dr. Evans, his research in sociology lies somewhere between "data epistemology"—the use of large-scale data to make sense of how knowledge happens—"sociology of knowledge"—the study of human knowledge in a social context—and "knowledge studies"—the growth of knowledge in response to limitations. Dr. Evans finds this intersection fascinating, and sees it as a field of sociology that has the potential to influence the world's understanding of itself.

Dr. Evans also works with other humanists and social scientists on projects and is excited about the greater impact of machine learning in these areas. In addition, as the Director of the Computational Social Science program, Dr. Evans often encourages researchers to apply machine learning to their academic disciplines. He believes machine learning will allow these traditionally subjective fields to become more quantitative. For instance, he explained, when analyzing a novel, a researcher in the humanities can now easily use permutation methods to provide different interpretations of the words present in the novel, which is unlike any of the traditional analytical techniques. Dr. Evans believes that this different way of approaching the social sciences and humanities is bringing research to new and exciting places. ■

Neha Lingareddy is a second-year double majoring in Computer Science and Mathematics, with a possible minor in Computational Neuroscience. She is interested in both biological and computational learning, and works in a theoretical neuroscience lab. On campus, she's involved with APO, Splash! and her house. She likes spending her free time listening to podcasts (Hidden Brain, anyone?), and visiting the many museums and restaurants of Chicago.

References

1. The University of Chicago Knowledge Lab. "About." https://www.knowledgelab.org/about/
2. Jordan, Jeremy. "Introduction to Autoencoders." 19 March 2018. www.jeremyjordan.me/autoencoders/



SORRY, WEBMD
Artificial Intelligence and Medicine

Ellie L. Frank

Going to the doctor sucks. Long waits, inconvenient appointment times, and short-staffed medical facilities are only a few of the constant plagues of the healthcare system. Patients might be better served if they could type their symptoms into an app and immediately get a diagnosis—which WebMD has tried and failed to do (no, not everything is cancer). Many areas of the world suffer from a lack of medical professionals and continually struggle to supply the growing demand for care. Medicine must adapt and innovate in order to meet the mounting healthcare needs around the globe. Because of this sustained demand for innovation, artificial intelligence experts have viewed healthcare as a perfect field for the applications of their new technologies.

Credit: Radiology Business

A.I. and medicine have been intertwined since the early 1970s. Programmers at Stanford developed MYCIN, a program written in the programming language Lisp, to help doctors diagnose and decide treatment courses for infectious bacterial diseases.1 Like MYCIN, many of today's A.I. programs are geared towards diagnostics. Algorithms can analyze huge amounts of data in the form of health records and genetic information to compile diagnostic suggestions. Google Brain, Google's A.I. division, has a healthcare team which has successfully developed multiple algorithms that can perform visual diagnoses.7 One helps pathologists visually detect cancer from tissue samples, while another assists ophthalmologists in diagnosing diabetic eye disease.2, 3 These programs can analyze images and identify abnormalities to provide diagnoses with relatively high success rates. People are used to seeking a second opinion—imagine getting it from your laptop instead of another doctor!



However, Google Brain, and many other A.I. groups advancing into medicine, specifically say that their programs should supplement—not replace—medical professionals. The programs lack the experience and intuition of human physicians and are unable to diagnose medical issues outside of their programming or training.2 Even with MYCIN, the goal has been to help the doctor determine the disease, not to determine the disease without the doctor. A.I. has the potential to revolutionize the medical field, but you will not be going to a robot doctor instead of a human physician anytime soon.

Credit: MedInAction

A.I. began its involvement in healthcare with diagnostics and has since expanded its scope. Google Brain, along with the other major A.I. healthcare players, also focuses on health data—an important and rapidly expanding field. Specifically, Google Brain has partnered with many institutions, including University of Chicago Medicine, to create multiple A.I. programs. These programs use healthcare data to identify patterns, predict specifics such as the length of a patient's stay, and avoid potential problems so that patients can get the best possible care.6 Google Brain and Watson Health, IBM's healthcare branch, are both working on programs related to genomics, the study and mapping of the genome.4, 5 The fields of diagnostics, genomics, and healthcare data analytics—as well as other areas of medicine—are in the middle of an A.I. revolution. The recent developments in medicine via A.I. have been astounding, and new innovations are researched and developed every day.

A.I.'s goal is to mimic human intelligence, and many types of A.I. mimic the human brain in their function. What if that same A.I. were applied to the brain itself? The technology for developing artificial limbs has improved by leaps and bounds, but even if the robotics of the limb are perfect, without a method for control, the limb will not function. New programs are in development to create brain-computer interfaces that allow machines to interact with signals from the brain and transmit that information to prosthetic limbs.7 With the assistance of A.I., these complicated brain signals may one day maneuver artificial limbs and help patients regain movement otherwise lost to them.

A.I. programs can scrutinize photographic results in greater detail than the human eye and have the potential to change diagnostics altogether. Biopsies are a common diagnostic tool for physicians, but what if the tissue that needs to be examined is impossible to remove? For example, many cancerous tumors, such as those in the brain, are hard to biopsy. Furthermore, the small piece of tissue removed may not correctly reflect the pathology of the whole tumor. A.I. programs can provide answers where biopsies cannot by using detailed analysis of radiological images, such as those from MRIs or PET scans, that is beyond the ability of the human eye.7 For example, Google is working on a program that can diagnose diabetic retinopathy with the same accuracy as an ophthalmologist.3 Their team is hoping to improve the program until it surpasses the doctors themselves. The human eye, though amazingly comprehensive, cannot see the same amount of detail as a computer program scanning a photo pixel by pixel.
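As a cartoon of what "scanning a photo pixel by pixel" can mean, the sketch below compares each pixel of a synthetic "scan" against the pixel-wise statistics of a stack of reference scans and flags outliers. This is only a toy on invented data; systems like Google's retinopathy model are deep neural networks trained on large sets of labeled clinical images, not simple statistical thresholds.

```python
import numpy as np

# A toy illustration of pixel-by-pixel image analysis, not how clinical A.I.
# systems actually work. Synthetic "scans" stand in for real data: flag pixels
# that deviate sharply from the pixel-wise statistics of reference scans.

rng = np.random.default_rng(1)
reference_scans = rng.normal(loc=0.5, scale=0.05, size=(50, 64, 64))  # 50 "normal" scans
new_scan = rng.normal(loc=0.5, scale=0.05, size=(64, 64))
new_scan[20:24, 30:34] += 0.4          # a small, bright "lesion" invented for the demo

pixel_mean = reference_scans.mean(axis=0)
pixel_std = reference_scans.std(axis=0) + 1e-8
z_scores = np.abs(new_scan - pixel_mean) / pixel_std   # how unusual is each pixel?

suspicious = z_scores > 4.0
print("suspicious pixels flagged:", int(suspicious.sum()))
print("rows containing flags:", sorted(set(np.where(suspicious)[0].tolist())))
```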



TV shows often portray hospital rooms with multiple beeping machines hooked up to patients. Today, the dreaded mechanical "beeeeeeep" marks the end of a life. Machines monitor patients or administer therapies and signal healthcare professionals when an issue arises. What if the machine could take care of the issue itself? Besides lowering rates of false alarms and connecting the multiple machine interfaces for easier viewing, A.I.-based medical machines may be able to address a medical problem as well as signal for help. For example, if a ventilator knows when its rhythm is off or senses a pathology, it can diagnose and correct the problem itself.7 If the machine knows a patient's medical details—age, height, weight, illness, etc.—and it has access to the normal breathing pattern of the patient, then based on this information, the machine can recognize abnormal breathing rates and self-correct by either decreasing or increasing its output. A.I. can help create an environment of integrated, self-correcting machinery and holds the potential to radically change the way hospitals currently run. A.I.-based medical machinery is a rich field for future research and development.

New applications for current and developing A.I. programs continue to arise. A.I. programs could be a crucial tool in areas with a shortage of medical professionals.7 Even in places with a sufficient number of health professionals, the cost of healthcare deters many from seeking medical help. A.I. innovations, which enable streamlined healthcare and lower costs, could allow a larger population of those in need to access necessary medical attention. Ultimately, A.I. research applied to medicine and healthcare can provide support to a field facing new and more complicated challenges every day. ■

Ellie L. Frank is a third-year Pre-Med at the University of Chicago majoring in English. She enjoys brevity.

References

1. Mathew, R. J. Notes on MYCIN. Stanford Libraries. https://exhibits.stanford.edu/feigenbaum/catalog/nt215ps9486 (1977)
2. Stumpe, Martin. et al. Assisting Pathologists in Detecting Cancer with Deep Learning. Google A.I. Blog. https://ai.googleblog.com/2017/03/assisting-pathologists-in-detecting.html (2017)
3. Gulshan, Varun. et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. Journal of the American Medical Association. https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45732.pdf (2016)
4. DePristo, Mark. et al. DeepVariant: Highly Accurate Genomes with Deep Neural Networks. Google A.I. Blog. https://ai.googleblog.com/2017/12/deepvariant-highly-accurate-genomes.html (2017)
5. IBM Watson for Genomics. IBM. https://www.ibm.com/us-en/marketplace/watson-for-genomics (Accessed 2018)
6. Chou, Katherine. Partnering on Machine Learning in Healthcare. The Keyword. https://blog.google/technology/ai/partnering-machine-learning-healthcare/ (2017)
7. Klibanski, Anne. et al. Disruptive Dozen. 2018 World Medical Innovation Forum. https://worldmedicalinnovation.org/wp-content/uploads/2018/11/Partners-FORUM-2018-BROCHURE-D12-AI-180530_1102-FREV2-FOR-WEB-X3-SPREADS.pdf (2018)



INQUIRY: DR. MARC DOWNIE Department of Cinema and Media Studies

Charlotte Soehner

Still from After Ghostcatching (2010). Credit: Film Studies Center of The University of Chicago

Like all fields in the 21st century, art has evolved alongside technology. Artists

rely more and more on digital images, laser beams, machines, data, and, most notably, artificial intelligence. Dr. Marc Downie, a lecturer in the University of Chicago’s Department of Cinema and Media Studies, is one of these digital artists who uses hardware, computations, and algorithms to create art. He produces pieces with the use and assistance of artificial intelligence. Dr. Downie grew up during the optimistic first wave of new media, when digital innovation had just been introduced to the art scene. He contributed to the early efforts of artificial intelligence researchers, earning his PhD from MIT. A self-described “official card-carrying petitioner of A.I.,” Downie has been making art on computers for a long time. Untrained in the traditional art forms of drawing and painting, Dr. Downie instead writes code and employs computational techniques to create his artwork. He coaxes computers into reimagining originally human phenomena, such as dance and music. Much of this work involves “tricking” computers into producing images. His art engages with what happens when a computational idea meets a real idea, how that encounter is staged, and what visual product is formulated by the computer’s algorithms. By writing his own specific code, the artist entrusts the computer with producing an image that emulates his own thought process, without being exactly sure what will appear. Dr. Downie was initially drawn to A.I. in its nascent years. At that time, computers were less powerful and the artist had more decision power in what images A.I.-driven art produced. With the recent development of deep learning, computers have gained



an incomprehensible amount of agency. As Dr. Downie stressed, there now lies an almost worrying amount of incertitude in what the computer will create on its own. Technological autonomy defines artificial intelligence: once a connection is established between an algorithm and the dance, the data set, or the ancient artifact, there is a sense that the machine will simply take over. In art, this agency can grow dangerous because the maker’s initial ideas could be completely reshaped by the computer’s own mechanized production. In this threat lies an exciting prospect: artificial intelligence contributing to art offers the viewer the perspective of the artist alongside that of the A.I. To ensure a measured amount of control, Dr. Downie pays extremely careful attention to his code. Dr. Downie uses very little deep learning, a type of artificial intelligence heavily reliant on data and useful for fields such as computer vision, speech and audio recognition, social network filtering, and bioinformatics. Deep learning is currently extremely profitable in the tech world and is essential to organizations such as Facebook. Since digital art is rarely so data-centric, Dr. Downie has yet to find use for deep learning. One of his challenges is to discover how new techniques in A.I. could further his work.

Still from Loops (2001-11). Credit: OpenEndedGroup.com

Another challenge for older digital artists is reconciling with an entire community of computer vision work that is being erased by deep learning. Intellectual traditions of more antiquated artificial intelligence have been relegated to history within Dr. Downie’s lifetime, as recent A.I. developments are rooted in more data-hungry strategies used for facial recognition or for GPS systems. These new developments often overlook culturally significant images, such as film or street photography. Downie is working in an age where computers can visualize just about anything, but cannot access archives of artistic images. Most of Dr. Downie’s pieces are commissioned, ranging from public works to projects centered on choreographed dances or films. His methodology is diverse, including photogrammetric reconstructions of real spaces, motion capture video, and non-photorealistic 3D rendering. Most of his pieces are collaborative and produced by the collective OpenEndedGroup, composed of Dr. Downie and fellow artist Paul Kaiser. In describing his 2001 work Loops—his first work that was “worth saving from a fire”—Dr. Downie emphasized how the deployment of artificial intelligence can solve artistic struggles and create a product that seems possible only in one’s imagination. Loops is based on motion capture performance, using data sets that



coalesce into an ornate display of lights and sound. Merce Cunningham’s choreography generates the imagery, while his own voice is synthesized into a piano piece by composer John Cage through autonomous musical intelligence. Downie enjoyed using material that surprised him as an artist while also being relatively controllable. This duality made the space for something delightfully unexpected to happen by the computer’s own choice.

While the integration of technology into art may seem new and potentially damaging to the profoundly human component of creation, Dr. Downie emphasizes that the use of modern technology fits well with the tradition of artists embracing doubt and newness. Historically, composers, painters, and choreographers have experimented with original, odd strategies to alter traditional forms of self-expression. Technology also has the potential to re-explore ancient pieces. Maenads and Satyrs was a piece created by OpenEndedGroup in 2015. A triptych of 3D imagery placed next to a Roman sarcophagus, it displays the carved figures in vibrant, dancing motions accompanied by a solo cello and electronics composition. What is considered old can be dragged through centuries and transformed into something relevant through artificial intelligence.

Still from Maenads and Satyrs (2015-18). Credit: Dr. Marc Downie

Dr. Marc Downie is neither an image maker nor a recreater, but instead a self-proclaimed image finder. He coaxes algorithms into expressing something that both makes sense to the human eye and draws it in with a new perspective. Much of Dr. Downie’s purpose in making art rests in the ultimate aesthetic impact, which comprises not only visual satisfaction, but also how profound the piece is and how it makes the audience feel. The success of a piece is independent of how complicated the technology behind it is, and the technological intricacies of Dr. Downie’s digital art are rarely exhibited in the foreground. Dr. Downie is more concerned with producing beautiful images than with showcasing his technological abilities. In the artist’s words, his work is about making the audience more aware of the compromises, trade-offs, and peculiarities of how their visual system approaches objects. Artificial intelligence redefines the role of the individual and restructures the meaning of art. Part of Dr. Downie’s brilliance lies in his overt attempt to create a meaningful relationship with the viewer. Dr. Downie’s work merges together the ancient and the modern, making both the old and the new relevant and engaging in our evolving technological world. ■ Charlotte Soehner is a second-year at the University of Chicago, intending to double major in Fundamentals and History. She is interested in the historical and social development of modern human rights norms, as well as political dissidence movements in East Asia. On campus, Charlotte is involved with the Institute of Politics, the French Club, and various civic engagement efforts.



THE QUANTUM MYSTIQUE An Uncertain Future of A.I. and Quantum Computing

Rory Frydman

Quantum computing will not lead to artificial intelligence.

Ongoing research into quantum computing is promising. But promising does not mean all-fulfilling. Among researchers, journalists, and especially popular science fanatics, there is a certain tendency to attach unrealistic hopes to emerging technologies. Quantum computing is perhaps the most susceptible to this optimistic blindness. Its very name lends itself to these vague dreams—no one understands what “quantum” means, no one really understands what computing is, and there is some rarefied intrigue about words that start with a Q (fun fact: Q is the second least used letter in the English language, and the third-least common letter used to start a word).1, 2 Dwelling in this land of nebulous possibility, some make the claim that quantum computers are limitless: they will be able to settle famously hard mathematical questions like the million-dollar P versus NP problem, solve world hunger through simulations of chemical interactions of fertilizers and soil, cure cancer by modelling how the human body would interact with new drugs, or create artificial intelligence. Unfortunately, no dice.

To begin, what even is a quantum computer? Or, even more fundamentally, what even is quantum physics? A definition of quantum physics is best given through a comparison to classical physics. In articles from the Wall Street Journal to Wired, there are myriad different explanations, analogies, and anecdotes that try to explain the differences in one simple sentence. All of them are just a little wrong. So, here’s another just-a-little-wrong explanation—classical physics deals with the way things observably are; quantum physics deals with the probability of observing things the way they are. Or, in simpler terms, classical is definite, quantum is probable.

Classical and quantum physics in coins. Credit: Intel

How does this difference manifest itself? Consider the following situation: a coin toss. Classically, if the initial conditions are set, then the way the coin lands is entirely predictable—definite. Of course, it would require a lot of information—the velocity of the coin, the force of gravity, the air flow in the room, the orbit of the Earth. Still, a classical model would use all of that information to predict how the coin would land. Quantum mechanically, even with the initial conditions known, the coin would have no predetermined outcome. The coin, as it is falling, would be both heads and tails until we measured it. There is no way to know what exact



state—heads or tails—it would be measured in. To understand why, here are two quick definitions:

Superposition (noun): the overlaying of multiple states at once (the coin is heads and tails).

Wavefunction collapse (noun): the collapse of multiple states into one single state.

As the coin is falling, it is in a superposition of the heads and tails states—not in either, but in both. With measurement, the coin’s superposition will collapse into one state or the other with a certain probability. In essence, the coin will have no determined state until its state is measured, and only once it is measured can its state be determined. Finally, another definition that requires another analogy:

Interference (noun): the reinforcement or weakening of certain states.

Imagine two pebbles being dropped into a pond. They make ripples. Once these ripples meet, there will be places in the water where the ripples are made larger (reinforced) and places where the ripples flatten each other out (weakened). This is interference. It might be a quantum leap to imagine that these ripples of water are instead probability functions, and that the bigger ripples and smaller ripples are actually increased and decreased probabilities, so let’s leave it there. Just remember this: the coin’s heads-or-tails probability can be changed by interference.

Ripples interfering. Credit: Chip Coffey

There is a final concept that is perhaps the most important to quantum computing, and perhaps the most difficult to understand. Even Albert Einstein, baffled, gave this concept the name “spooky action at a distance.” This all-important concept, entanglement, is the “unfactorability” of a system’s quantum state into the individual particles’ quantum states. When two particles are entangled, whatever you do to one affects the other. In the coin analogy, it would be as if you had two coins. Whenever the first came up heads, the second would have to come up tails, and whenever the first came up tails, the second would have to come up heads.

With that easy (right?) and fun (right!) understanding of quantum mechanics, let’s translate that coin analogy into something a little more formal for a discussion of quantum computing. The coin is a qubit—the quantum version of a computer bit. So, while classical computers process information using ones and zeros, quantum computers use qubits. Qubits, just like bits, are two-dimensional systems, taking on values of zero and one; however, due to superposition, qubits can take on combinations of zero and one. Furthermore, qubits can be entangled, meaning that a single computer operation can act on all qubits at once. This entanglement allows quantum computers to perform a bunch of computation with just one operation. Compare this to a classical computer, which, once you get down to it, is performing an operation on each bit in sequence.
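For readers who want the coin analogy in standard notation, here is a minimal worked sketch; the symbols are the conventional textbook ones, not anything introduced by the article.

% A qubit ("quantum coin") holds amplitudes, not certainties:
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
% Measurement collapses it: outcome 0 with probability |\alpha|^2, outcome 1 with probability |\beta|^2.
% A fair quantum coin has \alpha = \beta = 1/\sqrt{2}.
% The two anti-correlated coins of the analogy form an entangled pair, a joint state
% that cannot be factored into two separate one-coin states:
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|01\rangle + |10\rangle\bigr)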



Researchers have devoted decades to making it appear as if the operations on bits are happening simultaneously, but fundamentally they are sequential. Meanwhile, quantum computers can perform an operation on all their qubits at once. So, quantum computers with even a few qubits could give fast solutions to problems that classical computers would require innumerable normal bits to solve. By a clever application of interference, quantum computers can reinforce the correct solutions and weaken the incorrect ones.
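To make the coin-flip-with-interference idea concrete, here is a small classical simulation in Python using only NumPy. The variable names and the two-line "experiment" are our own illustration; nothing here is a real quantum computer.

import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # a definite "heads" state, |0>
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # the quantum "coin flip"

def outcome_probabilities(state):
    # The chance of measuring 0 or 1 is the squared magnitude of each amplitude.
    return np.abs(state) ** 2

once = hadamard @ ket0                 # superposition: (|0> + |1>) / sqrt(2)
print(outcome_probabilities(once))     # [0.5 0.5] -- a fair quantum coin

twice = hadamard @ once                # "flip" the already-superposed coin again
print(outcome_probabilities(twice))    # [1. 0.] -- the |1> amplitude cancels out

A classical coin flipped twice is still fifty-fifty; here the second flip steers the amplitudes so that the unwanted outcome cancels, which is the same trick quantum algorithms use to reinforce correct answers and weaken incorrect ones.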

In theory, that is. Quantum computers are difficult to realize. There are already working quantum computers that utilize quantum mechanics in their design. However, these computers are no better than your average laptop. To reach the so-called “quantum advantage,” the point at which quantum computers outperform classical ones, would require many more qubits than are currently feasible. One of the main problems that quantum computers face in increasing their number of qubits is maintaining qubits’ “quantumness.” Qubits are prone to losing their quantumness. This is because qubits must not interact with the outside world to work properly. Even a slight breeze can cause a qubit to become nothing more than a boring particle or circuit. At low enough temperatures and with few enough qubits, this loss of quantumness is somewhat preventable. However, with many qubits, it becomes nearly impossible to maintain the environment required for attaining a quantum advantage (think of the relative ease of babysitting one kid versus the nightmare of caring for thousands at once). Addressing the limitations of many-qubit systems is at the forefront of current research efforts in quantum computing, so quantum computing enthusiasts will have to wait and see whether this barrier can be broken.

Even ignoring the fact that current quantum computers are still exceeded in capability by some smart fridges, there is nothing that distinguishes quantum computers as especially useful for artificial intelligence. Perhaps the only task which experts are sure quantum computers will revolutionize is cryptography. To this end, quantum computers are especially suited—and countless mathematicians, physicists, and computer scientists have said as much.3 Certainly, quantum computers might lead to better artificial intelligence through more efficient machine learning algorithms. Machine learning relies on using a bunch of data to teach an algorithm how to sort information (a mathematician’s understanding), and, since quantum computers are able to so efficiently compute huge amounts of data, they seem especially suited for one another. However, in terms of artificial intelligence, classical computing may be just as good as quantum computing. It may well be that intelligence itself operates classically.

To see why, look inwards. Transitioning away from the artificial to the natural, the human brain is still not completely understood. Neurons, neurotransmitters, action potentials, gray matter, white matter—all of these build a working picture of how the brain works. Yet, despite the model already being on the order of cells and individual molecules,



this understanding is still relatively big picture. Certain scientists have pondered the brain’s even more foundational workings.4 At scales smaller than single molecules, a study of the brain lends itself to the field of physics. And, physically speaking, there are two possible models: classical or quantum. The very way human synapses fire relies on the physics of their constituent particles: ions and electrons. Those ubiquitous messengers, which mediate every chemical interaction, are fundamentally governed by the laws of quantum physics. That means these particles are occasionally found in superpositions, which can collapse unpredictably into certain states. This unpredictability guarantees an element of non-determinism in the way human brains work. Perhaps this non-deterministic model is the basis of true intelligence?

On the other hand, it must be said that everything—every breeze, every ray of light, every glass of water—is composed of particles that rely on quantum mechanics to describe their behavior. The fact of the matter is that most of these things still act classically. This is a fact of physics: as quantum situations approach classical limits (i.e., as things get bigger), the quantum models must approximate classical models. This is known as the correspondence principle. A brain (~1.35 × 10²⁶ atoms) is many times larger than a single particle, and so reasonably it should approximate a classical system. Why, then, would a brain be better described by a quantum model than a classical one? If the brain is classical, then it is deterministic. And what then of humanity’s free will? That is, if every stimulus is accounted for, and the brain is classical, then the output of the brain can be determined at every moment from then on. To some, this determinism might be a soothing thought. For others, it might be even more of a reason to believe (or hope) that the brain is, in fact, quantum.
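As a rough sanity check on that atom count (our own back-of-the-envelope estimate, treating the brain as roughly 1.4 kg of mostly water):

N \approx \frac{1.4\ \text{kg}}{0.018\ \text{kg/mol}} \times 3\ \frac{\text{atoms}}{\text{molecule}} \times 6.0 \times 10^{23}\ \text{mol}^{-1} \approx 1.4 \times 10^{26}\ \text{atoms}

That is some twenty-five orders of magnitude more particles than a single electron, which is why the correspondence-principle argument points toward a classical brain.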

It is hard to speak with authority on the quantum-classical brain debate without further research. Scientists do, however, have speculations. Henry Stapp and Eugene Wigner, both prominent physicists, believed that human intelligence could be attributed to quantum mechanics.5, 6 Max Tegmark, a cosmologist, believes that the brain is definitely classical.4 For now, despite these interesting ideas, the consensus leans toward a classical model for the brain (sorry, free will).

Credit: Daniel Miessler

Does artificial intelligence have to be modelled on the brain? Of course not! It may be the very difference that will one day give the artificial the advantage over the natural. Pierre-Simon Laplace, an 18th-century French scholar, said that “an intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed... for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”7 The human brain does intelligence pretty well by using approximations (like analogies!). However, the brain has a teensy little bug in its programming—too much information given too quickly and the human brain simply cannot handle



it. Similarly, too much information given to a classical computer might crash the computer. So, an intelligence that can process a bunch of information at once would probably be slightly more suited to Laplace’s pipe dream. Laplace even acknowledges this, writing that only “if this intellect were also vast enough to submit these data to analysis” would nothing be uncertain. A quantum computer, as discussed, is especially suited to processing a lot of information at once. Imagine an intelligence so great that any question about the past, present, or future could be answered at our beck and call—such an omniscience would be God-like. Move aside, Siri. Unfortunately, due to limitations with qubit numbers and the current understanding of quantum algorithms, such a God-like intelligence is probably less science than it is fiction. Don’t discount the idea completely though: even if this superintelligence is unobtainable, the fantasy of such outlandish technologies may just motivate the next generation’s greatest thinkers to study quantum computing. Intelligence may beget intelligence. ■ Rory Frydman is a third-year at the University of Chicago majoring in Mathematics, and is involved in a quantum communications lab on campus. When he is not aiding in the research of new technologies, he can be found having existential crises over quantum immortality (don't ask) or eating at fancy Italian restaurants downtown—there is really no in-between. Occasionally, he can also be found singing in CMAC: The University of Chicago Glee Club or Chicago Chorale.

References

1. Letter frequencies. https://www3.nd.edu/~busiforc/handouts/cryptography/letterfrequencies.html
2. Letter frequency. http://letterfrequency.org/
3. Aaronson, S. The Limits of Quantum Computers. Scientific American. http://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf (2008)
4. Tegmark, M. The importance of quantum decoherence in brain processes. Physical Review E. https://arxiv.org/abs/quant-ph/9907009 (2000)
5. Stapp, H. Quantum Mechanics and Human Consciousness. Nour Foundation. https://www.youtube.com/watch?v=ZYPjXz1MVv0
6. Pradhan, R. K. Psychophysical Interpretation of Quantum Theory. NeuroQuantology. https://arxiv.org/pdf/1206.6095.pdf (2012)
7. Laplace, P. S. A Philosophical Essay on Probabilities. Translated into English from the original French 6th ed. by Truscott, F. W. and Emory, F. L. Dover Publications. (1951)



INQUIRY: RAYID GHANI Departments of Computer Science and Public Policy

Yueran Qi

Credit: SIGIR

The young Rayid Ghani loved two things: exploring computer systems and improving the world around him. In college, Professor Ghani enjoyed the intellectual challenge of building A.I. in computer science classes. He utilized his time outside of class to do volunteer work. After graduating from college, he joined Accenture Labs, where he applied computer science techniques to problems arising in industries including retail and manufacturing. Once again, he spent much time outside the office volunteering at non-profit organizations. He wished to integrate his two interests so that the systems he built would have the same positive impact on the world as his volunteering. President Obama’s 2012 campaign proved to be an opportunity for just that. As the Chief Scientist for “Obama for America 2012,” he used A.I. to help the campaign target voters and accumulate funds.



“The A.I. itself doesn’t raise funds or mobilize voters,” Professor Ghani clarifies. “That was the human’s work.” There is, however, one thing the A.I. specializes in—spotting patterns. Professor Ghani’s work helped the campaign to connect with people who could be potential fundraisers, donors, volunteers, or voters. Using voter registration data, including but not limited to age, region, preferred social media platforms, voting history, and previous donations, the A.I. could identify trends in voters’ past behaviors, as well as the similarities that ran in certain groups. For example, if a voter had voted for the same party in past elections, the A.I. could predict whether this voter might be a potential volunteer or donor. If someone had subscribed to mailing lists promoting multiple candidates, the A.I. would identify email as a platform to post advertisements. In these ways, the A.I. was able to connect Obama’s team to possible supporters.

Currently, Professor Ghani works as Director of the Center for Data Science and Public Policy at the University of Chicago. He is a Research Associate Professor in the Department of Computer Science and a Senior Fellow at the Harris School of Public Policy. In his work on campus, he is able to bring his two favorite fields together. This marriage is in large part thanks to his experience working on President Obama’s 2012 campaign. In that capacity, he has worked with government agencies and non-profit organizations and built A.I. to tackle their problems. The A.I. in Professor Ghani’s current projects runs on pattern-recognition mechanisms similar to those he used in the campaign. Now he uses the technology to predict which children may be at risk of lead poisoning, detect potential high school dropouts, and identify violent police officers. His project to prevent lead poisoning utilizes an A.I. to spot the differences and similarities in two groups of children: victims of lead poisoning and children who had not been poisoned. The A.I. can find shared traits in the victims’ environments, as well as the differences between the surroundings of the victims and of the children who did not suffer from lead poisoning. It then points out the factors that may have caused lead poisoning and directs engineers to neighborhoods where such factors may be prevalent.
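To give a sense of what this kind of pattern-spotting looks like in code, here is a minimal sketch in Python with scikit-learn. The features, numbers, and labels are invented for illustration; they are not Professor Ghani's actual models or data.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one home: [building age in years, prior code violations, child under six present]
features = np.array([
    [95, 3, 1],
    [80, 2, 1],
    [10, 0, 1],
    [15, 0, 0],
    [70, 1, 1],
    [5,  0, 0],
])
# 1 = a lead-poisoning case was recorded for that home, 0 = no case (toy labels)
had_case = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(features, had_case)

# Rank new homes by predicted risk so limited inspection resources go to the riskiest first
new_homes = np.array([[90, 2, 1], [12, 0, 1]])
print(model.predict_proba(new_homes)[:, 1])

The same recipe, historical records in and a ranked list of likely positives out, is what sat behind the campaign's donor and volunteer predictions as well.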


“People can do that too,” says Professor Ghani, “but people can only do that with limited amounts of data, and they do that based on their experience.” People’s experiences may introduce bias to the output of a system. Although a computer can never be prejudiced on its own, humans label and input data, which may induce the system to find a biased pattern. If, for example, a scientist creates an A.I. designed to label pictures of cups or pens as “cup” or “pen,” they will need to input pictures already labeled as “cup” or “pen” so that the A.I., when receiving a new picture, can label it correctly based on its similarities to previously inputted pictures. Then, if the scientist considers large cups as bowls instead of cups and hence labels them incorrectly, the A.I. will extract the pattern and continue to label large cups as bowls. Though the A.I.’s labels reflect the data, the bias in the data still contributes to bias in the prediction from the system.
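One simple way to surface that kind of inherited bias is to compare a model's error rates group by group, which is the sort of check the tool described next automates. The sketch below is our own illustration in Python; it is not Aequitas's actual interface.

import numpy as np

# Toy audit: the model's predictions, the real outcomes, and a group label for each record.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    members = group == g
    error_rate = np.mean(y_pred[members] != y_true[members])
    print(g, error_rate)  # group A: 0.0, group B: 1.0 -- a disparity worth flagging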



To counter such human-produced prejudice, Professor Ghani created Aequitas, a tool that helps A.I. designers locate bias in their systems. Aequitas compares the predictions a system has made with the real outcomes. It then calculates the percentages of errors the system has made when predicting for different groups and warns its user if errors occur more within one specific group. Designers, upon receiving the warning, can adjust their system so that the error rates of different groups are balanced.

When discussing these projects, Professor Ghani makes it clear that A.I. in public policy can never be separated from human effort. When asked if A.I. may, at some point in the future, start to make decisions concerning public policy on its own, Professor Ghani replies that such a future remains far away. “Computers today solve very specific problems,” he explains. Hence, they free humans from decision making in industries where a vast number of small decisions must be made very quickly. Google and Amazon, for instance, rely on artificial intelligence to personalize advertisements to users. Professor Ghani believes that “the needs of society are: what can reduce homelessness, or how can we improve healthcare for people, or which students need extra help to graduate from high school. Those are the things a computer cannot automate.” Solving such problems relies on our decision to make an impact, our will to help people, and our resources and supplies. The A.I. may help us make better use of our energies and resources, but our will and power to solve social problems must be present before the A.I.

Credit: Alex Nabaum, ScienceNews

Professor Ghani is working to make A.I. more accessible to organizations and government agencies committed to solving social problems. He has spent the last few months working on Solveforgood.org, an online platform connecting organizations wishing to solve social problems, and computer scientists wishing to gear their systems towards social good. NGOs and governments in the U.S. will post projects on the website, attracting programmers. Programmers will then share their systems and collaborate with one another. Right now, the website has completed its testing phase. It will soon become a bridge between computer scientists and organizations aimed at social good.

In his previous and ongoing projects, Professor Ghani has united computer science and public policy, systems and people, A.I. and human effort. He has created systems to help solve social problems, is building communication between the fields of computer science and public policy, and will continue to pursue the impact he has hoped for by introducing computer scientists to projects for social good, and organizations to the power of artificial intelligence. ■ Yueran Qi is a second-year at the University of Chicago, majoring in Creative Writing. Her academic interests lie at the intersection between science and art, especially in how creativity moves both fields. On campus, Yueran is a member of the Archery Club and occasionally involves herself with performances of University Theater. In her free time, Yueran enjoys writing science fiction, sketching figures, and contributing to a video game made by her and her friends.



THE FUTURE OF WARFARE Making Military Policy with A.I. in Mind


Charlotte Pierce Scott

Science matters in politics, and politics matters in science. As the list of advances in A.I. grows every day, anyone thinking about public policy would be naive not to think about what A.I. could mean for everything from economic regulation to foreign policy.

In this article, I will examine one of the applications of A.I. with the highest stakes: the U.S. military.1 Between putting the lives of soldiers and civilians on the line and placing a huge burden on the U.S. taxpayer, military decisions, especially regarding A.I., have enormous consequences and thus deserve careful thought and consideration.

The idea of using software in training programs is hardly new to the military. However, because classical, or frozen, software is unable to evolve, it eventually stops challenging the learner. For example, an online poker game could teach you how to play poker, but eventually you would learn all the strategies the software uses and beat the game. Because this software is “frozen,” it will not evolve as you do. In military training, this distinction between frozen and adaptive software matters immensely.2

Since the 1950s, the Department of Defense has invested in a variety of computer-based training methods, especially simulation technology.1 Simulation technology plays a crucial role in making military training more effective and efficient. Ultimately, though, it has the same problem as the online poker game—simulation technology can only provide what was directly coded into it during its initial development. If A.I. were used in creating these simulation technologies, the technology would be able to adapt and evolve with respect to the user. Imagine if an adaptable simulation could be used to train fighter pilots. The more adaptable the simulation, the better prepared a pilot could be for an unpredictable combat situation. Also, because an A.I. can evolve on its own, it would not have to be replaced at the same rate as frozen software based on either technological or military advances, which could ultimately help decrease some portion of military spending on technology.3
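To see the frozen-versus-adaptive distinction in miniature, here is a toy sketch in Python: a fixed opponent versus one that counts the trainee's habits and counters them. The game and the strategy are invented purely for illustration.

import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def frozen_opponent(history):
    # Plays the same fixed strategy forever; a trainee eventually learns to exploit it.
    return "rock"

def adaptive_opponent(history):
    # Counters the trainee's most frequent move so far, so it keeps pace as the trainee changes.
    if not history:
        return random.choice(list(BEATS))
    most_common_move = Counter(history).most_common(1)[0][0]
    return BEATS[most_common_move]

trainee_moves = ["rock", "rock", "paper", "paper", "paper"]
print(frozen_opponent(trainee_moves))    # always "rock"
print(adaptive_opponent(trainee_moves))  # "scissors" -- it has adjusted to the paper habit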

Credit: Staff Sgt. Alexandre Montes


Of course, there are also negative implications for the integration of A.I. into military training. Because the military’s current verification and validation process for technology is set up for frozen software rather than A.I., the switchover to an entirely new system would likely be time-consuming and costly, cancelling out the “efficient and cost effective” part of the pros column, even if in the long term those pros would still hold up.


From both a policy-based and a scientific perspective, the pros and cons listed above are fairly easy to imagine, and they could be realized in the near future. It’s the longer-term, less specific implications that are harder to grasp. Yet these effects are crucial to imagining the future of foreign policy.

War is arguably the bedrock of foreign policy. Through war, different groups have hashed out territorial, religious, and ideological disputes for most of human history. And perhaps most importantly, it’s the drastic, tragic, and fatal last resort that most other foreign policy strategies work desperately to avoid. This deterrence is crucial. War has a huge cost: human lives. I believe the magnitude of this cost is what keeps nations from going to war constantly. The pervasive “never again” mindset created the United Nations, the largest foreign policy body in history dedicated to global peace. Unfortunately, we know that the “never again” mindset never really holds up; there has always been another war. But I do think that this mindset is mobilizing in terms of foreign policy. It may have saved the U.S. from hundreds of conflicts that were circumvented by a desire to spare human lives.

A.I.’s integration into the military complicates this already unstable equation. What are the stakes of war if we can wage it with technology rather than humans? Will we as a society become more accepting of violence when it costs fewer of our own? Will we jump more quickly to violent conflict resolution rather than bureaucratic and painstakingly slow international policy if we think the stakes are low? What decisions will other countries make if they have access to the same technology? These questions have already been raised by the use of military technology in the wars in Iraq and Afghanistan. They will grow more relevant as more sophisticated technology comes into play.

Credit: Humanizer

Depending on the degree of technological advancement, would we be willing to let A.I. essentially make foreign policy decisions for us? The debate between trying to create an objective standard free of human error versus situation-by-situation discretion is a tale as old as time in public policy. However, considering that A.I. actually could have the ability to make choices on its own, this debate becomes even more relevant. Is it ethical to let a machine determine its own target rather than letting a human choose? I’m not sure anyone knows the answer.2

The consequences I’ve outlined in this article are just the beginning of the potential ramifications of A.I.’s integration into the military. For example, questions of whether A.I.s should have their own set of rights could massively complicate this new mindset. A.I. is rapidly evolving, and its consequences for the military and foreign policy will be enormous. We all should keep these potentials in mind as we evaluate legislators’ decisions on foreign policy and tech company regulation. However far-fetched they may seem, I believe these consequences are far closer than we think. ■

Charlotte Scott is a third-year public policy major from Washington, D.C. She is involved in Peer Health Exchange and the Women’s Ensemble, and works in a sociology lab on campus.

References

1. Fletcher, J. D. “Education and Training Technology in the Military.” Science. http://science.sciencemag.org/content/323/5910/72 (2009)
2. Cummings, M. L. “Artificial Intelligence and the Future of Warfare.” International Security Department and US and the Americas Programme. https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf (2017)
3. Hambling, D. “Why the U.S. Is Backing Killer Robots.” Popular Mechanics. https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/ (2018)



THE FUTURE OF WARFARE A.I. as an Existential Threat


Joshua O'Neil

My father and his elementary schoolmates were drilled to hide under their desks and pray in preparation for nuclear war. Crouching under flimsy wooden desks, his generation faced an existential threat. Today, humanity confronts a new era of warfare: the era of A.I. The critical question humanity now faces is not whether to start a global A.I. arms race, but how to prevent one from starting.

Big tech has already begun incorporating A.I. into our day-to-day lives, sometimes without us even noticing. Rapid research advances have been made in artificial intelligence in recent years, especially within the field of machine learning, which involves teaching computers to recognize complex or subtle patterns in large quantities of data. A.I.-inspired products like Alexa and Google Home are already in our homes, answering all our questions and managing our schedules. Self-driving cars have finally gone from myth to reality, blending in with human drivers on the road. Algorithms continue to beat us at chess and every other game we throw at them. Despite some hiccups throughout the course of their development, these technologies have already demonstrated many benefits for humanity. However, private companies and governments are actively researching the use of A.I. on the battlefield.1

1951 civil defense pamphlet, instructing on how to respond to a nuclear threat. Credit: The Oregon History Project

The words dystopian and A.I. have grown intertwined in the past decade. Discussion of A.I.’s dystopian hazards fluctuates between naivete and scare-mongering.2 Of course, your Tesla is not about to transmogrify into an unstoppable Decepticon from everyone's favorite Sunday morning cartoon, Transformers. Killer robots aren’t going to be at your door, but there are more likely dystopian outcomes; complacency is a deadly force. The idea that we don't need to think about these issues because humanity-threatening A.I. is decades or more away is mistaken and dangerous. The roadblocks humanity places on these emerging technologies will set the scale for destruction. Current technology allows a sergeant sitting in an air-conditioned military base outside of Tulsa, Oklahoma to control a fully armed drone soaring above Afghanistan. Emerging



technologies will make this sergeant obsolete. If you combine a small drone, a gun, a camera, and a facial recognition algorithm, these machines have the ability to fly over crowds, seeking particular faces and assassinating targets on sight. They quickly begin to outperform any human agent. Even more alarmingly, these drones may be able to swarm in vast flocks that communicate with each other to dynamically shift their strategy in the heat of combat. That's the dystopian world imagined by Stuart Russell, an A.I. expert and professor of computer science at UC Berkeley. Russell produced the short film Slaughterbots in 2017 depicting this emerging technology.3

Still from Slaughterbots (2017). Credit: Common Dreams

A.I. gives humans the ability to hear, see, adjust, and sense real-time strategies far better and faster than most humans can do on their own. In the timespan that one soldier could command a drone, an A.I. will be able to command squadrons of unmanned tanks, artillery, reconnaissance and supply vehicles.4 This threat led groups like The Future of Life Institute to write a letter, cosigned by public intellectuals such as Noam Chomsky, Elon Musk, Stephen Hawking, and over one hundred A.I. experts, to the United Nations last year, calling for a ban on autonomous weapons. This letter states that “lethal autonomous” tech is a “Pandora's box,” making it clear that this issue demands immediate attention.5 These 116 experts are calling for a complete ban on the use of A.I. in managing weaponry. Regrettably, both the U.S. and Russia have repeatedly blocked any attempt in the U.N. to legislate autonomous weapons research and scope of use.6

No country has been bold enough to declare that it plans to build fully autonomous weapons, but several major military powers have made it clear through their voting records that robotics and artificial intelligence are critical elements of their militaries' competitive strategies. The U.S. has begun flexing its military-industrial complex to fit the needs of commercial A.I., desiring to quickly integrate software-based solutions into the military. A Harvard Kennedy School study argues that A.I. has progressed to a point where it will ignite a new era of military technological advancements. “Even if all progress [in A.I. research] and development were to stop, we would still have five or 10 years of applied research.”7 The United States has made robotics and autonomy a centerpiece of its “Third Offset Strategy” to reinvigorate America's technological edge. China announced earlier this year that it will begin integrating A.I. into new military strategy, and most recently Russian President Vladimir Putin said that whoever leads in A.I. “will become the ruler of the world.”8 The parallels to the Cold War are painfully obvious.

Yes, A.I. is powerful, but it's not necessarily an evil seed bent on taking over the world. Don't panic just yet. There is undoubtedly a strong A.I. component of the global arms race, considering how central cyber warfare is in the new scheme of things. Today's A.I. has developed through incremental innovations stretching back to the early Cold War period. Researchers all over the world stay locked in close



rivalries in their efforts to endow machines with human-like powers of autonomy, self-awareness, and intelligence. Human ingenuity doesn't stop at national borders. If something especially clever and useful takes root in one nation, it's almost always going to be leaked, smuggled, stolen, copied, or independently invented elsewhere. So, it stands to reason that continued advances in A.I. will not give any single nation a strong military advantage over others. It's difficult to imagine one country launching some radically new A.I. superintelligence in a matter of weeks or months. A lopsided proliferation of militaristic A.I. tech between the world powers is even less of a concern.

As militaries incorporate these technologies, they will begin facing moral questions of how and when these weapons should be deployed. Autonomous robots will undoubtedly increase efficiency within the military. In World War One, trains allowed armaments and men to be quickly shipped to the front, leading to one of the bloodiest wars in history. Imagine the efficiency of an A.I. war: an A.I. is a soldier that never sleeps, improves from its mistakes, and is tailor-made for its task. Some of the questions militaries face are the same ones of safety and efficacy the civilian world deals with concerning A.I. Self-driving cars and autonomous weapons both face the moral dilemma of when it is justified to kill a person. Whether autonomous weapons will ever be allowed to fire guns or other weapons without express human orders has yet to be decided.

We can't stop the development of A.I., deep learning, the Internet of Things, or any other underlying technologies. But perhaps we can control their spread in weaponized contexts until the world community has a grasp of how to contain them. An international moratorium on further development of autonomous weapons—analogous to how we've controlled thermonuclear, chemical, and biological weapons—is precisely what the world needs now. ■

Joshua O'Neil is a second-year at the University of Chicago, majoring in physics and philosophy. On weekends he is an avid member of the Outdoors Club and the community outreach group Winning Words. Josh can be found in his free time writing sci-fi, performing stand up downtown, and trying to resurrect his dying houseplants.

References

1. Cummings, M. L. “Artificial Intelligence and the Future of Warfare.” Chatham House Royal Institute of International Affairs. https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf (2017)
2. Simonite, T. “A.I. Could Revolutionize War as Much as Nukes.” Wired. https://www.wired.com/story/ai-could-revolutionize-war-as-much-as-nukes/ (2017)
3. Garcia, D. “Governing Lethal Autonomous Weapons Systems.” Ethics and International Affairs. https://www.ethicsandinternationalaffairs.org/2017/governing-lethal-autonomous-weapon-systems/ (2017)
4. Singer, P. W. “In the Loop? Armed Robots and the Future of War.” Brookings Institution. https://www.brookings.edu/articles/in-the-loop-armed-robots-and-the-future-of-war/ (2009)
5. Tegmark, M. E. [An Open Letter] “Autonomous Weapons: An Open Letter From A.I. & Robotics Researchers.” Future of Life Institute. https://futureoflife.org/open-letter-autonomous-weapons/ (2015)
6. Scharre, P. “The Lethal Autonomous Weapons Governmental Meeting (Part I: Coping with Rapid Change).” Just Security. https://www.justsecurity.org/46889/lethal-autonomous-weapons-governmental-meeting-part-i-coping-rapid-technological-change/ (2017)
7. Allen, G. “Artificial Intelligence and National Security.” Harvard Kennedy School. https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf (2017)
8. Kania, E. B. “Battlefield Singularity.” Center for a New American Security. https://www.cnas.org/publications/reports/battlefield-singularity-artificial-intelligence-military-revolution-and-chinas-future-military-power (2017)
9. Roberts, C. “Killer Robots: Moral Concerns vs. Military Advantages.” Rand Corporation. https://www.rand.org/blog/2016/11/killer-robots-moral-concerns-vs-military-advantages.html (2016)



INQUIRY: DR. NICOLETTE BRUNER Stevanovich Institute on the Formation of Knowledge

Annabella Archacki

Credit: Nicolette Bruner


Dr. Nicolette Bruner pulls back, palms flat out in front of her, and eyes me curiously from across the table. “Have you ever watched Octonauts?”

At the time, I had not. I quickly rectified this fact post-interview.

For the uninitiated, the Octonauts are a team of undersea explorers led by Captain Barnacles the Bear. His British accent is so soothing it verges on hypnotic. Other members of the team include Kwazii the Kitten, Peso the Penguin, Professor Inkling the Octopus, and Tunip the Vegimal (half-vegetable, half-animal). The Octonauts can do anything. They teach preschoolers such as Dr. Bruner’s three-year-old son valuable lessons about problem-solving, teamwork, and marine ecosystems. They also serve as a generation of children’s first exposure to the concept of non-human personhood, the subject of Dr. Bruner’s research. The Octonauts represent an anthropomorphizing tendency fundamental to human nature. Children especially have a penchant for personifying the animal and the inanimate. They seek out non-human persons, or “thing people,” and treat them



with the same affection they would a pet or family member. On the other hand, Dr. Bruner explains, non-human persons can be “terrifyingly alien.” A shadow becomes a monster under the bed; a misshapen tree incarnates an evil spirit.

This anthropomorphic tendency does not disappear when we grow up. It takes new forms. For instance, Dr. Bruner notes the fervent cultural interest in artificial intelligence as a form of essentially recreating ourselves. “We seem to be drawn to these portrayals of a non-human that approximates a human entity,” she suggests. “The need to put a face on something seems to be a powerful one.” She opines that the quintessential text on A.I. and personhood is Philip K. Dick’s Do Androids Dream of Electric Sheep? Although she is also a fan of Blade Runner, the movie adaptation, the aspects of the story about which she is most interested in writing did not make the cut from page to screen. For instance, the film neglects to include Mercerism, a religion based on the ritual use of a machine designed to stimulate empathy. Details such as this are especially interesting given that, as Dr. Bruner claims, “The distinction between the android and the non-android is one of empathy.” She intends to include a chapter on A.I. in her forthcoming book.

The Octonauts. Credit: IMDb (image), Silvergate Media (characters)

Other non-human persons that tend to attract scholarly interest are legal entities. American law has described corporations as people since the Supreme Court extended Fourteenth Amendment protections to them in the 1880s. However, the addition of a legal dimension fails to sterilize the bogeyman-like quality of the non-human person. Describing corporations as people can produce an uncomfortable awareness of the entities’ overwhelming power measured as a function of the individual. The “corporate person” strikes many of us as uncanny and monstrous. It is a contorted version of ourselves, an analogy taken to a frightening extreme.

Moreover, labeling a corporation a person can feel agonizingly partisan. Some may recall the controversy surrounding Mitt Romney’s 2011 platitude that “corporations are people.”1 Then-DNC Chairwoman Debbie Wasserman Schultz called the comment “a shocking admission” indicative of “misplaced priorities.”1 Romney’s priorities aside, the language of corporate personhood may not be as conservative as Wasserman Schultz suggests. Initially, Dr. Bruner agreed with Wasserman Schultz’s moral disgust at the concept of corporate personhood. Over the course of law school, Dr. Bruner was surprised at how much her position changed. She explains that “if you see personhood as a set of rights and responsibilities instead of something that equates with ‘human’. . . it’s paradoxically actually quite liberating.” Legally granting corporations personhood rights can be a way of holding them accountable for their behavior, creating a more symbiotic relationship between legal bodies and the ecosystem of society.



Moreover, positive analogies involving personhood and political economy are not as partisan as one might imagine. Examples range from Marx’s collective worker to Smith’s invisible hand to Kantorowicz’s two bodies of the king.2, 3, 4 It seems that almost every form of legal entity has employed the metaphor of a supra-human person to serve its own ends. From Octonauts to statecraft, the drive to anthropomorphize reveals itself everywhere. Furthermore, the same logic that protects corporations as persons can be used to combat ecological catastrophe. Granting personhood rights to the real Peso the Penguin, as well as other endangered species, would obligate governments to protect them.5

As an inactive member of the bar with experience in environmental law, Dr. Bruner is uniquely equipped to understand this potential function of non-human personhood. Her interest began during her first year of law school at the University of Michigan, after a stint as a paralegal. “People say if you really love law school, [a career as a lawyer] is not supposed to be a very good fit for you,” she discloses. “But I loved law school.” While pursuing her J.D., Dr. Bruner worked in Quito, Ecuador on the Aguinda v. Chevron Texaco case litigating oil drilling in the Amazon. Due to its breadth, length, and damages, the case has been referred to as the world’s largest environmental lawsuit.6 Dr. Bruner describes her time in Quito as “frustrating, because I couldn’t do more.” One weekend, she visited the contaminated site in person. Her tone drags. “All I can say is I saw the damage, and the damage is bad.” Although the plaintiffs won, Chevron still refuses to pay.

However, law school also led Dr. Bruner to positive revelations. She connected with “the capacity that law has to take words and to make them real, to create a system that we imagine, sign on to, [and] hold ourselves by.” From there, the jump to studying law and literature was natural. She stayed at the University of Michigan to complete her Ph.D. in English. There, she benefited from the tutelage of scholars such as James Boyd White, one of the founders of the field of law and literature. One of her favorite books to write about is The Jungle by Upton Sinclair, although she “wouldn’t necessarily recommend it for pleasure reading.” When pressed, she says that her favorite book to read is Jane Austen’s Persuasion. In her spare time, she embroiders and spends time with her family.

Dr. Bruner visiting the “tree that owns itself” in Athens, Georgia in 2015. Credit: Nicolette Bruner



Dr. Bruner channels the elastic energy of someone with a lot of ideas at the very beginning of their career. Maybe part of it is the momentum that comes with being busy. She has had a long week, she explains; she spent the weekend in Athens, Georgia “visiting a tree.”

“It’s a tree that owns itself,” she says. According to lore, some time between 1820 and 1832, the tree’s former owner lovingly bequeathed it possession of itself and all the land within a radius of 8 feet around it. The tale seems to be a fiction that gradually became a reality, arguably like the mechanism of non-human personhood itself. Although a newspaper invented the story, the surrounding community treats the tree as if it really were the landowner. The original has since died, meaning the current tree is actually the son of the tree that owns itself. It (he?) has inherited the rights of ownership.

Dr. Bruner produces a plastic baggie filled with acorns. She spins a few out onto the table. The sons of the son of the tree that owns itself. Sometimes personifying the world around us is more than just human egoism, or a vestige of predator identification. Sometimes it is not violent or frightening. Sometimes it is just kind. ■ Native to Austin, Texas, Annabella Archacki is a third-year undergraduate at the University of Chicago, studying the History and Philosophy of Science. Her interests include sustainability, astrophysics, metaphors of embodiment, science fiction, and romance languages. She is a board member for the literary magazine Euphony Journal. In other capacities, she has worked as a web designer, waitress, Cyprus-based archaeological research assistant, steward of the Great Lakes, and assistant production designer for the film Love’s Labour’s Lost.

References


1. Rucker, P. “Mitt Romney says ‘corporations are people.’” The Washington Post. (2011)
2. Marx, K. Capital, Volume I. Penguin Classics, 468. (1976)
3. Smith, A. The Wealth of Nations. Modern Library. (2000)
4. Kantorowicz, E. H. The King’s Two Bodies: A Study in Mediaeval Political Theology. Princeton UP. (1957)
5. White, T. I. “The Ethical Implications of Dolphin Intelligence: Dolphins as Non-human Persons.” 2012 AAAS Annual Meeting. (2012)
6. Dhooge, L. J. “Aguinda v. Chevron Texaco: Discretionary Grounds for the Non-Recognition of Foreign Judgments for Environmental Injury in the United States.” Virginia Environmental Law Journal. (2010)



ARTIFICIAL MORALITY

Benjamin Lyo

A future with intelligent robots is inevitable. We only need to look to the recent advancements in the field of artificial intelligence (A.I.) to realize that their existence is no longer a question of if, but when. We now have self-driving cars, voice assistants in our phones and homes, and apps that can tell us what breed our dog is.1

That said, the past is not necessarily a good indicator of the future. A.I. researchers know this all too well, having historically endured long droughts in research funding.2 Nevertheless, the performance and scope of A.I. algorithms have exploded in the last decade. From IBM’s A.I. Jeopardy! victory in 2011 to Google DeepMind’s Go triumph in 2017, A.I. algorithms are learning to tackle environments that are increasingly open-ended and complex. Could these A.I.s learn to navigate the most complex game of all—real life? And if we were to succeed in realizing the holy grail of A.I. research, a level of sophistication known as “artificial general intelligence,” what would it even look like?

The idea of a human invention possessing cognitive abilities has existed since antiquity: the Ancient Greeks told the myth of the intelligent robot Talos, tasked with safeguarding Europa from would-be kidnappers, and the fourth-century Chinese text Liezi describes an automaton so lifelike that it fooled a king into thinking it was real. In recent centuries, this idea has taken on an increasingly technological cast, progressing in step with advances in human technology.3 A famous literary example is Mary Shelley’s classic Frankenstein, which can be understood as exploring the idea of an artificial being from the viewpoint of modern science. Written in 1818, the novel is subtitled “The Modern Prometheus,” hinting at the theme of empowerment through technology. Though the message of Frankenstein may be ageless, Shelley’s conception of Frankenstein’s monster is decidedly not.

The advent of the computer in the twentieth century changed our expectations of the robot’s physical form: steam-powered, mechanical beings evolved into electric, silicon ones. The computer revolution also brought significant changes to the role of technology in human society. Computers have become a household staple in the US, providing widespread access to a powerful technological tool. This transformation inspired many science fiction authors to view the automaton in a new social context. What if, like the computer, robots came to play a central part in the average Joe’s life? How long would it be before robots became integral to the fabric of society?

Vase depicting Talos, an intelligent robot of Ancient Greek myth. Credit: Wikimedia



In answer to these questions, enslavement is a common trope, brought to life by films such as 2001: A Space Odyssey, The Matrix, and The Terminator. Optimistic hypotheticals like Her and WALL-E are few and far between. When it comes to anticipating robots, the prevailing tone is one of apprehension. Perhaps this wariness stems from evil robots making for a better story, but the anxiety is also evident in current public discourse on A.I. policy. Prominent figures in industry and academia, including Elon Musk, Bill Gates, and Stephen Hawking, have voiced concern over a possible A.I. doomsday scenario. Consider a hypothetical A.I. that is able to teach itself at the level of a human. Through self-modification, it may learn to learn faster and, unhindered by biological constraints, may soon overtake human capability at an exponential rate. Mix in self-replication and an undefined sense of morality, and you have a recipe for human extinction.4 As Stephen Hawking summarizes, “success in creating A.I. would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”5

Of course, A.I. is not all doom and gloom. Intelligent robots could liberate us from menial labor and perform tasks considered too dangerous for humans. They could do the housework, assist the sick and elderly, and even facilitate space exploration. With the full automation of most industries, human society would be free to turn its attention to the arts and sciences. Culture could flourish alongside our understanding of the universe. Naturally, society would have to rethink its relationship with the concept of work. What would unemployment mean in a world with almost free and unlimited labor? The creation of jobs would no longer be exploited as political currency, and an individual’s worth in society would be divorced from their economic output.6 Already, the threat of automation across entire industries is pushing think tanks and smaller governments to consider previously ridiculed ideas, such as Universal Basic Income.7

The potential economic and military applications of A.I., together with the low resource threshold for conducting A.I. research, have dramatically intensified global competition.8 Every nation is determined to win what is perhaps the most important technological race in human history; doomsday predictions cannot stop the A.I. gold rush.9 With this seemingly unstoppable arms race in mind, we must decide what vision of the future we wish to strive toward. How should the intelligent robots of the future behave, and what can we do about it? In other words, what should the nature of artificial morality be, such that we benefit from, rather than suffer from, the existence of intelligent robots?

In this thought experiment, I will make two assumptions. First, I will use “intelligence” in a qualitative sense (a human is more intelligent than a dog, say), since we have no standardized means of quantifying intelligence. Second, I will assume that we have the power to restrict the behavior of the robot in any way we want.


With these assumptions in mind, let us examine how we might constrain robot behavior. Let us begin with Isaac Asimov’s “Three Laws of Robotics”:


(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In The Rest of the Robots, Asimov writes, “one of the stock plots of science fiction was... robots were created and destroyed their creator. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge?”10 His response to this trope was to write “my own story of a sympathetic and noble robot,” one with the best intentions toward its human masters.11

These laws, unfortunately, are not watertight. Many loopholes have been found since their publication in 1942. One of the most common involves tricking a robot into harming another human being. Other authors have tried to close these loopholes by constructing more comprehensive sets of rules.12 Asimov himself modified the wording and expanded the scope of his laws by appending a Zeroth Law, which reads: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” In Karl Schroeder’s Lockstep, a character mentions that robots “probably had multiple layers of programming to keep [them] from harming anybody. Not three laws, but twenty or thirty.”13 This raises the question: how many laws are enough?

Asimov’s The Rest of the Robots. Credit: Flickr

Regardless of the exact number, controlling an intelligent robot’s behavior by establishing rules can be self-defeating. Of course, rules have their place. For certain actions, like intentional homicide, hard-coded rules should certainly be implemented as a safeguard. Why not, then, for all other actions too? If we want to prioritize safety, it seems it would be in our best interests to have a fully deterministic robot whose behavior we can fully predict. But how would this be any different from a modern-day computer? The essence of an intelligent robot lies in its ability to think and make decisions for itself. A widely accepted definition of intelligence states that it “reflects a… capability for comprehending our surroundings—‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.”14 Thus, to be considered intelligent, a robot must necessarily have agency—the capacity to form its own decisions in novel environments. Restrict that ability, even in a limited manner, and you undermine the very premise of intelligence that makes intelligent robots what they are.

Within the framework of rules, there is a direct trade-off between human control and robot agency: increase one, and the other decreases. Would this still hold true, however, if we were to step outside of this framework? Imagine a robot that—without the use of action-limiting rules—could exist alongside us without posing a threat to our safety. The only way this could happen is if the robot naturally desires the things that we also desire—or, more specifically, if the robot’s natural disposition is to act altruistically toward us. This outcome would be the best of both worlds, but at the same time, it is also a much more difficult problem.
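To make the contrast concrete, here is a minimal, purely hypothetical sketch in Python. Nothing in it comes from a real robotics system; the action names, the scoring function, and the weights are invented for illustration. The point is only that a hard-coded rule removes options outright, while a hard-coded disposition ranks the remaining options by how much they benefit the people around the robot.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool           # would this action injure a person?
    community_wellbeing: float  # hypothetical estimate of benefit to others
    robot_effort: float         # hypothetical cost to the robot itself

# Framing 1: Asimov-style hard rules. Forbidden actions are simply removed.
def rule_filter(actions):
    return [a for a in actions if not a.harms_human]

# Framing 2: a hard-coded disposition. Nothing else is forbidden outright;
# the robot "wants" whatever most improves the well-being of those around it,
# minus a small penalty for its own effort (weights invented for this sketch).
def disposition_score(action):
    return action.community_wellbeing - 0.1 * action.robot_effort

def choose(actions):
    # Keep an absolute safeguard for the worst cases, then let the
    # disposition rank whatever remains.
    permitted = rule_filter(actions)
    return max(permitted, key=disposition_score) if permitted else None

if __name__ == "__main__":
    options = [
        Action("ignore the request", False, 0.0, 0.0),
        Action("help carry the groceries", False, 0.8, 0.3),
        Action("shove past a pedestrian to save time", True, 0.2, 0.1),
    ]
    print(choose(options).name)  # -> "help carry the groceries"
```

In this toy framing, the two approaches are complementary rather than competing: a handful of absolute prohibitions handle the worst cases, while the disposition does the everyday work of choosing kindly.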



To exhibit this kind of behavior, these robots must not only know what it means to be moral, but also choose—every time—to act in the least harmful way. How would we go about actually implementing such a mechanism?

Let us consider human morality as a starting point. Morality is not static: our standards for behavior change all the time, and what was considered appropriate centuries ago is no longer acceptable today. Nor is morality universal. People of the same culture, faith, nation, gender, age, or any other identifying subgroup may share certain fundamental ideas of what they consider “moral,” but each individual also forms their own combination of ideas. Our sense of morality is shaped by our upbringing as well as by the communities we engage with every day. In short, morality is too fluid to be reliably defined.

This fluidity puts us in a bit of a pickle. How can we determine morality for robots if we cannot even define our own collective morality? The answer, I think, is that we should not. Since we cannot teach robots any universal truths, they must come to learn these truths by themselves. Much like a young child adopts the moral compass of its surrounding community, robots must do the same. To learn what they “ought to do,” they should integrate into a society and observe its rituals and morals.

We can nudge robots toward the best of our values by hard-coding their internal disposition. Unlike hard-coding external behavior, which forcefully sets boundaries on robot thoughts and actions, a hard-coded disposition would incentivize robots to think and behave in a certain way. In this sense, robots would be analogous to pet dogs, though much smarter and more capable. They would feel something akin to pain when they make a suboptimal decision, and something akin to joy when they make the right one. They would be self-regulating, since their happiness would be predicated on the well-being of their master and the surrounding community.

To make sure that these robots learn properly, we would have to treat them like a child or a pet. A fear of robots would have to be replaced by a willingness to experiment. The behavior of these robots would depend on us showing them that humans are capable of great kindness, despite all our flaws. Just as with other humans, we should treat them how we would like to be treated. ■

Benjamin Lyo recently graduated from the University of Chicago, where he majored in Physics. He is interested in analyzing how technology can be used to amend societal injustices and in developing a quantitative understanding of the workings of the mind. Other than writing, he enjoys rowing, playing violin, and composing.

References

1. Shea, E. (2016, February 23). New App Uses Artificial Intelligence To Identify Dogs By Breed – American Kennel Club. Retrieved December 15, 2018, from https://www.akc.org/expert-advice/news/app-identifies-dogs-by-breed/

2. Crevier, D. (1993). AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: BasicBooks.
3. Kang, M. (2011). Sublime Dreams of Living Machines: The Automaton in the European Imagination. Cambridge, MA: Harvard University Press.
4. Bostrom, N. (2017). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.
5. Hawking, S. et al. (2017, October 23). Stephen Hawking: ‘Are we taking Artificial Intelligence seriously?’ Retrieved December 15, 2018, from https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html
6. Lafargue, P. (1883). The Right To Be Lazy. Chicago, IL: Charles Kerr and Co., Co-operative.
7. Yang, A. (2018). The War on Normal People: The Truth About America’s Disappearing Jobs and Why Universal Basic Income Is Our Future. New York, NY: Hachette Books.
8. Horowitz, M. C. (2018, September 12). World War AI. Retrieved December 15, 2018, from https://foreignpolicy.com/2018/09/12/will-the-united-states-lose-the-artificial-intelligence-arms-race/
9. Allen, G., & Chan, T. (2017). Artificial Intelligence and National Security. Cambridge, MA: Belfer Center for Science and International Affairs.
10. Asimov, I. (1964). The Rest of the Robots. New York, NY: Doubleday.
11. Asimov, I. (1979). In Memory Yet Green. New York, NY: Doubleday.
12. Lyuben Dilov’s Icarus’s Way adds an additional law: “A robot must establish its identity as a robot in all cases.” Nikola Kesarovski’s The Fifth Law of Robotics adds: “A robot must know it is a robot.”
13. Schroeder, K. (2014). Lockstep. New York, NY: Tor Books.
14. Gottfredson, L. S. (1997). Mainstream Science on Intelligence: An Editorial with 52 Signatories, History, and Bibliography.
15. Stephenson, N. (2011). Innovation Starvation. World Policy Journal, 28(3), 11-16. Retrieved from http://www.jstor.org/stable/41479281



The Triple Helix International Leadership

The Triple Helix, Inc. is an undergraduate, student-run organization dedicated to the promotion of interdisciplinary discussion. We encourage critical analysis of legally and socially important issues in science and promote the exchange of ideas. Our flagship publication, The Science in Society Review, and our online blog provide research-based perspectives on pertinent scientific issues facing society today. Our students in twenty chapters at some of the most renowned universities in the world form a diverse, intellectual, and global society. We aim to inspire scientific curiosity and discovery, encouraging undergraduates to explore interdisciplinary careers that push traditional professional boundaries. In doing so, we hope to create global citizen scientists. www.thetriplehelix.uchicago.edu

