Artificial intelligence: Friend or foe for building a better future?
Andrew Maynard
In 2015, Elon Musk, Bill Gates and the late Stephen Hawking were rather incongruously nominated for the Luddite Award — an honor bestowed by the Information Technology and Innovation Foundation for “The Worst of the Year’s Worst Innovation Killers.”
Musk, Gates and Hawking — along with many others — had expressed growing concerns over the potential risks of naïve and irresponsible developments in artificial intelligence, commonly referred to as AI. In spite of their collective technological optimism, the speed of recent advances had them running scared. Six years on, the debate over the potential risks of AI, and how to ensure its ethical and responsible development and use, is fiercer than ever — so much so that the White House has just committed to developing a “bill of rights” to guard against the inappropriate use of AI and similarly powerful tech.
Yet, as with many technology trends, the challenges and opportunities AI presents are more complex than they may at first seem.
From mythology to machine learning: A history of AI
Like many emerging technologies, artificial intelligence is shrouded in hype and speculation. It is almost inconceivable that we will be able to navigate an increasingly complex maze of emerging global threats without the help of AI. Yet, the potential risks and possible benefits are often elevated to near-mythical proportions. Either way, when the mystique is stripped away, it’s apparent that we are on the cusp of developing new AI-based technologies that could profoundly impact the future we are creating.
While it’s tempting to think of AI as a relatively recent development, its roots are buried deep in human history and mythology. As a species, we’ve long been intrigued by the relationship between the creator and the created, and enamored with the possibility of transcending from the latter to the former.
This is a theme that threads through many of our origin stories, myths, religions and philosophies. It’s only in the last half century or so that we have begun to develop capabilities that tantalizingly open up the possibility of flipping our god-like aspirations from fantasy to reality. And in the Western science and technology tradition that grew out of the Enlightenment and was fueled by the Industrial Revolution and the Big Science of the 20th century, this aspiration encompasses replicating one particular aspect of ourselves: intelligence.
The computer scientist and codebreaker Alan Turing famously speculated about the possibility of machines one day exhibiting intelligence that is indistinguishable from a living person. But the technology he was working with at the time was only sufficient to hint at what might be possible.
Interest continued through the 1950s in the idea of machines that could problem-solve as well as or better than people. With the advent of neural networks — algorithms that simplistically reflect how researchers at the time believed the human brain worked — there was growing excitement around the possibility of creating machines that mimicked human intelligence.
Early work on AI didn’t deliver on what many believed it promised. The field was re-energized in the 1990s as exponentially increasing computer power and breakthroughs with layered neural networks led to the emergence of deep learning and similar technologies. These advances continue to form a foundation for current-day AI that is transforming almost every aspect of our lives, as increasingly powerful systems are designed that can solve problems, make decisions and carry out actions faster, more accurately and more efficiently than people.
Yet, to call these systems “artificial intelligence” is something of a misnomer. They represent a type of intelligence that is narrowly defined by an ability to solve problems in a particular way. This is an understanding of “intelligence” that is grounded in 200 years of advances in science, technology and engineering that reflect a largely Western-driven philosophy of progress. It’s a definition of intelligence that captures only a small slice of the richness and diversity of human intelligence. And yet, narrow as it is, the capabilities beginning to emerge from this form of machine intelligence have the capacity to radically alter the future we are heading toward — possibly more so than any previous technological breakthrough.
And this is what makes the responsible and ethical development and use of the technology so critical to ensuring a vibrant future — as long as we can learn to navigate the potential pitfalls.
The scourge of superintelligence
In 2014, the Oxford philosopher Nick Bostrom published the book “Superintelligence: Paths, Dangers, Strategies.” In it, he lays out the possible existential risks of an AI future where smart computers invent ever-smarter progeny, until we hit a point where we are not only surrounded by machines that are vastly more intelligent than us, but by machines that recognize humanity as either irrelevant to their continued progress or an impediment to it.
This is a vision of an AI future that inspired Musk, Gates and others to raise the alarm over out-of-control AI. It’s also one that has led to massive investments in the development of safe and ethical artificial intelligence in recent years. Nonetheless, focusing on the threats of emergent superintelligence is in itself dangerous, as it obscures more real and imminent risks associated with near-term uses of AI. It also makes it harder to identify and follow pathways toward a future where AI is an asset rather than a risk. A particular challenge with superintelligence as an idea is that it’s grounded in concepts of intelligence, motivation and power that are more reflective of how its proponents think about and see the world than how AI-based technologies are actually progressing. Real-world AI is less about replicating and superseding human intelligence and more about developing machines that can achieve specific goals through the sophisticated use of data and inference.
It is this ability to achieve specific goals and solve problems faster and more efficiently than people can that is at the core of what makes AI so transformative. It’s also what underlies the real-world risks of developing the technology faster than we are able to consider and understand the potential consequences. Even though many of these risks don’t extend to the level of a threat to human existence — at least not yet — they are, nevertheless, serious.
Navigating the AI risk landscape
Buoyed by concerns around existential risks, the past five years have seen a flurry of activity around AI ethics. Since 2016, there have been a growing number of academic papers and sets of principles on how to develop and use AI ethically, and for good reason. The more we trust machines to make decisions that affect our lives, the more we open ourselves to machines impacting people in ways that many would consider to cross ethical lines.
For instance, algorithms that are designed to determine the probability of someone being a felon or to assess someone’s trustworthiness or credit-worthiness raise serious questions around bias, justice and equity. Likewise, machines that filter job applicants, prioritize medical care or assess socially acceptable behavior all extend into areas that are deeply wrapped up in diverse understandings of right and wrong, of rights and values, of dignity and legitimacy. Delegating decision-making to machines in these and similar areas risks being seen as an abdication of responsibility and something that ultimately undermines trust. Equally worrying, if machines making decisions have been trained to reflect the perspectives, philosophies and biases of their creators, there is a risk of embedding biases into technological systems without the checks and balances that come from appropriate levels of transparency and accountability.
There are also risks of material harm associated with decisions that are delegated to machines. The company Tesla is currently grappling with this very issue as questions are raised over the safety of its AI-based autopilot system — questions that are becoming more pertinent as the company rolls out its more advanced Full Self-Driving system.
To make matters more complex, many of the risks associated with AI lie beyond conventional risk assessment and management frameworks — especially where they involve hard-to-quantify but highly impactful social risks, such as threats to autonomy, dignity and identity.
It’s risks like these that the ASU Risk Innovation Nexus was established to address. In the Nexus, we use an approach to emerging challenges that recognizes risk as a threat to what is important to the developers and stakeholders of new technologies — from loss of life to loss of dignity, and everything in between.
This risk innovation approach to developing emerging technologies centers on providing pragmatic pathways for navigating “orphan risks” — risks that are easy to overlook but that must be addressed to ensure responsible and beneficial innovation. It’s also an approach to risk that recognizes the dangers of not developing new technologies as we strive to build a better future. And this is where the possible risks of AI need to be balanced against the potential benefits of the technology.
Beneficial AI
Despite all of our collective knowledge, understanding and human capacity for problem-solving, we are living in a world in crisis. In this moment, we are grappling with a global pandemic, social injustice, escalating geopolitical tensions, growing mistrust in expertise, human-driven climate change, technologies that outstrip our abilities to use them responsibly and a growing population that is placing increasing demands on limited planetary resources. These and many other challenges suggest that we are standing at a critical tipping point in human history. Never before have there been so many people with so much power and so little understanding, vying for so few resources, on a planet that is being pushed so far beyond its point of equilibrium.
Within this increasingly complex system, technology is both a problem creator and a problem solver. We cannot live without technology innovation, but neither can we thrive in a future dominated by technologies that are ill-considered and irresponsibly developed.
This is where AI has a critical role to play. In the worst case, AI has the potential to quickly destabilize social, political and technological systems. On the flip side, technologies that come under the broad umbrella of AI have the capacity to help us transcend the looming crises we face and build a more just, vibrant and sustainable future — if, that is, they are developed and used responsibly.
Naturally, such a future will depend on many factors beyond how we develop and use artificial intelligence. But AI done well will enable us to extend our collective problem-solving skills in ways that are inaccessible without it. And this is where the technology is not just important to our collective global futures, but essential.
At a purely material level, AI has the capacity to open new discoveries through human-machine partnerships. These span from the discovery of new materials, to new ways to combat disease and improve health and wellbeing, to advances in reprogramming DNA. And they potentially include novel approaches to sustainable energy generation and use, global supply chains, transportation and many other future-looking possibilities that are beyond the reach of human endeavors alone. But the power of AI as a problem-solving tool goes far beyond this. With access to massive datasets, powerful learning algorithms, and the right social, economic and political levers, there is no reason why AI cannot be used to address global social challenges.
This is where advances in AI become particularly interesting from a global futures perspective. As individuals and communities, we are manipulatable. We don’t like to admit it, but beyond our illusions of rationality, we are subject to a vast array of cognitive biases and unconscious behaviors. And anyone — or anything — who can understand how to make use of these has the ability to shape how people behave.
Of course, we know this — it’s the basis of marketing, of political campaigning, of social persuasion, and of how we negotiate and interact with others to achieve what we want. Yet manipulating people with precision is incredibly difficult to do. What if smart machines that understand our biases (and how to utilize them) could nudge us toward some futures, and away from others? Imagine if we could avoid a climate change catastrophe, or global pollution or social injustices, by partnering with machines that are adept at social manipulation. This is an opportunity that is fast coming within reach, but it’s also one that will force us to think critically about who’s part of deciding what the future looks like. And it’s an opportunity that places a searing spotlight on the tension between what we should do and what we can do with the technology we’re developing.
In this respect, Musk, Gates and Hawking were right to be concerned about AI. But they were also wrong about what we should be worried about. Beyond the potential loss of jobs, the erosion of agency, and the embedding of biases in the technologies we hand our future over to, perhaps one of the greatest threats of advanced AI is that it could be used to manipulatively impose one group’s vision of the future on the world at the expense of others. Or that it could learn how to use what makes us uniquely human to create the future we think we want, while robbing us of the future we need in order to thrive.
It’s precisely questions like this that are fueling a growing body of work on beneficial and responsible AI. Despite the risks, artificial intelligence is emerging as a technology that we cannot afford not to develop as we seek to build a more sustainable and promise-filled future for humanity and the planet we inhabit. Achieving this will require a level of creativity, innovation and vision that transcends conventional disciplinary silos and leverages every area of human expertise and experience — something that the Julie Ann Wrigley Global Futures Laboratory aspires to.
The irony, of course, is that to truly achieve this, we’re going to need help in the form of — you guessed it — AI!
Andrew Maynard is a scientist, an author, a communicator and an internationally recognized expert and thought-leader in emerging technologies and their socially responsible and ethical development and use. He currently serves as the associate dean of curricula and student success in the ASU College of Global Futures and is director of the ASU Risk Innovation Lab.