The Most Important Part of Ethical AI Is Not the Machine by Paul Vanags
“There are no secrets about the world of nature. There are secrets about the thoughts and intentions of men.” (J. Robert Oppenheimer)

“And he is reminded, that Dr Oppenheimer’s optimism fell, at the first hurdle.” (Billy Bragg)
Amongst all of the wonderful and terrifying things happening in our world at the moment, one that surely ticks both boxes is the rise of Artificial Intelligence (AI). Already ubiquitous, often invisible, it is predicted to be one of the major forces shaping our humanity over the coming decades. Some suggest that our future is now so irrevocably intertwined with technology that we will evolve into part-human, part-machine hybrids (so-called transhumanism). Others go further, arguing that once machine capabilities surpass those of their human creators, we may become subservient to them, kept alive merely to maintain a hospitable ecosystem. Even if these speculative sci-fi notions never materialise, it is clear that AI is here, and here to stay. Most of us already use and accept it in our daily lives, from purchase recommendations to in-car navigation systems and our social media feeds. For most of us, these applications are at best genuinely helpful and at worst harmless. How we as humans react to and interact with these new technologies will be critical in determining how they play out in society as a whole.
We all know people who are first on the technological bandwagon. These are the folks who have wired up their home speakers to talk to their central heating system, which automatically recognises who’s at their front door and orders them more baked beans via the webcam in their cupboard, or something like that. Many of us will also know one or two tin-foil-hat-wearing, “Facebook is run by the CIA, and my home network runs Linux” types too. But these are the extremes, and most human psychological reactions will be less dramatic. The psychology of human interaction with AI will be an important part of how we adopt and implement these new technologies. The ethical AI policy group the Ada Lovelace Institute lists “building collective social capacity” as one of the key factors in current thinking on what will make for effective global AI governance. Central to this must be an understanding of the human psychology of reactions to the different manifestations of AI. This will allow us to build healthy working relationships between ourselves and the machines, for the good of society. Already, I think we are seeing a number of interesting human responses to AI, some of which have the potential to be