12 minute read
When AI Eclipses Humanity
from April 2023
THE SINGULARITY— the time when super intelligent computers surpass human understanding and shed human control— may soon be upon us
By Ed McKinley
Advertisement
uppose a medical research team asks a chatbot to develop a vaccine to eradicate every variant of COVID19 in humans. It’s a perfectly reasonable request that could go shockingly wrong.
The machine might formulate a drug that renders recipients infertile, thus reducing the population to zero and eliminating the virus. That perfectly logical but chillingly cold solution achieves the goal but at the cost of pushing our species to the brink of extinction.
Perhaps the example seems extreme, but it’s far from absurd.
“This is exactly how a pure optimization process solves problems,” warns Roman Yampolskiy, a University of Louisville professor of computer science who’s written extensively on the subject. “People can fix that, but there are infinite similar possibilities.”
What’s more, the smarter AI gets the more dangerous it becomes, Yampolskiy says.
Knowing that, how concerned is he about the threat inherent in artificial intelligence? “I’ve devoted my life to it,” Yampolskiy tells Luckbox in a flat tone of voice. “I don’t see anything more important.”
But his life’s pursuit must get lonely. Despite doomsday warnings from generations of artists, mathematicians, engineers and entrepreneurs (see sidebar “You’ve Been Warned: The Dangers of AI”), hardly anyone seems willing to stand in the way of the explosive expansion of AI.
Of the hundreds of thousands of AI researchers in the world, perhaps 100 work full time on AI safety
with another 200 or so delving into related areas such as ethics or algorithmic justice, Yampolskiy notes. “I’m guessing here, but I don’t think it’s much bigger than that,” he says of his estimates.
Moreover, many of the scientists devoted to AI safety aren’t ensconced in academia—instead they’re working for big public companies like Alphabet (GOOGL), which owns the DeepMind computer labs, and smaller ones like privately held OpenAI, which produces the ChatGPT chatbot that’s making headlines daily.
Public or private, companies have a vested interest in developing and selling AI and don’t want to forfeit competitive advantage by slowing the technology’s progress, Yampolskiy notes.
Whatever their motivations—financial or scientific—researchers tend not to consider the worstcase result of AI, he maintains. He calls it “the possibility of impossibility.” It’s the idea that no matter what scientists do they can’t stop AI from wreaking havoc on humankind.
Mounting danger
Artificial intelligence has been with us for some time now, beginning perhaps in 1935 with a paper Alan Turing wrote to describe a machine with memory, computing power and the ability to scan symbols.
AI has apparently reached the latter part of the first of three stages. It’s now AI, which can duplicate
human thought processes. Soon, it may enter the phase called AGI for artificial general intelligence, where it can equal human mental capacity. After that comes the singularity—artificial super intelligence or ASI—where machines become so smart that humans can’t control them.
Computers in the AI phase work out problems and serve up information with blinding speed. They may beat a human at chess, but they can’t carry on a convincingly human conversation.
Even in this current AI phase, computers pose
dangers. ChatGPT, for example, is error prone. Don’t believe everything it tells you, Yampolskiy advises. It also plagiarizes when it’s not making up stuff that’s often convincing but sometimes clumsy.
Even in this somewhat crude but tantalizingly human-like state, AI threatens to put people out of work by taking over creative functions recently believed exclusive to humans. They include producing images, writing poetry and engaging in interesting if somewhat dull and fake conversation.
But the foibles and danger of smart machines don’t end there.
Artificial general intelligence
As AI improves, it approaches AGI status. That’s where computers become the intellectual equal of humans. Anything a human can conceive, the machine can conceive, too.
Computers haven’t achieved that state but might in an extremely short time, perhaps in just the next few years.
As they become smarter, they also become more dangerous and more unpredictable. Programmers don’t know how the machines will react to instructions or whether they’ll become erratic or malicious.
“No one in the world claims they know how to
control their systems—how to guarantee their safety,” says Yampolskiy.
Yet machines won’t stop gaining intellectual power when they reach parity with humans. Instead, AGI will continue to add brain power until it becomes far superior to human intellect. Eventually, probably in this century, computers will far exceed human intellectual capacity.
The singularity
The era when AGI outstrips human thinking and becomes ASI is called the singularity. It doesn’t seem likely to give rise to Terminator-like super humans. But exactly what it will bring remains totally unknown
“That’s the scary part,” Yampolskiy asserts. What’s perhaps even more frightening is the fact that after the singularity a “pretty tight” mathematical proof supports the idea that a lower-level intelligence (humans) cannot indefinitely control higher-level intelligence (machines).
“You cannot provide a meaningful explanation for something with a trillion parameters,” Yampolskiy says. “Or if you can, then you can’t comprehend that explanation because the explanation is the model itself.”
ASI machines will use social engineering to trick people with deep fake images, video and audio. In one example, a computer could call you on the phone and convincingly mimic the voice of your boss or spouse, asking you to remind it of a password.
“We know social engineering attacks work on trained professionals,” Yampolskiy observes.
But despite the danger, the promise of free labor, both physical and cognitive, compels companies to develop super intelligent computers, he says. In fact, an arms race is already underway as companies vie for the next breakthrough, a headlong plunge he views as “dangerous” and “unethical.”
“They’re running an experiment on 8 billion people, and I don’t think any of us consented to that,” Yampolskiy says of the current proliferation of chatbots. “And the CEO of OpenAI [Sam Altman] says it’s either going to be really good, or we’re all going to die. And that’s the business plan.”
So, analyzing the peril of the singularity falls to science fiction writers instead of scientists, he laments.
“If we were smarter, we would totally put a moratorium in place until we figured out how to do it safely— if possible,” Yampolskiy suggests. “But because of economic incentives, that’s not going to happen. It’s not just large corporations—it’s the countries. If the U.S. doesn’t do it, Russia or China will do it.”
But the AI community has at times attempted to maintain control.
Managing super intelligence (or not)
A decade ago, researchers concerned about the danger posed by ASI were proposing “confinement” or “boxing,” two terms for keeping it from getting free rein to do as it pleases. But hope for that has faded.
“The consensus is that boxing is impossible,” Yampolskiy says. “You cannot contain that system long-term. It may buy you a little bit of time, but everyone agrees the system will leak out.”
Keeping superior intelligence under wraps would require cutting off all contact with humans because the machine could bribe, threaten or simply outsmart its inferior captors and then make its escape, he contends.
Even proposals to limit AI to answering questions wouldn’t work in the long run because the machine could sneak additional information into the process to manipulate human operators, Yampolskiy says.
Besides, AI that’s fully contained wouldn’t transfer information to people and would therefore be of no use. “Why even do it?” Yampolskiy asks. “It’s a dead-end approach.”
Containing AI wouldn’t be the first task scientists have given up as hopeless, he noted, citing the case of perpetual-motion machines. Almost all researchers and inventors have ceased to experiment with creating a mechanism that defies gravity and friction to run forever. Alchemists aren’t trying to turn base metals into gold anymore, either.
Does that mean it’s too late to get control of AI? Not necessarily. Perhaps the scientific consensus is wrong, and AGI, ASI and the singularity will never happen. But Yampolskiy cautions that once researchers realize AGI has come into being, it’s already too late to stop it.
Optimists argue that developers could program ethics into AGI or ASI. But programmers aren’t very good at creating good software in general and intelligent software in particular, Yampolskiy contends. “I don’t think you can provide enough hard-coded rules for all possible dangerous situations,” he cautions.
To make matters worse, hackers penetrate chatbot systems like OpenGPT and command it to bypass the filters meant to guard against racism, sexism and other ills. “They get them to reverse those rules, so I don’t think it’s sustainable longterm against any committed adversary.”
Plus, as AI advances beyond human understand -
ing, people will have no idea how it will react to commands, he says. It could even turn against its creators.
According to a scenario referred to as the “treacherous turn,” a computer system that’s still not very
powerful could follow orders and pretend to be benign while it’s getting access to more resources. “At some point it goes, ‘OK, I don’t need you anymore,’” and becomes malevolent, Yampolskiy suggests.
So, who can protect us?
Government can’t help
State and federal legislators can make it a crime to write destructive AI, but Yampolskiy doubts that would help. “Viruses are illegal. They made spam illegal. Did that change anything?” he asks. At the United Nations, superpowers blocked restrictions that would have outlawed using AI for warfare, he notes.
Yet, individuals aren’t completely powerless to slow the march of AI toward oblivion for humans, he concedes. It begins with skepticism toward tools the AI community provides.
Don’t blindly accept what machines tell you—do your own research to verify the results of a search or the prose a chatbot turns out, Yampolskiy advises.
“Most people, just by the nature of their education, don’t understand that it’s not some authoritative source from Microsoft with lots of verified citations,” he says. “Some results it is giving them are misinformation. This is literally a BS generator.”
In his view, artificial intelligence will improve, but probably not enough.
“It will get smarter,” he acknowledges, “but you still have to verify. Let’s say right now it gives you 50% bullshit. In the future, it’s only 5% bullshit, but it’s still a lot.”
You’ve Been Warned: The Dangers of AI
STEAM LOCOMOTIVES AND THE TELEGRAPH WERE THE LATEST IN TECHNOLOGY WHEN CELEBRATED THINKERS BEGAN WARNING THAT MACHINES WOULD SOMEDAY OUTSMART HUMANS. WITH TIME, THEIR PREDICTIONS HAVE ONLY BECOME MORE DIRE. HERE’S WHAT A FEW ARTISTS, SCIENTISTS AND ENTREPRENEURS HAVE HAD TO SAY:
1863 / “ ... the time will come when the machines will hold the real supremacy over the world and its inhabitants. Day by day, we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.”
1948 / “I believe that the abominable deterioration of ethical standards stems primarily from the mechanization and depersonalization of our lives, a disastrous byproduct of science and technology. Nostra culpa!” -Physicist Albert Einstein
1951 / “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control. — Computer pioneer Alan Turing
1965 / “ ... an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”
— Mathematician I.J. Good, originator of the term “technological singularity”
1976 / “There are some acts of thought that ought to be attempted only by humans ... I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
— AI pioneer Joseph Weizenbaum
2014 / “The development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
— Cosmologist Stephen Hawking
2014 / “We need to be super careful with AI. Potentially more dangerous than nukes.”
Tesla and SpaceX CEO Elon Musk
2019 / “The world hasn’t had that many technologies that are both promising and dangerous the way AI is.”
Microsoft co-founder Bill Gates
2022 / “Bad AI may kill us all in 50 years, but the bulk of the harm of such an extinction event comes from the trillions of future humans that will never have a chance to be born in the billions of years that follow.”
— Ethereum founder Vitalik Buterin
2023 / “I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”
— New York Times columnist Kevin Roose
2023 / “The bad case— and I think this is important to say—is, like, lights out for all of us.”
— OpenAI CEO Sam Altman