AI – a threat, or something to rely on?
Discussing ChatGPT's ability to cheat on exams and take our jobs in the future is only scratching the surface. Instead, we should view AI technology in the same way as we do a nuclear threat.
Olle Häggström, Professor of Mathematical Statistics, is not alone in warning us about the end of humanity.
But far from everyone agrees on the risks.
The open letter published a few months ago attracted a lot of attention. Almost two thousand researchers and prominent figures in the technology industry spoke out about the risks of artificial intelligence, demanding a six-month pause in the training of the most powerful AI models.
When GU Journal spoke to Olle Häggström at the end of March, shortly after the letter had been published, he said that he did not hesitate to sign it.
– But of course a pause is not enough. The aim was to draw attention to the issue and to convince the right people to stop developing the AI systems, giving legislation a chance to catch up, he says.
Since then, the debate has been intense on social media and on the opinion pages of newspapers. Most recently, Max Tegmark, Professor of Physics at the Massachusetts Institute of Technology, claimed that half of all AI researchers estimate that the risk of AI causing the downfall of humanity is approximately ten percent. His article in Time Magazine had a considerable impact – but it also met with opposition. Among other things, the survey was criticized for being too small and unscientific to be relevant.
But Max Tegmark is not the only one sounding alarm bells. Olle Häggström refers to an American AI researcher, Eliezer Yudkowsky, who in an article in Time Magazine also warns of the end of the world.
– He didn't even sign the open letter because he thought it underestimated the seriousness of the situation. According to Yudkowsky, the most likely result of building a superhuman AI is that everyone on the planet will die, says Olle Häggström.
It is, of course, the controversial ChatGPT that brought this loaded issue onto the agenda. OpenAI's advanced language models have been both admired and criticized for their ability to produce human-like texts and reasoning. The question that divides the researchers is whether ChatGPT should be considered only a rudimentary language robot or whether it is a sign of a far more frightening development. The most pessimistic researchers believe that the machines are already capable of "thinking" on their own and that it is only a matter of time before they surpass human abilities.
– There are already examples showing that, deep within the system itself, it understands reasoning and can draw its own conclusions, says Olle Häggström.
Once the technology reaches AGI level, Artificial General Intelligence, there is no going back, he believes. If in that situation we do not succeed in taming the machines, so-called AI alignment, they risk becoming brutal and autonomous and taking over man's place in the food chain. The world as we know it will be over.
– It may be that AGI is 20–30 years away. There may be hidden limitations that we are not aware of. But the most likely thing is that it will happen within a decade or even next year.
Olle Häggström has been accused of scaremongering, including in an op-ed article in Göteborgs-Posten. One of the authors was Moa Johansson, Associate Professor of Data Science and AI at Chalmers University of Technology and the University of Gothenburg.
– ChatGPT is an impressive system, but is nowhere near general intelligence. Language models are also not, as some believe, conscious in any way. There is no scientific evidence to suggest that they want to take the place of humans, she says.
One reason the robots come across as so perceptive is that large numbers of people, usually in low-wage countries, have been hired to train the model. As people have interacted with the model and rated its answers as good or less good, even more data has been collected. This in turn has been used to teach the model what people consider to be a good or bad response.
– ChatGPT works well because it was trained on such enormous amounts of data, not just plain text, but also data about human preferences, says Moa Johansson.
She thinks it is important to demystify how the models work. Today's language models consist of a large neural network into which the developers feed text and which calculates probabilities. Based on that information, it can then generate coherent text, poems, essays, stories, but also images and program code. It is a fantastic technology, but it is not sentient in any sense, says Moa Johansson.
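To make the principle concrete, here is a minimal sketch in Python of the kind of next-word probability calculation she describes. The probability table and the generate function are invented for illustration; a real language model learns its probabilities from enormous amounts of text with a large neural network rather than reading them from a hand-written table.

    # Illustrative sketch only, not OpenAI's actual implementation: a language
    # model repeatedly assigns a probability to each possible next word and
    # samples one of them.
    import random

    # Toy "model": for a given word, a hand-made probability table over
    # possible next words. A real system learns these from huge text corpora.
    NEXT_WORD_PROBS = {
        "the": {"cat": 0.5, "dog": 0.3, "poem": 0.2},
        "cat": {"sat": 0.6, "ran": 0.4},
        "dog": {"sat": 0.3, "ran": 0.7},
        "poem": {"sat": 0.1, "ran": 0.9},
        "sat": {"quietly.": 1.0},
        "ran": {"away.": 1.0},
    }

    def generate(start: str, max_words: int = 5) -> str:
        """Generate text by repeatedly sampling a next word from the table."""
        words = [start]
        for _ in range(max_words):
            options = NEXT_WORD_PROBS.get(words[-1])
            if not options:  # no known continuation: stop generating
                break
            choices, weights = zip(*options.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat quietly."

One run might print "the cat sat quietly." and the next something else, because each word is drawn at random according to its probability, which is also, in miniature, why a chatbot can answer the same question differently each time.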
– We are so used to ascribing consciousness to all kinds of machines, robotic vacuum cleaners, lawnmowers and our laptops, she says.
Where Olle Häggström predicts disaster, Moa Johansson sees a limitation. According to her, there is no imminent risk of human extinction due to the language models, however amazing they may be.
– Just because you scale up a neural network, it is not a given that it will develop superintelligence, she says.
According to Moa Johansson, and many other AI researchers, we should instead focus more on immediate challenges. The risk of fake information, for example, has increased with ChatGPT, and for scammers and spammers it is a wonderful opportunity. Even people wanting to engage in political influence campaigns have been given an excellent tool. The language models can help to write customized texts with the right message for the right target group. Moa Johansson believes that this is the development we need to be vigilant about.
– From a sustainability point of view, it's not good either. It takes an incredible amount of energy to run these machines, she says.
Something that the researchers agree on is that legislation is required to control the development of AI. The new EU regulation, the AI Act, which is the first of its kind, is supposed to regulate the use of AI as a technology, but it is legally complicated and the decision has been delayed.
– Legislation is important and necessary. Furthermore, the tech companies need to take responsibility for their products and how they collect their training data, Moa Johansson believes.
Olle Häggström wants to see much more drastic measures. He agrees with AI researcher Eliezer Yudkowsky's demand for an international agreement that completely bans this type of research indefinitely. It also needs to have a strict framework akin to the rules surrounding nuclear proliferation, because the threat is at least as great.
– The regulation is proceeding so slowly that the AI Act will be outdated before it even comes into force. It would have been ineffectual against OpenAI's GPT models even if the US had been an EU country. This slow pace means that we need to work with other parallel approaches in addition to regulation, he says.
He sees the joint letter published on the Future of Life Institute website and Eliezer Yudkowsky's article in Time Magazine as steps in the right direction. By building consensus on the issue, the researchers can hopefully influence the corporate culture in the AI companies. And the fact that the question is on the agenda at all he sees as a small glimmer of hope in what is, in his opinion, a gloomy outlook.
The debate continues to be intense both in the research community and in the media. Both Olle Häggström and Moa Johansson say they have never spoken to so many journalists before.
– The AI disaster may well be around the corner, and if we are to succeed in averting it, we must stop burying our heads in the sand, says Olle Häggström.