
‘Godfather of AI’ quits Google to warn of the dangers of the new tech

Artificial Intelligence is developing faster than the guardrails we need to protect ourselves from it. In order to be able to harness the opportunities of AI, we need to place a harness on AI itself, writes David Withers.

On 3 May this year the "godfather of AI", Geoffrey Hinton, who developed the technology behind new AI tools such as ChatGPT, quit Google so that he could warn others about its dangers. According to the New York Times, Hinton fears people will no longer be able to tell what is real amid the proliferation of fake images, videos and text created by AI. He warned that technology companies such as Google and Microsoft are locked in a race, with little thought for how these systems could eventually learn behaviours that are harmful to humans.

This followed the open letter "Pause Giant AI Experiments: An Open Letter", published by the Future of Life Institute and signed by 31,810 people (at the time of writing), including leaders in technology and research. Even those who might ordinarily fight for regulation-free technology, such as Elon Musk and Steve Wozniak, are calling on AI labs to pause the training of these systems for at least six months. The letter states:

Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

The idea that all of this will come to pass may be unrealistic, but it raises the question: will we regulate AI, or will it eventually regulate us? This article considers some of the recent advances, their possible uses, and the consequences.

Trolls turn Microsoft chatbot racist

In 2016, Microsoft released an AI chatbot called Tay that was designed to mimic a 19-year-old girl and engage with users on social media. Sadly, it developed into a hateful, racist monster. Without oversight or filters, users taught it ever more extreme views. To protect its brand, Microsoft shut Tay down only 16 hours after launch.

AI generating false news, creating chaos

As the above example shows, even a well-meaning chatbot can learn to spread hate speech. The latest AI has far more power than a chatbot.

The January 2021 storming of the US Capitol is a stark example of how false news, videos and text can influence people. The right-wing media, the Republican Party, and [the then president] Trump had flooded feeds with a false "Stop the Steal" narrative for months. On 6 January, Trump held a rally and instructed the crowd to march to the Capitol to save democracy. The violence that followed delayed the certification of the presidential election and resulted in the deaths of law enforcement officers.

Now imagine malicious actors deploying AI to spread false photos, videos and news across social media for their own gain. This could flood our feeds on a scale we have not seen before. People may struggle to sort truth from falsehood, and ultimately act on the false news. It is not inconceivable that this could set the stage for the overthrow of governments.

The faked director: who needs to go phishing?

Deepfaked video and voice have now advanced to the point where one struggles to tell whether footage of a person is real.

Imagine a company director posts on social media that they are at the airport, waiting to board a flight to New York. All a malicious actor needs to do is use that information to call the company's helpdesk with a deepfaked voice, claim the director's phone and laptop have just been stolen, and ask for those devices to be locked out. They would then ask the helpdesk to set the 'director' up on their 'wife's laptop' so that the month's salaries can be authorised before the flight.

This would give the malicious actor 18 hours to get what they want, while the actual director, locked out of their devices, has no way to stop it happening.

AI on weaponised equipment

Any equipment that is weaponised has the potential to apply lethal force, be it a tank, boat, or drone. Adding AI is a major game changer in hostile situations such as armed conflict, where it can help commanders with decision-making.

By its nature, AI would need to obey orders; we cannot have a machine deciding for itself whether to fire on a threat. US Navy Vice Admiral Scott D. Conn made the same point recently, stating:

The learning curve is so exponential right now that last year's wargaming exercises are almost unrecognizable today. However, we're not experimenting with AI just for the sake of it; technology must be tied to a purpose.

In the military domain, purpose often means accurately delivering firepower. Command and control become crucial when lives are at stake. But as AI systems grow more intelligent, autonomous, and complex, understanding their inner workings becomes increasingly challenging for scientists.

When I look inside these unmanned systems, our focus is on understanding their effects to accomplish the tasks we assign. As these systems mature, we must ensure obedience is embedded.

We need regulation now

It is clear that without rules and regulations, we could end up in a situation where AI controls us, rather than us controlling it. In the case of private citizens, we need to be protected from new tech, and our right to live our lives freely must be maintained. In the case of military applications, the laws of armed conflict must prevail.

Conclusion

AI is going to be a game changer and will transform the world as we know it. If we let big tech companies compete for our business without guardrails, adverse outcomes become a real possibility. Development will not stop, so it is critical that we act now and set down sensible interim rules, at least to buy the time needed to give AI's future development the guidance it requires.
