Don’t Create Frankenstein’s Monster: Open Letter Warns A.I. Empires
April 15, 2023
In the early years of the 20th century, when technological advancement was picking up pace, English author Aldous Huxley depicted a dystopian society in his landmark 1932 book Brave New World. The novel delivered a prophetic warning to a world growing increasingly charmed by technological progress.
Huxley’s cryptic message was disturbing: by nurturing genetic technology, the species Homo sapiens might someday end up creating a monstrous superhuman.
Since the book hit the stands, 91 years have elapsed. From genetic technology to AI (artificial intelligence), it has been a long and complicated journey for science and technology. But the warnings given by the writer-philosopher now seem to be resonating louder than ever before, with a large and influential section of the global scientific, tech, and academic community calling for an immediate end to modern-day Big Tech empires’ imperialist overreach.
With AI technology growing from strength to strength at breakneck speed, an open letter has been signed and shot off by top names across tech and cultural circles, calling for an immediate suspension of all major AI experiments for at least six months.
The public letter came against the backdrop of the development of GPT-4, the latest version of ChatGPT, an AI-based chatbot. It’s a computer programme that allows humans to interact with digital devices as if they were communicating with a real person.
Launched on November 30, 2022, by San Francisco startup OpenAI, in which tech giant Microsoft has made huge investments, the chatbot reached 100 million monthly active users within two months. The revolutionary feature of ChatGPT is that it is trained to learn what humans essentially mean when they ask a question.
Superhuman Powers
Less than four and a half months into its existence, ChatGPT has become a global talking point, with media analysts, technology experts, and next-door neighbours alike calling it a game-changer for its bewildering capabilities. The awestruck world’s fixation with what has been described as the latest technological marvel hit a further high with the launch of ChatGPT’s latest variant, GPT-4, on March 14.
In a research paper on an early version of GPT-4, posted on the arXiv preprint server, researchers noted that the AI chatbot has been trained using an unprecedented scale of computing and data. They said that “beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting.
“Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT,” the researchers noted.
On the flip side, ChatGPT has come under growing criticism for its potential to take over many human tasks, which may have disastrous consequences for the working class. For example, educationists have claimed that since students now have ChatGPT at their disposal, schools would think twice before giving homework. There are fears that ChatGPT could take away millions of white-collar jobs in at least 10 sectors.
However, as Empire Diaries recently pointed out in a special report, it’s technically wrong to blame ChatGPT itself for sparking sackings. To put it in a reductionist way, ChatGPT is just a lifeless technology that itself can’t sack anybody. Such decisions are, and will be, taken by ruthless corporate tsars, who use tech tools such as ChatGPT as convenient excuses to fire employees.
GPT-4 has also been drawing flak across the world for peddling misinformation and toxic content, which OpenAI admitted in its own technical report. The Silicon Valley company revealed that the prototype, or early version, of GPT-4, when prompted by user queries, could suggest to users how to kill people, spread antisemitism, and even issue gang-rape threats. OpenAI assured that such behaviour was corrected in the version launched for mass use.