21 minute read

CONTENTS

NEW CHALLENGES & OPPORTUNITIES FROM AI FOR INDIA’S PRIVATE & DEEMED UNIVERSITIES

Once again, a new academic year is fast approaching and the Indian higher education scene is buzzing with admission activities. Private and deemed universities have been playing an ever increasing role in the country’s higher education, as the nation relies on them to enhance the Gross Enrollment Ratio (GER) to acceptable levels.

Advertisement

Icfai Foundation For Higher Education Soaring Higher On Academics And Research

From international tie-ups to poverty alleviation, and from business case studies to multidisciplinary research, IFHE is making its presence felt across higher education and research in business..

True International Schooling Finally Arrives In Kochi

Cochin International School has opened its doors recently for students to experience IB & Cambridge schooling in the city, with premium features including over 2 lakh sq ft of built-up area,

SOA’S EDGE IN PLACEMENTS AND STARTUP INCUBATION

The Odisha based SOA deemed-to-be-university has been excelling in placements year after year, moving up the rungs by way of both quantity and quality of job offers from leading corporates, as well..

SOLUTIONS TO NATIONAL CHALLENGES THROUGH RESEARCH & ACADEMICS

SRMIST, the flagship of SRM Group of Universities, led by its Founder & Chancellor and sitting Lok Sabha MP Dr. TR Paarivendhar, is emerging as a powerhouse in research.

Jssaher Is Forging Ahead In All Key Domains That Matter

For Mysuru headquartered JSS Academy of Higher Education & Research, each academic year is a quest to outdo itself, as there are not many private sector peers that are far ahead of it. And this deemed-to-be university is doing it with elan year after..

SATHYABAMA IS THE PLACE TO STUDY AI, ROBOTICS, BLOCKCHAIN, IOT & MORE

Chennai based deemed university Sathyabama Institute of Science & Technology is fast emerging as one of the leading destinations to study futuristic domains including AI, Robotics, Blockchain, Internet of..

KARUNYA’S SOLUTIONS FOR GLOBAL WELLBEING

Coimbatore based deemed-tobe university, Karunya Institute of Technology & Sciences (KITS) is fast emerging as a powerhouse in both academics and research with the highest NAAC A++ accreditation, and with funded projects worth Rs 25 crore..

How Galgotias University Is Grooming Exceptional Achievers

What have the StockPe cofounder Shubham Rawal, E&Y Partner Dr. Avantika Tomar and Aaj Tak Anchor Shubhankar Mishra have in common? Many things may be, as they are all young high achievers, but mainly they are all alumni of Galgotias..

UP CM YOGI ADITYANATH ENTERS 7th YEAR OF STELLAR LEADERSHIP

India’s most populous state of Uttar Pradesh has completed six years under the leadership of Chief Minister Yogi Adityanath, and the state has grown leaps and bounds in various sectors including investments, infrastructure, agriculture, tourism, socio-economic development, law and order, quality assurance and ease of doing business.

The Many Miracles Of Sleep

New research has discovered why sleep is so important to the brain: It not only flushes away the brain's waste products, sleep also consolidates your memory and protects you against the risk of Alzheimer's disease among other benefits. Here’s what actually happens in your brain if you get a good night’s sleep – and if you ..

Purva Back On A Rapid Growth Track

Bengaluru headquartered real estate development major Puravankara Group has had several firsts to its credit in its chequered history since its inception in 1975. Starting from the first ever multi-storey project in Bengaluru, Founder & Chairman Ravi Puravankara led the group’s flagship firm..

THE JOBS LEAST LIKELY TO BE TAKEN OVER BY ARTIFICIAL INTELLIGENCE, FOR NOW

Amid the talk of artificial intelligence replacing workers, experts say there are some jobs computers aren’t taking – at least for a while.

KIA'S TESLA KILLER

Kia's new three-row electric SUV is a surprising contender to emerge as the biggest threat to Tesla.

NVIDIA IS SUDDENLY AN ‘AI CHIP MAKER’: ENTERS BIG LEAGUE AS A $1 TRILLION FIRM

The chipmaker originally known for its Graphic Processing Units (GPUs) used in gaming computers, later reinvented itself as the blockchain posterboy as its GPUs were heavily used for

THE ONE THING THAT DETERMINES SUCCESS, ACCORDING TO STEVE JOBS

I'm not particularly smart. I'm not particularly athletic. I don't have any real talents (when talent is the ability to learn a subject or..

Artificial Intelligence

One of the many recent mysteries is why hundreds of extremely intelligent and rich people think that a moratorium on further development of artificial intelligence is feasible, or that a ..

Protect Yourself Against 10 Common Travel Scams

Tips on how to avoid swindles, plus a sampling of destinations where tourists are at risk.

Prospects Up For Prestige As Headwinds Turn To Tailwinds

What sets apart a leader in any sector is how it performs during downturns rather than in good times. The first three quarters of this fiscal delivered serious headwinds for the entire real estate sector, as it witnessed.. unprecedented rate hikes by Reserve Bank of India (RBI) in its bid to contain the inflation contagion that hit Indian..

NEW CHALLENGES &

Opportunities From Ai

FOR INDIA’S PRIVATE & DEEMED UNIVERSITIES

NEW CHALLENGES & OPPORTUNITIES FROM AI FOR INDIA’S PRIVATE & DEEMED UNIVERSITIES

ONCE AGAIN, A NEW ACADEMIC YEAR IS FAST APPROACHING AND THE INDIAN HIGHER EDUCATION SCENE IS BUZZING WITH ADMISSION ACTIVITIES. PRIVATE AND DEEMED UNIVERSITIES HAVE BEEN PLAYING AN EVER INCREASING ROLE IN THE COUNTRY’S HIGHER EDUCATION, AS THE NATION RELIES ON THEM TO ENHANCE THE GROSS ENROLLMENT RATIO (GER) TO ACCEPTABLE LEVELS.

Though the clouds of Covid have reemerged and are lingering on the horizon, nobody expects it to play spoilsport this year to the extent of lockdowns and such extreme measures. But a new kind of challenge has reared its head in the form of Artificial Intelligence (AI), or more specifically, Generative AI or Large Language Models (LLMs) whose first and most flamboyant example has been ChatGPT.

College essays used to be a benchmark that separated the talented students from the rest of the class. Similar was the case with mathematical problem solving. It was something that calculators or even the usual computer programs couldn’t help a student with. And in recent years, yet another of such benchmarks had emerged, which was writing software programs to solve challenging problems in different domains.

But, in one stroke, ChatGPT and its likes have made it all history. This seemingly revolutionary software system from a startup named OpenAI has stormed the classrooms with the power of artificial intelligence. It can write college level essays that fetch A grades, it can solve complex maths problems, and it can not only write software programs but even debug your programs before you finish reading this sentence.

Its proponents, and much of the tech world, are saying that this technological advancement is not to be feared, but embraced. Their logic is that just like how calculators, personal computers and the internet didn’t destroy education, but only enhanced it, the likes of ChatGPT will only add great value to education.

There are however two major flaws to this argument. The first flaw is whether calculators, computers & the world wide web have really enhanced education. For sure, they have had positive effects, but to ignore their negatives would be grossly untruthful.

The ubiquitous presence of calculators in everything from computers to phones and even smartwatches now, have taken students’ arithmetical skills to abysmally low levels like it has never happened in modern human history. Doing mental maths is not only joyous, but it gives a powerful conceptual framework about how numbers in different proportions can be made to work together to reveal the meaning of various operations in real world settings.

When it comes to computers and the internet, some may think it is more difficult to find the negatives, but this is absolutely not true. For one, they have redefined - for the worsewhat is truth. Truth has forever been tinted with subjective opinions masquerading as information. What is the source for truth today? Google it? Or read its Wikipedia entry? Anyone who knows the basics of digital tech realizes that such things can be gamed.

So the first fallacy to counter is that calculators, computers and the internet were choke full of positives, with no negatives involved. It can be clearly seen that while they have made humanity more productive, it was at the cost of making it dumber. People today have no passion or energy to find out the truth from real world encounters, but just take the easy way out of parroting the digital version of truth that is readily available at a click.

However, the second reason why the likes of ChatGPT will not end up only adding value to education, but will have serious negative impacts too, has got to do with how critically different this innovation is compared with earlier ones like calculators, computers or the internet.

But to understand that we have to first understand how ChatGPT works. Though it is touted as a piece of true artificial intelligence, in reality it is only a kind of machine learning system called Large Language Model (LLM). A rudimentary model of such a system is the auto-complete seen in email and chat editors, in which based on your earlier responses (as well as other similar responses from other people) the system guesses a probable set of suggestions from which you can choose from.

LLMs take this core idea to an exponential level, by training the system on a massive amount of data, like the internet, publicly available books, news, periodicals, public chats, forum discussions, social media, published research studies, code repositories and the likes to detect millions of patterns about how pieces of information correlate to each other, and even more importantly how human beings communicate about such data.

So, what are the pitfalls about using such a system in education? One is the truthfulness of its output, or more precisely the lack of absolute truthfulness of the text it generates. Just like how its sources like websites, user edited encyclopaedias like Wikipedia, public books, partisan news outlets etc have false information, ChatGPT’s output will also be seriously imperfect, and therefore especially unsuitable for academic rigour.

But the real issue with ChatGPT is not this lack of absolute truth, but the fact that it presents this imperfect output as the perfect truth using human-like language constructs that make the text very much plausible and persuasive.

Renowned researchers in AI like Arvind Narayanan of Princeton have called out ChatGPT for this kind of dishonest behaviour, and have boldly stated that it is an example of a bullshitter program, based on what bullshitting is, in an academic perspective, which is pushing partial truths or subjective opinions as the absolute truth.

There are even greater dangers for the society from such LLMs like ChatGPT according to the ace computational linguist Emily Bender of University of Washington, which she says is unnecessarily and dangerously mimicking human behaviour - with false or unreal emotions - for its mass appeal. She has called out LLMs as stochastic (probable but never precise) parrots and has decried efforts by OpenAI CEO Sam Altman to pull down humanity to the level of stochastic parrots for the glorification of their ChatGPT product. hares of Nvidia Corporation, a pioneering graphics chipmaker that is already one of the world’s most valuable companies, have been surging after the California company forecast a sharp jump in revenue last Thursday (May 25). Riding on its dominance in the gaming and artificial intelligence (AI) sectors, and the potential that the generative AI holds in reshaping the technology sector, Nvidia breached the $1 trillion market cap on May 30, making it the first US chipmaker to enter the trillion-dollar club and ahead of storied competitors such as Intel and AMD.

Another critical view of LLMs like ChatGPT, Microsoft Bing Chat, Google Bard etc has been about how these tech firms are seeking monopolistic and rent-seeking behaviour from the enormous software expertise and computing power they have amassed to gobble up all the public data without permission or regard for copyright, and then profiteering from it on a massive scale. Indeed, the swift rise of OpenAI to a $29 billion startup on the back of paid monthly subscriptions should be an eye opener for everyone.

Apart from its lack of truthfulness, mimicking of humanity, false or unreal emotions, and rent-seeking behaviour on public, proprietary or copyrighted data, the most disturbing facet of LLMs like ChatGPT is how much more dumber it will make our students and eventually humanity as a whole.

If calculators killed mental maths, and computers & internet killed real life learning through observation & experience, the likes of ChatGPT are destined to dumb down things to even deeper levels by offering readymade essays, machine solved maths theorems and machine debugged code that will invariably result in a lack of analytical, critical and problem solving skills in our young generations, which we increasingly need for solving humanity’s emerging problems.

However, the advent of Generative AI and programs like ChatGPT are not without opportunities too. Higher educational institutions in India can tame this beast and make it act as per their sector’s long term visions, if they are aware of the opportunities this opens up. To mention just one, new hot skillsets have emerged due to ChatGPT like prompt engineering, chain of thought prompting, and even security skills against malicious acts like prompt injection into LLMs.

And ChatGPT is just the beginning. Newer Generative AI models that achieve text-to-image like DALL-E 2, and 2D-to-3D renderers have already arrived, and the next horizon would be text-to-video. And the challenges it opens up (for instance, fake news) will be equal to the innovative uses (say, highly creative movies on shoestring budgets) it can be applied to. Universities would do well to keep their eyes and ears open to such opportunities and see how they can add value to their research projects as well as academic delivery.

THE CHIPMAKER ORIGINALLY KNOWN FOR ITS GRAPHIC PROCESSING UNITS (GPUS) USED IN GAMING COMPUTERS, LATER REINVENTED ITSELF AS THE BLOCKCHAIN POSTERBOY AS ITS GPUS WERE HEAVILY USED FOR THE MINING OF BITCOIN, ETHEREUM AND OTHER SUCH LEADING CRYPTOS. EVEN WHILE BOTH THE GAMING AND CRYPTO BUSINESSES HAVE SLOWED DOWN, NVIDIA HAS SUDDENLY TOOK THE MARKET BY SURPRISE BY REPORTING A QUARTERLY PROFIT OF MORE THAN $2 BILLION AND REVENUES OF $7 BILLION LAST WEEK, BOTH HIGHER THAN WALL STREET PROJECTIONS. NVIDIA’S SURGING FORTUNES NOW COMES ON THE BACK OF RISING DEMAND FOR ITS GPUS BEING USED FOR THE GENERATIVE AI PROGRAMS LIKE CHATGPT.

The company had reported a quarterly profit of more than $2 billion and revenues of $7 billion last week, both significantly higher than Wall Street projections. The surge in the demand for graphics processing units (GPUs) –advanced chips which Nvidia makes, that are put to specialised uses – is driven by artificial intelligence applications such as those being released by OpenAI and Google. Buoyant demand for chips in data centres is the primary reason why Nvidia has been surging, with the latest trigger being the generative AI rush. Apple Inc, Alphabet Inc, Microsoft Corp and Amazon.com Inc are the other US companies that are part of the trilliondollar club. While Facebook owner Meta Platforms Inc had touched this milestone in 2021, it has since slid down to a market cap of around $650 billion. Traditionally, the CPU, or the central processing unit, has been the most important component in a computer or a server, a market dominated by Intel and AMD. GPUs are relatively new additions to the computer hardware market and were initially sold as cards that plug into a personal computer’s motherboard to basically add computing power to an AMD or Intel CPU.

Nvidia’s main pitch over the years has been that graphics chips can handle computation workload surge, such as what is required in high-end graphics for gaming or animation applications, way better than standard processors. AI applications too require a tremendous amount of computing power and, as a result, are progressively getting GPUheavy in terms of their backend hardware. Most of the advanced systems used for training generative AI tools now deploy as many as half a dozen GPUs to every one CPU used, completely changing the equation where GPUs were seen as just add-ons to CPUs. Nvidia completely dominates this global market for GPUs and is likely to maintain this lead well into the foreseeable future, the primary reason for the surge in stock valuations.

According to Jen-Hsun “Jensen” Huang, the Taiwanese-American electrical engineer who co-founded Nvidia and is the President and CEO of the company, the data centres of the past, which were largely CPUs for file retrieval, are going to be, in the future, focussed on generative data. “Instead of retrieving data, you’re going to retrieve some data, but you’ve got to generate most of the data using AI … So instead of millions of CPUs, you’ll have a lot fewer CPUs, but they will be connected to millions of GPUs,” Huang, who is known for as much for his phenomenal business acumen as for his penchant for turning up in a trademark black leather jacket, told CNBC in an interview earlier this month.

Nvidia was co-founded in 1993 by Huang, Chris Malachowsky, a Sun Microsystems engineer, and Curtis Priem, a graphics chip designer at IBM and Sun Microsystems. The three men famously founded the company in a meeting at a roadside diner in San Jose, primarily looking to solve the computational challenge that video games posed, with some initial venture capital backing from Sequoia Capital and others. Before it was formally christened, the co-founders had named all their files NV, short for “next version”. This moniker was subsequently combined with “invidia”, the Latin word for envy.

Today, if Taiwan-based foundry specialist TSMC is now unquestionably the most important backend player in the semiconductor chips business, Nvidia – alongside Intel, AMD, Samsung and Qualcomm – line up on the front end. For nearly three decades, Nvidia’s chips have been coveted by gamers shaping what’s possible in graphics and dominating much of the market since it first popularised the term graphics processing unit with its GForce 256 processor. Now the Nvidia GPU chips such as its new ‘RTX’ range are at the forefront of the generative AI boom based on large language models. What is really remarkable about Nvidia is the company’s dexterity at shrugging off near bankruptcies (at least three times in the last 30 years) and Huang’s uncanny ability to catch new business waves. The company caught the gaming wave really early on, hitched on to the crypto wave and emerged as the most sought after chip for crypto mining hardware, tried riding on the metaverse wave too and has now latched on successfully to the AI wave. And all along, it has adapted its GPUs for each of these diverse requirements, edging out the traditional CPU makers.

Catching the latest AI wave has meant that Nvidia’s data centre business recorded a growth of nearly 15 per cent during the first quarter of this calendar year versus flat growth for AMD’s data centre unit and a sharp decline of nearly 40 per cent in Intel’s data centre business unit. Alongside the application of the GPUs, the company’s chips are comparatively more expensive that most CPUs on a per unit basis, resulting in far better margins.

Analysts say that Nvidia is ahead of the others in the race for AI chips because of its proprietary software that makes it easier to leverage all of the GPU hardware features for AI applications. According to Huang, Nvidia is likely to maintain the lead as the company’s software would not be easy to replicate.

“You have to engineer all of the software and all of the libraries and all of the algorithms, integrate them into and optimise the frameworks, and optimise it for the architecture, not just one chip but the architecture of an entire data centre,” he was quoted by Reuters on a call with analysts last week.

And it’s not just the chips, but Nvidia has the systems that back the processors up and the software that runs all of it, making it a full stack solutions company. And now, the data centre segment is roughly 70 per cent of Nvidia’s revenue mix, with over half of that is directly related to large language models (LLMs) and generative AI tools such as ChatGPT and Google Bard.

In addition to GPU manufacturing, Nvidia offers an application programme interface – or API, a set of defined instructions that enable different applications to communicate with each other – called CUDA, which allows the creation of parallel programs using GPUs and are deployed in supercomputing sites around the world. It also has a leg in the mobile computing market with its Tegra mobile processors for smartphones and tablets, as well as products in the vehicle navigation and entertainment systems. Nvidia’s resilience is a case study in a business segment that has very high entry barriers and offers a phenomenal premium for specialisation. The way the global semiconductor chip industry works today, it is dominated almost entirely by some countries and, in turn, a handful of companies. For instance, two nations – Taiwan and South Korea – make up about 80 per cent of the global foundry base for chips.

TSMC, the world’s most advanced chipmaker, is headquartered in Taiwan, while only a handful of companies – Samsung, SK Hynix, Intel and Micron – can put together advanced logic chips. One firm in the world – the Netherlands based ASML – has the capability to produce a type of machine called an EUV (extreme ultraviolet lithography) device, without which making an advanced chip is simply not possible. So much so, that when US President Joe Biden was in the Netherlands in January, he asked the Dutch government to block exports from ASML to China as part of the efforts by the US to cut off Beijing’s ability to make advanced semiconductors, according to a Reuters report. Cambridge-based chip designer Arm is the world’s biggest supplier of chip design elements used in products from smartphones to games consoles (which Nvidia was keen to acquire). It is a nearly closed manufacturing ecosystem with very high entry barriers, as China’s SMIC, a national semiconductor champion that is now reportedly struggling to procure advanced chip making equipment after a US-led blockade, is finding out.

In this market, Nvidia, almost comprehensively, dominates the chips used for high-end graphics-based applications and as a result, has come to dominate multiple end-use sectors that include gaming, crypto mining and now AI.

And as more AI inferencing starts to happen on local devices, with an increasing number of people accessing tools such as ChatGPT and Google’s Bard, personal computers will progressively need powerful yet efficient hardware to support these complex tasks. For this, Nvidia is pushing its new ‘RTX’ range of GPUs that it claims is adapted for low-power inferencing for AI workloads. The GPU essentially operates at a fraction of the power for lighter inferencing tasks, while having the flexibility to scale up to high levels of performance for heavy generative AI workloads.

To create new AI applications, Nvidia is also pushing developers to access a complete RTX-accelerated AI development stack running on Windows 11, making it easier to develop, train and deploy advanced AI models. This starts with the development and fine-tuning of models with optimised deep learning frameworks available via the Windows Subsystem for Linux – the open source operating system that a lot of developers rely on. Developers can then move to the cloud to train on the same Nvidia AI stack, which is available from every major cloud service provider, and subsequently optimise the trained models for fast inferencing with tools such as Microsoft Olive.

According to the company, the developers can ultimately deploy their AI-enabled applications and features to an install base of over 100 million RTX PCs and workstations that have been optimised for AI, in effect making it an almost end-to-end solutions stack for those looking at developing or adapting AI applications. “AI will be the single largest driver of innovation for Windows customers in the coming years,” according to Pavan Davuluri, corporate vice president of Windows silicon and system integration at Microsoft.

In 30 years, Nvidia died almost thrice, each time shedding flab and successfully managing to pivot to a new growth centre. The company faced a major setback last year after regulators blocked a $40 billion takeover of the Softbank-backed British design company Arm over competition concerns.

Its biggest risk perhaps stems from its relianceon TSMC to make nearly all its chips, leaving it vulnerable to geopolitical shocks.

But the US Chips Act passed last year has set aside $52 billion to incentivise chip companies to manufacture on US soil and TSMC is spending some $40 billion to build two chip fabrication plants in Arizona. That should somewhat derisk US chip makers such as Nvidia, alongside Intel and AMD, going forward.

Specialised hardware manufacturers such as Nvidia are projected to be big winners here as AI gets more “intelligent” and progressively moves to each and every laptop or desktop.

(By Anil Sasi for Indian Express)

Artificial intelligence could lead to the extinction of humanity, experts - including the heads of OpenAI (ChatGPT creator) and Google Deepmind - have warned. And dozens have supported a statement published on the webpage of the Centre for AI Safety, which reads “Mitigating the risk of extinction from AI should be a global priority alongside other societalscale risks such as pandemics and nuclear war.” But other experts in AI say that these current winners in the AI turf are using such doomsday scenarios to camoflauge the near term complications from their own projects like ChatGPT.

am Altman, chief executive of ChatGPT-maker Open AI, Demis Hassabis, chief executive of Google DeepMind and Dario Amodei of Anthropic have all supported the statement. The Centre for AI Safety website suggests a number of possible disaster scenarios, like the following:

AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons. AI-generated misinformation could destabilise society and “undermine collective decision-making”. The power of AI could become increasingly concentrated in fewer and fewer hands, enabling “regimes to enforce narrow values through pervasive surveillance and oppressive censorship”. Enfeeblement, where humans become dependent on AI “similar to the scenario portrayed in the film Wall-E”.

Dr Geoffrey Hinton, who issued an earlier warning about risks from superintelligent AI, has also supported the Centre for AI Safety’s call. Yoshua Bengio, professor of computer science at the university of Montreal, has also signed.

Dr Hinton, Prof Bengio and NYU Professor Yann LeCun are often described as the “godfathers of AI” for their groundbreaking work in the field - for which they jointly won the 2018 Turing Award, which recognises outstanding contributions in computer science.

But Prof LeCun, who also works at Meta, has said these apocalyptic warnings are overblown tweeting that “the most common reaction by AI researchers to these prophecies of doom is face palming”.

Many other experts similarly believe that fears of AI wiping out humanity are unrealistic, and a distraction from issues such as bias in systems that are already a problem.

Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that sci-fi-like disaster scenarios are unrealistic: “Current AI is nowhere near capable enough for these risks to materialise. As a result, it’s distracted attention away from the near-term harms of AI”.

Oxford’s Institute for Ethics in AI senior research associate Elizabeth Renieris told BBC News she worried more about risks closer to the present.

“Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable,” she said. They would “drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide”.

Many AI tools essentially “free ride” on the “whole of human experience to date”, Ms Renieris said. Many are trained on human-created content, text, art and music they can then imitateand their creators “have effectively transferred tremendous wealth and power from the public sphere to a small handful of private entities”.

But Centre for AI Safety director Dan Hendrycks told BBC News future risks and present concerns “shouldn’t be viewed antagonistically”. Addressing some of the issues today can be useful for addressing many of the later risks tomorrow,” he said.

Media coverage of the supposed “existential” threat from AI has snowballed since March 2023 when experts, including Tesla boss Elon Musk, signed an open letter urging a halt to the development of the next generation of AI technology.

Arvind Narayanan

That letter asked if we should “develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us”. In contrast, the new campaign has a very short statement, designed to “open up discussion”. The statement compares the risk to that posed by nuclear war. In a blog post OpenAI recently suggested superintelligence might be regulated in a similar way to nuclear energy: “We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts” the firm wrote.

Both Sam Altman and Google chief executive Sundar Pichai are among technology leaders to have discussed AI regulation recently with the British Prime Minister .

Speaking to reporters about the latest warning over AI risk, Rishi Sunak stressed the benefits to the economy and society. “You’ve seen that recently it was helping paralysed people to walk, discovering new antibiotics, but we need to make sure this is done in a way that is safe and secure,” he said.

“Now that’s why I met last week with CEOs of major AI companies to discuss what are the guardrails that we need to put in place, what’s the type of regulation that should be put in place to keep us safe.

“People will be concerned by the reports that AI poses existential risks, like pandemics or nuclear wars. I want them to be reassured that the UK government is looking very carefully at this.”

He had discussed the issue recently with other leaders, at the G7 summit of leading industrialised nations, Mr Sunak said, and would raise it again in the US soon. The G7 has recently created a working group on AI.

(By Chris Vallance for BBC)

This article is from: