Is AI a cause for concern?
On May 24th, Sam Altman, the CEO of OpenAI, spoke at University College London in a highly anticipated on-stage conversation, first with Azeem Azhar and later joined by Margaret McCabe (Founder and Group CEO of Debate Mate), Professor David Barber (Director of the UCL Centre for Artificial Intelligence) and Professor Yvonne Rogers (Director of the UCL Interaction Centre, UCL Computer Science).
Altman argued that, while people are justified in their worries about the effects of AI, the benefits of its use largely outweigh the concerns, and that the right kind of regulation - a little, but not too much - is to be welcomed: “something between the traditional European approach and the traditional US approach.” Beyond regulation, however, Altman believes that “the real solution is to educate people about what’s happening”, so that people understand the ins and outs of AI and what it is capable of. That way, they can weigh the risks for themselves, just as the general public understands that any kind of technology carries its own risks.
Altman also expressed hope about AI’s role in reducing inequality, adding that this “technological revolution” will open the door to many more jobs. He stated that the world will be lifted by this new technology: “My basic model of the world is that the cost of intelligence and the cost of energy are the two limited inputs, sort of the two limiting reagents of the world. And if you can make those dramatically cheaper, dramatically more accessible, that does more to help poor people than rich people.” He went on to say that he is “pretty happy about the trajectory things are currently on.”
In a series of tweets posted earlier this year, Altman wrote that while the shift to a future shaped by AI could happen as swiftly as the one from the “pre-smartphone to the post-smartphone era”, society would need time to adapt and to figure out how best to regulate AI, as we are “potentially not that far away” from scarier models and regulation is “critical”.
ROHINI BHONSLE-ALLEMAND, Assistant Editor
blog.samaltman.com