
Why a principled approach to Artificial Intelligence matters: A view from the UK

Rt Hon. Chloe Smith, MP has been the Member of Parliament for Norwich North since 2009. Chloe served as the UK’s Secretary of State for Work and Pensions in 2022 and as Secretary of State for Science, Innovation and Technology in 2023.

Introduction

Many Commonwealth nations have elections in 2024, covering billions of voters, including:

• Bangladesh, Botswana, Ghana

• India, Kiribati, Maldives

• Mauritius, Mozambique, Namibia

• Pakistan, Rwanda, Solomon Islands

• South Africa, Sri Lanka, Tuvalu

• Most likely in the UK, as well as beyond the Commonwealth in the USA, Indonesia and Mexico

While digital infrastructure will vary between nations, for many citizens these elections will have a major online element. Integrity of information matters so that people’s free choices achieve what they intend. Artificial intelligence (AI) will be a player. For some it is a threat to information integrity, for others it is an opportunity, such as in Mozambique, which has adopted an AI-powered platform to combat election disinformation.

My background

I was proud to work in a groundbreaking Cabinet partnership as the UK’s Science, Innovation and Technology Secretary of State, covering for Rt Hon. Michelle Donelan, MP during her maternity leave in 2023. In that role I set foundations for UK Government policy which will be critical for years to come. I took the decision that our global summit on AI safety, held in November 2023 at Bletchley Park, should go ahead, and set its direction and outcomes. I recruited the chair of the Frontier AI Taskforce. I established governance and accountability inside government on AI, and coordinated the approach across all departments, reviewing the potential risks and the national security implications of artificial general intelligence (AGI).

So I hope I might be able to contribute to discussions on this topic by setting out some of the measures taken by the UK Government, and to an extent by governments around the globe, in their approach to AI.

Why a principled approach to AI matters

I’d like to start by setting out why we in the UK think that a principled approach to AI matters. There is certainly a huge opportunity, and probably a huge risk, in this technology. We don’t know it all yet, but we suspect that it is going to change everything we do – education, business, healthcare, defence – and the way that we live. There could be a revolution in the quality and responsiveness of public services if the right calls are made. That may well include parts of the apparatus that delivers and sustains elections.

I believe accelerating AI investment, deployment and capabilities represent enormous opportunities for public good. They bring economic growth, of course – the prospect of up to US$7 trillion in growth over the next 10 years. But if there is risk as well as opportunity, we have to prepare for both and to insure against the former.

There is no consensus on the risks of AI, but they are potentially profound. Combine that with the speed of change and the possibility that governments are outpaced, and citizens rightly need reassurance. That is why the British Government’s position is to “act before mistakes happen”.

Safety is going to be the determining factor of success. People must have as much faith in this technology as they do in stepping on an aeroplane. Without safety there would be no airline industry – and the same is true of AI. The UK Government is delivering on that vision through the Frontier AI Taskforce – originally chaired by leading tech entrepreneur Ian Hogarth – which is now evolving into the AI Safety Institute.

The Institute is the first state-backed organisation focused on advanced AI safety for the public interest. Its mission is to minimise surprise to the UK and humanity from rapid and unexpected advances in AI. It will work towards this by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance.

The UK Government held the global AI safety summit in November 2023, with Commonwealth and other international partners such as the USA and the EU. It was a generational moment, underpinned by the landmark Bletchley Declaration, in which the UK convened the world’s leading powers to agree a shared understanding of the risks and opportunities of frontier AI, and of the need to collaborate further to address them.

Building upon this, discussions during the Summit brought together a small but expert group of governments, businesses and leading thinkers from academia and civil society to navigate a new course for AI safety. It explicitly examined elections, within the scope of work on risks from the integration of frontier AI into society, including election disruption, bias, impacts on crime and online safety, and the exacerbation of global inequalities.

Rightly, the conversation will continue thereafter, becoming more inclusive and ranging more widely than frontier AI alone. The technology does not follow national boundaries, so any meaningful global agreement must in the end extend to all countries. Indeed, this does not come down only to countries or companies that are keen and transparent: malign actors matter too. If we want safety in an open-source world, we have to deal with all of these issues as widely as possible.

AI can and should be a tool for all. Yet any technology that can be used by all can also be used for ill purposes. In most countries that have elections in 2024, there will be parties, movements or forces that seek to use technology to secure their own goals. This will not always be pretty. It will be a complex and probably confusing picture. Every country, every company, every party, every user and every citizen has their own role to play in maintaining integrity.

Regulation in the UK

In the UK, we will be using regulation to provide the next layer of protection against the risks of AI, and to nurture its opportunities for citizens. The UK Government’s White Paper sets out our approach of tasking existing regulators to oversee the applications of AI in their respective spheres.

Our framework is underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. This approach recognises that particular AI technologies can be applied in many different ways, which means the risks can vary hugely. The philosophy is to support innovation while providing a framework to ensure risks are identified and addressed. Rather than target specific technologies, it focuses on the context in which AI is deployed. I think this will be a proportionate and more agile framework; a heavy-handed and rigid approach risks stifling innovation and slowing AI adoption.

World Leaders and Global Experts Come Together at First Global AI Safety Summit

As artificial intelligence rapidly advances, so do the opportunities and the risks. The UK Government hosted the first global AI Safety Summit in November 2023, bringing together leading AI nations, technology companies, researchers and civil society groups to turbocharge action on the safe and responsible development of frontier AI around the world.

28 countries from across the globe – including nations from Africa, the Middle East and Asia, as well as the USA and the EU – agreed the Bletchley Declaration on AI Safety, which recognised the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community. World leaders and those developing frontier AI systems recognised the need to collaborate on testing the next generation of AI models against a range of critical national security, safety and societal risks. Partners agreed that the ‘Godfather of AI’ Professor Yoshua Bengio, a Turing Award-winning AI academic and member of the UN’s Scientific Advisory Board, will lead the delivery of a ‘State of the Science’ report, which will help build a shared understanding of the capabilities and risks posed by frontier AI.

The UK Prime Minister, Rt Hon. Rishi Sunak, hosted the AI Summit and said:

“Until now the only people testing the safety of new AI models have been the very companies developing it. We shouldn’t rely on them to mark their own homework, as many of them agree. Today we’ve reached a historic agreement, with governments and AI companies working together to test the safety of their models before and after they are released. The UK’s AI Safety Institute will play a vital role in leading this work, in partnership with countries around the world.”

The Deputy Prime Minister of Australia, Hon. Richard Marles, MP said:

“Australia welcomes a secure-by-design approach where developers take responsibility. Voluntary commitments are good but will not be meaningful without more accountability. Australia is pleased to partner with the UK on this important work.”

The Canadian Minister of Innovation, Science and Industry, Hon. François-Philippe Champagne, MP said:

“Canada welcomes the launch of the UK’s AI Safety Institute. Our government looks forward to working with the UK and leveraging the exceptional Canadian AI knowledge and expertise, including the knowledge developed by our AI institutes to support the safe and responsible development of AI.”

The Singaporean Minister for Communications and Information, Hon. Josephine Teo said:

“The rapid acceleration of AI investment, deployment and capabilities will bring enormous opportunities for productivity and public good. We believe that governments have an obligation to ensure that AI is deployed safely. We agree with the principle that governments should develop capabilities to test the safety of frontier AI systems.”

The AI Summit was held at the historic Bletchley Park, the English country estate that became the principal centre of Allied code-breaking during the Second World War.
