AI society: How people can make artificial intelligence work for all
by OECD
“An instrument of the devil”. This is how the invention of the telephone was greeted in Sweden in the late 19th century, according to Ericsson, a telecommunications firm. From phones to televisions and cars, such horror is quite a common public reaction to the advent of new technologies, even if it often isn’t long before the same people wonder how they ever did without them.
Optimists can get it wrong too, though. Some early supporters of the telephone in Sweden emphasised less the benefits of communication than their belief that the pulse from phone signals would bring relief to their rheumatism.
Artificial intelligence inevitably finds itself subject to similar discussion. It is not wrong to be critical of early technology, but it is also important to remain open to new opportunities. This is the challenge for policymakers: how to take the fullest advantage of AI to deliver the widest possible benefits, while reducing the risks?
Artificial intelligence, or AI as it is popularly known, has evolved greatly since it was
first conceptualised in the Dartmouth Summer Research Project at Dartmouth College in the US state of New Hampshire in 1956. Indeed, since then AI has acquired the potential to reshape economies, by stimulating productivity, improving
efficiency and lowering costs. Its applications are to be seen in a range of sectors, from transport and farming to finance and healthcare, as well as in criminal justice and security. It can even help improve governance in both the public and private sectors, for instance by helping people make better predictions and more informed decisions.
Yet, AI is still in its infancy, and although surveys suggest most business people see AI as an advantage, we do not know how powerful AI can become. In this light, it is inevitable that AI should fuel anxieties and ethical concerns, too. There are questions to answer about the trustworthiness of AI, when it comes to privacy for instance, and about the risks of reinforcing any existing biases on race or gender in the algorithms that underpin AI, or even infringing people’s rights. Concerns are also growing about AI systems exacerbating inequality, market concentration and the digital divide.
No single country or actor has all the answers to these challenges. As AI’s impacts permeate our societies, what we do know is that its undoubted transformational power must be put at the service of people and the planet. We therefore need international cooperation and responses from all interests in society to guide the development and use of AI for the wider good.
The OECD has been leading a wide-ranging reflection on the issues, and in 2018 formed an international expert group to help scope principles for artificial intelligence in society. The new group won wide applause, including from former world chess champion Garry Kasparov who, as he put it in a video address, was perhaps the first knowledge worker in history to have his job threatened by a machine when
he was beaten in chess by an AI-driven supercomputer in 1997.
The expert group’s discussions inspired the OECD Principles on Artificial Intelligence—the first international standard on AI—which were adopted by all OECD members and by several partner countries on 22 May 2019. These principles focus on simple yet essential values, such as transparency, accountability and human rights, for AI to gain acceptance and become a reliable, human-centred technology. Only then will AI be able to deliver on its promise of benefiting both people and the planet (see references).
An OECD report, Artificial Intelligence in Society, examines the evolving AI landscape and highlights key policy questions.
What makes AI different is that it enables technology to learn by doing: it uses large datasets and powerful computing to help it quickly choose the best route among alternative paths forward. This gives AI more autonomy than conventional technology. Deep Blue, the chess-playing computer that outwitted Kasparov, was able to predict and quickly respond as the board developed. Breakthroughs in machine learning since 2011 have improved machines’ ability to make predictions from data, while the maturity of modelling techniques such as “neural networks”, along with even greater computing power, has helped spur AI’s recent growth.
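For readers curious what “making predictions from data” looks like in practice, the short Python sketch below trains a small neural network on synthetic data. It is purely illustrative: the dataset, the scikit-learn library and the parameters are assumptions chosen for the example, not details drawn from the OECD report.

# A small neural network that "learns by doing": it improves its predictions
# from example data rather than following hand-written rules.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for the large datasets the article mentions.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A modest neural network, the kind of modelling technique credited with spurring AI's recent growth.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)                # learn patterns from the training data
print(model.score(X_test, y_test))         # accuracy of its predictions on unseen data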
Investor interest has perked up too. Private equity investment in AI start-ups accelerated from 2016, after five years of steady increases, doubling to US$16 billion in 2017. AI start-ups attracted 12% of worldwide private equity investments in the first half of 2018, up from just 3% in 2011, with all major economies involved, albeit with some variation across companies and industries. The OECD report describes how OECD and non-OECD countries alike are jockeying for position in the AI stakes, with strategies, plans and initiatives in large and small countries.
AI applications are permeating many sectors, with the prospect of noticeable benefits for productivity and for solving complex challenges. One much-publicised example concerns efforts to develop autonomous or semi-autonomous transport vehicles as a way of reducing traffic congestion and pollution, and improving road safety. In healthcare, AI
systems could help diagnose and prevent disease outbreaks early on, while in farming they can help monitor crop and soil health and tailor treatment accordingly. In criminal justice, AI enables predictive policing and the assessment of reoffending risk, as well as combing facial-recognition databases to identify criminals.
One leading public concern is how AI will change the nature of work as it replaces or alters labour. Policies to promote continuous education, training and skills development, and to empower people as they move from one job to another, are vital. Meanwhile, thanks to AI, new jobs are being created, including coaching robots, as well as jobs demanding soft skills in sectors such as health and education.
It is in crime and security that public policy becomes particularly sensitive. How reliable is AI, and how accurate is the data it uses? What room do people have to challenge it, particularly given that, as the OECD report points out, some AI systems are so complex that explaining their decisions may be impossible?
In financial services, where AI can help detect fraud, there are questions about its use to assess creditworthiness or to allocate health insurance on the basis of personal data that may be acquired without individuals’ knowledge or consent. Likewise, the potential invasiveness of marketing that uses AI to mine data on personal consumer behaviour is another divisive subject of debate. On such challenges, the authors of Artificial Intelligence in Society are clear: AI systems must function properly and in a secure and safe manner, so designing systems that are transparent about the use of AI and accountable for their outcomes is critical.
The OECD’s goal is to help build a shared understanding of AI, and to encourage a broad dialogue on these important policy issues. National policies are needed, inspired by the OECD AI Principles, to promote trustworthy AI systems, including those that encourage investment in responsible AI research and development. Rules and guidelines that enable access to data, alongside strong data and privacy protection, may be required.
In short, policies are needed to address public concerns, while promoting AI so that everyone, not just large firms and administrations, but small and medium-sized enterprises, local authorities and people at home as well, can benefit from it. Our societal intelligence will be a determining factor in shaping better AI policies for better lives.
Rory J Clarke
References
AI at the OECD: www.oecd.org/going-digital/ai/
Berryhill, Jamie, Kévin Kok Heang, Rob Clogher and Keegan McBride (2019), “Hello World: Artificial intelligence and its use in the public sector”, OECD Working Papers on Public Governance No 36, https://dx.doi.org/10.1787/726fd39d-en
Hathaway, Claire (2019), “Artificial bias”, in OECD Observer No 317-318, Q1-Q2 2019, https://oe.cd/obs/2D7
OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris, https://doi.org/10.1787/eedfee77-en
OECD Observer (2019), “What are the OECD Principles on AI?”, in No 317-318, Q1-Q2 2019, https://oe.cd/obs/2DX
OECD (2018), “OECD creates expert group to foster trust in artificial intelligence”, news release, 13 September, https://oe.cd/2Ss or https://www.oecd.org/innovation/oecd-creates-expert-group-to-foster-trust-in-artificial-intelligence.htm
Prigent, Anne-Lise (2019), “Societal intelligence”, in OECD Observer No 317-318, Q1-Q2 2019, https://oe.cd/obs/2D8
For “The telephone is the instrument of the devil”, see Ericsson’s website: https://www.ericsson.com/en/about-us/history/communication/how-the-telephone-changed-the-world/the-telephone-is-the-instrument-of-the-devil