A sense of direction for artificial intelligence

November 2020


Influencing regulation – now is the time...

At the beginning of 2020, Sundar Pichai, CEO of US tech giant Alphabet, made a plea for a regulatory framework for AI. Writing in the Financial Times in January 2020, Pichai highlighted some of the amazing advances AI is bringing to the world, but also the need to be “clear-eyed about what could go wrong”. Fears around deep-fakes, misuse of facial recognition and so on have the potential to breed suspicion and mistrust, cutting off the plant before it has a chance to grow. So regulation is necessary for the safe development of AI technologies. But what Pichai was asking for were global standards, with international alignment.

Of course, this seems entirely reasonable. Technology, perhaps more than any other industry in history, is a global business. While differences in approach and local customs should be recognised, many of the issues are the same the world over. Expecting new products and services to be developed with every different nation’s particular laws and regulatory systems in mind would be absurd – wouldn’t it...?

We’ve heard similar voices across nascent technologies. A recent online discussion about the rollout of 5G in the automotive sector highlighted similar concerns. Different solutions lead to a fragmented market and an inability to scale technologies globally.

So what is happening among policy makers, legislators and regulators to provide a harmonised, bespoke regime for AI to evolve in? Given the widespread and justifiable public anxiety about the ethical issues, what are governments doing to let the industry grow while protecting people from overreach and intrusion?

We carried out extensive research to assess what governments and other authorities have been doing to meet this challenge. We found that the picture is atomised and conflicting. Efforts at international level have produced some valuable statements of high-level principles, but that is about as far as coordination has reached.


Perhaps the European Union has gone furthest in producing a consistent approach across its 27 member states. But overall the picture looks dire. It’s not that nothing is happening. International bodies, governments, NGOs, standards organisations, regulatory bodies – all seem to have something to say on AI, to have a project in train, a new initiative on the blocks. There’s a well-intentioned cottage industry in self-regulatory codes of conduct. But apart from some overall general themes, there is very little tying this all together.

What are developers to do? Stick to their local rules as they evolve and hope for the best at the point of international roll-out? Look for the most stringent system and try to follow that (as many businesses have done in relation to the EU’s GDPR on personal data privacy)? Or just hide their heads in the sand and hope that when they reach the market they can respond to any regulatory or legal challenges that are thrown their way?

Given the state of flux for regulation of AI, there is a real opportunity now to influence the direction that policy takes. Whether through the media or through responses to formal consultations, innovators should make their voices heard at this crucial stage of regulatory development.

This report (and a more detailed version that is available here) gives an overview of our research with AMRC, reviews what we discovered and looks at what we can expect. But, most importantly, it also invites you to shape the debate...

Isabel Teare, Senior Legal Adviser
+44 (0) 1223 222402
isabel.teare@mills-reeve.com

Mark Pearce, Partner
+44 (0) 113 388 8264
mark.pearce@mills-reeve.com

Stephanie Caird, Principal Associate
+44 (0) 1223 222457
stephanie.caird@mills-reeve.com


A “Hippocratic oath” for scientists

“We need a Hippocratic oath in the same way it exists for medicine. In medicine, you learn about ethics from day one. In mathematics, it’s a bolt-on at best. It has to be there from day one and at the forefront of your mind in every step you take.”

Hannah Fry, quoted in The Guardian, 16 August 2019

Article by Ian Sample in The Guardian


What we did

We identified and reviewed materials from over 50 different sources, some of which deal with AI at a high level, looking at the ethical problems and setting broad outlines for directions and priorities, and others which home in on specific issues like patent protection and data privacy. We considered the many different legal fields that impact on AI in its various applications, and looked at the effect of existing laws and regulations. We imagined what changes will come next and what good – and bad – might look like. Finally, we thought about what is likely to happen in the real world and what innovators can do to prepare for, and where possible shape, the future.


An area of focus

Healthtech and medtech


Our thinking gained focus by looking at one sector in particular – healthtech and medtech. We use these terms to refer to a broad group of technologies that use digital tools and data to improve health and patient care. In our work with clients, both large and small, we see amazing healthtech/medtech innovations on a daily basis. These offer the potential to generate new medicines through mass analysis of existing data, accelerate and streamline diagnosis, support well-being, patient care and mental health remotely through wearables and apps, and empower healthcare organisations to improve efficiency and cost-effectiveness. Not only is healthtech/medtech an area where innovation is flourishing in so many different ways, it is also subject to stringent regulation and high legal risk.

Data protection and privacy

The ethical issues around AI come into sharp focus in this sector. The risk of harm to individuals is high compared with other fields of technology. Imagine a diagnostic tool that misses an advanced cancer in a radiology image, or an app for wellbeing that fails to give the right insulin dose to a diabetic patient. Health data is very sensitive – it falls into the highest category of protection under EU data privacy law. Using this kind of data in ways that fail to maintain an appropriate level of protection for individual privacy and rights of control is a flashpoint for many patients.

Control of data and the results of research can be a vital resource. If databases that are required to develop a new technology are retained for one user, or available only at high cost, new therapies or diagnostic tools may not become available, or may be very expensive. Should competition law, or intellectual property rules, address these issues to enable access to key data resources?

The ’black box’ problem: getting approval for a new medical technology often involves a detailed explanation of how it operates. If even the developer does not fully understand how an AI system is analysing scans, for example, does it reach the bar for approval and certification?

We worked with colleagues at the Association of Medical Research Charities to better understand these issues and to develop our thinking, and are grateful for their input.

Focusing on healthtech/medtech doesn’t always make sense. Most of the work we found in our research is sector-agnostic and will apply, more or less, to any field of application. But it is nevertheless useful when thinking through the issues to tie them to practical topics that we come across day to day.


Putting the cart before the horse?

“There is an assumption that AI somehow replicates or mimics the behaviour and capabilities of the human brain. This is not true any more than that the car resembles the mechanism or behaviour of a horse. The car still revolutionised transport and took the place of the horse for the majority of transport tasks, however. AI is just another form of mechanisation, applying tools and techniques to data instead of physical machinery. Just as the car required new legislation, so will the introduction of AI and machine learning algorithms to our society.”

Ed Bullen, Telstra Purple


What we found...

Innovators doing it for themselves

We found many really quite inspiring efforts to build a self-regulatory approach to AI innovation. Individual businesses, coordinated groupings and not-for-profit organisations have done some hugely valuable work in thinking through the ethical concerns and how they might be addressed. This approach has four important benefits:

- It is closely tied to what innovators are actually doing, and so does not try to build in impractical solutions to problems.
- It is responsive and up to date.
- It is often built on an international basis – innovators are rarely confined by national boundaries in their approach to a problem.
- It has the potential to build trust among consumers, which may not follow where developers are seen to be dragged to compliance reluctantly.

So far so good. But it is certainly not the case that all AI innovators are willing to sign up to voluntary controls on their work. And they are unlikely to agree on the best approach. Players left outside the group may get ahead because of their freedom to ignore the rules – meaning that ‘bad’ operators can gain at the expense of the ‘good’.


Tinkering with what we already have

A second approach to the problem is to tweak and tailor existing laws and regulations so that they fit – perhaps not perfectly, but well enough. We identified at least ten existing areas of law that already have an impact on AI. From cyber security to privacy, from contract to competition, the law already has a lot to say on the subject. Take racial bias in training data that leads to unfair discrimination in the provision of services, for example. This can already be addressed in many parts of the world through equalities legislation. Some development and tinkering may be needed to address loopholes and specific concerns. But why reinvent the wheel when a lot of the issues that we are trying to combat are already covered, to some extent, by existing legal structures?

Fast fashion or a tailored suit?

This makes a lot of sense and, like self-regulation, offers a quicker response than starting afresh. Unlike self-regulation, there are real teeth available in the form of legal sanctions for failure to comply. It has the attraction of consistency with the law as it applies to other fields of business activity.

Less attractive to industry, however, will be the poor fit with technology. EU data privacy law, for example, gives individuals a right to ask for their data to be erased in certain situations. Removing information about one individual from a large dataset is likely to cause real headaches for dataset owners.

Legal structures have been built up through evolution over many years, and vary widely between countries. Even within the EU, where heroic efforts are constantly made to harmonise laws, the disparities can be extreme. Only in the last few years has the protection of trade secrets received a degree of coordination between countries. And because existing laws have been built up in this gradual way, it is difficult and burdensome for innovators, especially the smaller ones, to understand and navigate what looks rather like a tangled thorn bush of different laws.


A blank sheet of paper

What Pichai seemed to be calling for in his Financial Times article was a bespoke regulatory system developed to fit both the ethical problems and the practical possibilities of AI innovation. The blank sheet of paper is indeed appealing, and countries around the globe are taking up their pencils to begin sketching out a new structure. China’s National New Generation Artificial Intelligence Governance Professional Committee issued the New Generation of Artificial Intelligence Governance Principles - Development of Responsible Artificial Intelligence in June 2019, proposing a framework and action guide for artificial intelligence governance. The EU’s AI policy project (see, for example, the EU Commission White Paper On Artificial Intelligence - A European approach to excellence and trust) combines proposals to adjust existing laws with a new framework.

These projects take time, however. Especially where a degree of international coordination is built in, it can take many years to develop a structure, stress-test it, and get it through the necessary law-making processes. By which time, of course, things will have moved on again, and it may already be looking dated. Something new and bespoke will attract input and comment from many different stakeholders, all keen to see their priorities reflected. Balancing these conflicting voices can lead to something which doesn’t work well in practice and is unsatisfactory to everyone.

Building sandcastles

One approach that can support the development of regulation without getting in the way of innovation is the use of “regulatory sandboxes”. These are widely used by regulators in innovative sectors to develop new regulatory approaches while trialling novel products and services. The United Kingdom’s Information Commissioner, for example, runs a regulatory sandbox project to support the creation of products and services using data in innovative ways.

Where individual countries decide to plough their own furrow, however, local regimes can develop that are out of line with the rest of the world. The desired harmonisation and economies of scale are not achieved, and geographies end up competing in their attractiveness to business.


Where next?

We have looked at the various approaches being favoured and pursued around the world, and highlighted some of the advantages and disadvantages of each. We’d love to see a real sense of progress towards harmonisation, both in approach and in detail, in regulating AI. We think this would enable innovators to work faster and achieve greater efficiencies in bringing AI-based innovations to people in a safe and understandable way.

But as we all know, even for the most important issues facing the world, agreeing a coordinated approach is really, really hard. It requires the political will to work internationally, to put aside individual national interests where these conflict with wider goals, and to focus on making progress rather than posturing for a national electorate. So in reality, the desire of innovators and developers to see a coherent and harmonised framework within which to work is unlikely to be rewarded.

So what can we hope for, or indeed expect...?


We can expect gradual but useful development at a detailed and technical level

We can expect strong coordination within single areas of effort. Technical standards organisations like ETSI are developing joint approaches in areas such as cybersecurity for AI systems. Medical device regulators are looking for shared approaches to increasingly advanced software-based medical devices, with the International Medical Device Regulators Forum currently addressing the ‘black box’ element of AI technology – is it appropriate to assess output without fully understanding the decision-making process? – and the need for mechanisms to approve change for an algorithm that relies on continuous learning. The World Intellectual Property Organisation is bringing together representatives to hammer out shared approaches to patent and copyright protection. Real coordination seems to be a way off, but understanding how others see these issues is helping to inform national developments.

We can expect to see headline-grabbing moves by individual governments or groupings

The United States Department of Justice has recently filed competition law charges against Google as ‘gatekeeper to the internet’, following similar action in the EU in recent years. Individual action by governments to tackle standout situations can help to shape the law and drive change.

We also expect to see some jockeying for position between major industrial nations to offer the best system. Both the US and China see themselves as leaders in AI, and seem to be pushing approaches that will favour home-grown industry. The EU is promoting an ethics and values approach likely to be influential and attractive to the consumer voice, as the GDPR has been. A ‘me first’ approach is attractive to policy-makers seeking to please the home crowd, but industry voices are clear on the need to achieve coordination so that industry can scale solutions effectively.


We can hope for more high-level agreement on shared priorities and goals

This is unlikely to happen soon – it takes a better alignment of the political planets and a bit more space to think than we currently have – but we can hope for a renewed sense of international cooperation to break through. If this can take forward the early steps made during the 2018 Canada G7 meeting (a common vision for the future of AI) and the 2019 G20 meeting in Japan (agreed principles and recommendations), then serious, coordinated progress on the best next steps for all of us may result.

An opportunity to shape the debate

Regulation will come, and this is a key time for innovators to make their voices heard. Engaging with formal consultations (like the UK IPO’s current consultation on intellectual property rights and AI) and developing policy proposals (like the European Parliament’s legislative initiative for AI regulation and liability) will influence what happens next. This is the right time to get involved – once the laws and regulations are written, changing them will be much harder.

Contributing authors:

Mark Pearce, Partner, Mills & Reeve
Stephanie Caird, Principal Associate, Mills & Reeve
Grace Melvin, Association of Medical Research Charities (AMRC)


Get involved...

- Download our more detailed report here
- Talk in more detail with one of our AI specialists
- Register your interest for updates and events – email mia.church@mills-reeve.com

