AI AND VALUES

Harnessing the power of AI: regulating innovation and managing risks in the EU

AI applications as the tech challenge of the century

Matteo Mannino
Policy Officer, DG for Communications Networks, Content and Technology (DG CNECT), European Commission

As AI systems become more advanced and pervasive, it is crucial to address these risks and establish robust regulations to ensure that AI is developed and deployed responsibly

The widespread adoption of Artificial Intelligence (AI) systems has the potential to yield positive outcomes for society, stimulate economic expansion, and bolster the EU’s capacity for innovation, thereby enhancing its competitiveness on a global scale. However, it is widely recognised that certain attributes of some AI systems give rise to concerns, particularly regarding safety, security, and the safeguarding of fundamental rights.

Ever since the European Commission proposed the Artificial Intelligence (AI) Act in April 2021, EU policymakers have worked to put forward a coordinated regulatory approach that captures the potential of AI, unlocking the opportunities that the many use cases of this technology can offer to advance the digital economy and society as a whole, while mitigating the risks that may arise from its applications. Once adopted, the rules of the AI Act will be pan-European: applicable and binding throughout the European single market, across the bloc of 27 Member States. They will apply to all companies offering AI systems in the EU, regardless of whether these providers are established in the EU or in a third country.

Under the ordinary legislative procedure, negotiations are in the hands of the co-legislators, the European Parliament and the Council of the European Union. While the Council had already adopted its common position (general approach) in December 2022, the Parliament voted in plenary and approved its own negotiating position on 14 June 2023. The ambition is to reach an agreement in negotiations and adopt the bill by the end of the year.

The AI Act regulates use cases, not models or tools, regardless of the specific technology employed. This principle of technology neutrality promotes flexibility, innovation, and adaptability, allowing for a level playing field where different AI technologies and approaches can coexist, compete, and improve over time.

The Act defines different categories of AI systems based on their potential risks, ranging from unacceptable-risk systems to minimal-risk systems. High-risk AI systems, such as those used in critical infrastructure, transportation, or law enforcement, are subject to stricter requirements, including conformity assessments, documentation, and appropriate levels of human oversight. Under the AI Act, systems posing an unacceptable level of risk to people’s safety will be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socio-economic status, or personal characteristics). Among its proposed amendments, the Parliament extended the list of high-risk systems to include those posing harm to people’s health, safety, fundamental rights or the environment, as well as AI systems used to influence voters in political campaigns and recommender systems used by social media platforms.

Regulators must ensure a level playing field and promote open access to AI technologies

“General purpose AI” will be regulated under the AI Act currently in negotiations, as generative AI applications will have to comply with additional transparency requirements, such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training. This comes as a regulatory answer to the steep global rise in popularity of ChatGPT, the most famous generative AI application, developed by OpenAI, which has transformed language translation into language production: it can now converse with humans, delivering human-like output generated on the basis of the sequential elements identified in a text given as input. GPT stands for Generative Pre-Trained Transformer. At its core, the text generated is the statistically most likely sequence of words to follow from the input, based on patterns and statistical associations that the model learns during training on a vast amount of data. As such, the text generated may be coherent and contextually relevant, but it remains the result of an algorithm that operates on the basis of the likelihood of a possible output and which, unlike human-generated responses, does not fully grasp the meaning and context of a given input.
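The next-token principle described above can be sketched with a deliberately toy model. Real systems like GPT learn probabilities over tens of thousands of tokens from vast training data, but the selection step is conceptually similar: given the preceding words, pick the statistically most likely continuation. All words and probabilities below are invented for illustration; this is not OpenAI's actual model.

```python
# Toy next-token model: maps a two-word context to the probabilities
# of possible next words. The table is hand-written for demonstration.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "sang": 0.1},
    ("cat", "sat"): {"on": 0.7, "down": 0.2, "up": 0.1},
    ("sat", "on"):  {"the": 0.8, "a": 0.2},
}

def next_token(context):
    """Return the most likely next word for a two-word context, or None."""
    candidates = toy_model.get(context, {})
    return max(candidates, key=candidates.get) if candidates else None

# Generate greedily from a seed phrase until the model has no continuation.
tokens = ["the", "cat"]
while (tok := next_token(tuple(tokens[-2:]))) is not None:
    tokens.append(tok)

print(" ".join(tokens))  # prints "the cat sat on the"
```

The output is fluent-looking but, as the paragraph above notes, it is purely the product of likelihood maximisation: the model has no understanding of cats or sitting, only of which word tends to follow which.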

The ontological status of the replies created by generative models implies a series of risks, ranging from concerns about privacy and data protection to the potential for bias, discrimination, and, in case of misuse of AI as a learning tool, even limitations in the way we process information. As AI systems become more advanced and pervasive, it is crucial to address these risks and establish robust regulations to ensure that AI is developed and deployed responsibly.

One of the key risks that AI poses to society is the erosion of privacy. AI systems often require vast amounts of data to operate effectively, and this data can include personal and sensitive information. Without proper safeguards, this data can be misused, leading to unauthorised surveillance, identity theft, or manipulation of individuals and their preferences. With the aim of mitigating the risks arising from the collection and use of personal data by AI systems, preventing the misuse of personal data and protecting individuals’ privacy rights in the context of AI applications, the AI Act imposes obligations on AI system providers to comply with data protection and privacy regulations.

Moreover, AI algorithms can perpetuate biases and discrimination if they are trained on biased datasets or if they learn from biased human behaviour. This can result in unfair treatment of certain groups of people, exacerbating existing societal inequalities. Ethical considerations should also be integrated into the development and deployment of AI, promoting fairness and inclusivity. In this context the European Commission has been actively involved in promoting trustworthy AI and establishing ethical guidelines for AI development and deployment, for instance by releasing the “Ethics Guidelines for Trustworthy AI” in April 2019.

By establishing a global dialogue on AI governance, policymakers can work so that AI benefits society as a whole

Furthermore, there are broader societal risks associated with AI, including the concentration of power in the hands of a few dominant tech companies or governments. The development and deployment of AI should not be monopolised by a select few entities, as this can stifle innovation, limit competition, and undermine democratic values. We cannot afford to leave the development of AI to any single polity, be it a large technology company, an academic endeavour, or a government. Regulators must ensure a level playing field and promote open access to AI technologies, encouraging a diverse ecosystem of AI developers and users.

As highlighted by Henry Kissinger and Eric Schmidt in their book “The Age of AI: And Our Human Future”, effectively regulating AI and mitigating its risks requires collaboration between governments, industry experts, and civil society. International
cooperation can foster the exchange of best practices, harmonise standards, and address cross-border challenges associated with AI. By establishing a global dialogue on AI governance, policymakers can work towards ensuring that AI is developed and deployed in a manner that benefits society as a whole. The EU’s AI Act is a step in the right direction, but ongoing efforts are needed to develop comprehensive and internationally coordinated regulatory frameworks that can effectively manage the risks and maximise the benefits of AI for humanity.
