CONTESTABLE (AI)

Shining a light on AI-based decisions

Artificial Intelligence (AI) tools are increasingly being used to inform decision-making in areas including law, finance and healthcare. Individuals affected by automated decisions have the right to meaningful information about the basis on which those decisions were reached, as well as the right to contest them, issues at the heart of Professor Thomas Ploug’s research.

The commercial potential of AI technology is enormous, with applications across many areas, including finance, manufacturing and healthcare. In medicine, AI is already being used in diagnostics and treatment planning, yet under the terms of the EU’s General Data Protection Regulation (GDPR), the basis of automated decisions must still be made clear to patients.

“Under article 14 of the GDPR, patients have the right to meaningful information about the logic involved, if they are subject to an automated decision. Under article 22, patients also have the right to express their opinion about being subjected to automated decision-making, and to contest that decision,” outlines Thomas Ploug, Professor at the Centre of AI Ethics, Law, and Policy at Aalborg University. As part of his work in a research project backed by the Independent Research Fund Denmark, Professor Ploug is seeking to connect these two rights. “The project is about the right to contest decisions, which needs to be defined more clearly and given more substance,” he says.

Explainable AI

This work relates to the field of explainable AI, in which researchers seek to ensure that meaningful information is provided about the reasoning behind an automated decision. While an AI model can be used to diagnose patients in a healthcare setting, it is typically difficult for computer scientists to explain what happens in that model. “The kind of explanations they can give are not comparable to how a doctor may explain why a certain diagnosis was reached. We have argued that since it’s difficult to arrive at explanations of AI-based decisions, maybe we should change the perspective. Maybe we should see the right to an explanation more in the light of the right to contest decisions,” says Professor Ploug. When people choose to contest a decision, they often want to contest the grounds on which it was reached; patients in this situation should have access to information about the AI model involved, believes Professor Ploug.
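
To make the contrast concrete, here is a minimal sketch of the kind of output such explanation methods typically produce. Everything in it is hypothetical: the features, weights and patient values are invented for illustration and are not drawn from the project or from any real diagnostic model.

```python
# Illustrative only: a toy logistic model over three invented patient
# features, showing the style of "explanation" post-hoc methods often
# yield, i.e. per-feature contributions to a risk score.
import math

# Hypothetical weights, as if learned from training data (assumed values).
weights = {"age": 0.04, "blood_pressure": 0.02, "biomarker_x": 1.3}
bias = -6.0

patient = {"age": 62, "blood_pressure": 145, "biomarker_x": 2.1}

# Each feature's contribution to the raw score is weight * value.
contributions = {f: weights[f] * patient[f] for f in weights}
score = bias + sum(contributions.values())
probability = 1.0 / (1.0 + math.exp(-score))

print(f"Predicted risk: {probability:.2f}")
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {c:+.2f}")
```

The result is a ranked list of numbers, useful to a specialist, but a long way from the narrative justification a clinician would give a patient.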

Many patients may be willing to accept a decision reached by an AI model, but those who do choose to contest it should have access to relevant information, believes Professor Ploug, including information about the data used, about model bias and performance, and about the extent of human involvement in the decision-making. Rather than trying to look into what might be described as the black box of an AI system, Professor Ploug suggests that patients should instead have access to this information. “We should understand the right to an explanation in light of the right to contest decisions,” he says. The project team is essentially working to reinterpret the requirement for an explanation along these lines.
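
One way to picture the kinds of information just listed is as a structured record that travels with each decision. The sketch below is purely illustrative: the class, field names and example values are assumptions made for this article, not a format defined by the project, the GDPR or any hospital system.

```python
# A minimal, hypothetical bundle of the information a person contesting
# an AI-based decision might need: the data used, known biases,
# performance figures, and the extent of human involvement.
from dataclasses import dataclass, field

@dataclass
class ContestabilityRecord:
    decision_id: str
    model_version: str
    training_data_summary: str                 # provenance of the data used
    known_biases: list[str] = field(default_factory=list)
    performance: dict[str, float] = field(default_factory=dict)
    human_involvement: str = ""                # extent of human review
    how_to_contest: str = ""                   # where to lodge an objection

# Example values are invented for illustration.
record = ContestabilityRecord(
    decision_id="2024-000123",
    model_version="diagnosis-model-v4",
    training_data_summary="120,000 anonymised patient records, 2015-2022",
    known_biases=["patients over 80 under-represented in training data"],
    performance={"sensitivity": 0.91, "specificity": 0.87},
    human_involvement="a radiologist reviews every positive finding",
    how_to_contest="written appeal to the hospital's clinical board",
)
print(record)
```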

The project’s agenda also includes research into the finance sector, where banks may use AI systems to profile an individual’s creditworthiness or in certain other types of decision-making. In this situation banks also need to provide explanations and give people the opportunity to contest AI-based decisions, and the researchers are also considering decisions in the legal domain. “We are looking into asylum systems. If a decision on whether to grant an individual asylum is partly made by an AI system, then that individual also has the right to an explanation and to contest the decision. The information requirements in these different contexts may not be the same,” outlines Professor Ploug. AI technology is already being applied in these kinds of scenarios today, and with new applications emerging almost daily, there is a pressing need for effective regulation. “We are becoming more and more aware of the need to develop legislation as new and highly transformative technologies emerge,” says Professor Ploug. “The European Union’s AI Act will be the world’s first comprehensive piece of AI legislation.”

AI legislation

This legislation is risk-based, with AI systems classified into different categories, and the information requirements then depend on that classification. While Professor Ploug believes this is a positive step in terms of regulating AI, he says it should be complemented by a system of rights with the needs of individual people at its heart. “The regulatory relationship is between a national AI authority and companies. We need to get citizens back into the equation. Alongside this AI Act, we should also establish rights that enable individuals to participate in the regulation of AI use and development,” he argues. Professor Ploug and his research team are therefore also working on a wider set of individual rights in relation to AI use.
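
As a rough illustration of that risk-based logic, the tiers and the duties attached to them can be thought of as a simple lookup. The tier names below follow the Act’s broad categories, but the obligation lists are simplified assumptions for illustration, not the legal text.

```python
# Simplified, illustrative mapping from AI Act risk tier to example duties.
RISK_TIERS = {
    "unacceptable": ["prohibited (e.g. social scoring by public authorities)"],
    "high": [
        "conformity assessment before deployment",
        "technical documentation and logging",
        "human oversight measures",
    ],
    "limited": ["transparency duties (e.g. disclose that users face an AI system)"],
    "minimal": ["no specific new obligations"],
}

def obligations(tier: str) -> list[str]:
    """Return the illustrative duties attached to a risk tier."""
    return RISK_TIERS.get(tier, ["unknown tier"])

print(obligations("high"))
```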

Alongside this work on people’s rights, the researchers are also looking at how these rights can be communicated to the wider public, which Professor Ploug says is an important aspect of the project’s work. “We have all sorts of rights as individuals, but a lot of people don’t know about them. And if you are unable to define your own rights then clearly it’s very difficult to act upon them,” he points out.

A greater awareness among the public of the right to contest automated decisions may well then encourage more people to do so. It is essential in this respect that information is provided in an accessible and concise way that can inform lay members of the public, in contrast to many of the excessively long cookie consent forms found on the internet, for example. “We would like to provide this kind of information in a manageable way,” stresses Professor Ploug. This could ultimately help boost transparency and enhance public trust in AI as the technology becomes a reality in our everyday lives. “The discussions about the potential of AI and possible negative effects are to some extent being superseded by actual development,” continues Professor Ploug. “The debate now is about where it can be used, for what purposes, and what we want it to do. We’re now moving more towards discussions about where this technology can benefit a particular enterprise or institution, or offer a particular service to clients, citizens, or patients.”

CONTESTABLE (AI)

Contestable Artificial Intelligence (AI)

Project objectives

We propose a substantial interpretation of the EU GDPR stipulation that individuals subjected to automated processing, including profiling, have a right to contest the decision-making. A substantial notion of contestability may inform technical approaches to AI explainability, as well as shed important light on an individual’s GDPR right to “meaningful information about the logic involved” when subjected to automated profiling.

Project Funding

Funded by the Independent Research Fund Denmark. Grant ID: 10.46540/202700140B. Contestable Artificial Intelligence (AI): Defining, evaluating and communicating AI contestability in health care, law and finance.

Project Partners

• Aalborg University, Denmark (Coordinator)

• University of Copenhagen, Denmark

• Technical University of Denmark, Denmark

• University of Manchester, United Kingdom

Contact Details

Project Coordinator, Professor Thomas Ploug, PhD

A.C. Meyers Vænge 15, 2450 Copenhagen SV.

T: +45 31417140

E: ploug@ikp.aau.dk

W: https://vbn.aau.dk/en/persons/ploug

Professor Thomas Ploug, PhD

Professor Thomas Ploug, PhD, is director of the Centre of AI Ethics, Law, and Policy and the Centre of Ethics Education at Aalborg University, Denmark. He heads the University Research Ethics Committee. Ploug is a former member of the National Council of Ethics in Denmark (2010-2016), and he is currently a member of a Clinical Ethics Committee at Rigshospitalet in Copenhagen. He also holds a position as guest professor at Halmstad University in Sweden.

Professor of data ethics Thomas Ploug has tested Google’s chatbot Gemini. Unlike many critics, his conclusion is surprisingly positive. Photo credit: Rode Joachim/Ritzau Scanpix
