Weekend Edition Nº208


The AI Act within the Framework of Digital Constitutionalism?

The emergence of artificial intelligence (‘AI’) and its spread across society have prompted constitutional questions about the role of these technologies in influencing the protection of fundamental rights and the exercise of power in the digital environment.2 As AI technologies become more pervasive, primarily in decision-making processes, the questions for constitutional democracies grow exponentially, moving from how to encourage and benefit from the development and use of these technologies to how to regulate them in order to mitigate risks to fundamental rights and democratic values.

In this context, the Artificial Intelligence Act (AI Act),3 adopted in July 2024, represents a horizontal effort to address the aforementioned constitutional challenges brought by AI technologies. Its alignment with European values, namely democracy, human dignity, and fundamental rights as enshrined in Art. 2 TEU, makes the AI Act an expression of digital constitutionalism in Europe.4 Indeed, the AI Act can be considered an example of the geometry representing how the protection of fundamental rights and the exercise of powers have been shaped by the dynamics of the algorithmic society.5 It is a way to reframe constitutional principles in a broader networked system,6 where public actors are an essential but not exclusive tile of the mosaic made of transnational corporations and civil society groups which express their governance.7

1. Giovanni De Gregorio is the PLMJ Chair in Law and Technology at Católica Global School of Law and Católica Lisbon School of Law, Lisbon.

Oreste Pollicino is Full Professor of Constitutional Law at Bocconi University, Milan.

2. Hans-Wolfgang Micklitz and others (eds.), Constitutional Challenges in the Algorithmic Society, Cambridge University Press, 2022.

3. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ 2024 L 1689.

4. Giovanni De Gregorio, ‘The Rise of Digital Constitutionalism in the European Union’, 19 International Journal of Constitutional Law, 2021, p. 41.

5. Oreste Pollicino, ‘The Quadrangular Shape of the Geometry of Digital Power(s) and the Move towards a Procedural Digital Constitutionalism’, 29 European Law Journal, 2023, p. 10.

6. Francisco de Abreu Duarte, Giovanni De Gregorio and Angelo Golia, ‘Perspectives on Digital Constitutionalism’, Research Handbook on Law and Technology, Elgar, 2023.

7. Jack Balkin, ‘Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation’, 51 UC Davis Law Review, 2018, p. 1148.

The constitutional dimension of the AI Act can be observed by looking at the focus on European values
This contribution explores the AI Act within the framework of digital constitutionalism. It underlines how the AI Act incorporates European values into the regulation of AI and addresses the constitutional challenges presented by the rise of the algorithmic society. By focusing on its structure based on risk regulation, the AI Act provides a different approach to shaping the relationship between public and private actors. This contribution will examine the broader implications of the AI Act for the governance of AI in Europe.

The AI Act and European Values

The constitutional dimension of the AI Act can be observed by looking at the focus on European values. In its first recitals, the AI Act emphasises the role of European values as enshrined in Art. 2 TEU, thus positioning human dignity, the rule of law, the protection of fundamental rights and democracy at the core of the EU commitment. By embedding these values into the regulation of AI, the EU underlines that the development and deployment of these technologies should not be based on technical standards, or broadly private governance, but on a system of values found in the Treaties and the European Charter of Fundamental Rights.

At the same time, these European values hide a duality. If, on the one hand, the objective is to ensure that overriding reasons of public interest, such as a high level of protection of health, safety, and fundamental rights, are not left behind, on the other hand, the EU has also built its identity on internal market goals, stressing the need to ensure fundamental freedoms and competition, which have played a foundational role in the EU economic integration process since the beginning. The regular reliance on Article 114 TFEU to justify a new regulatory wave for the sake of harmonising the internal market in areas which are primarily related to democracy, as in the case of the European Media Freedom Act,8 demonstrates an increasing convergence between market and democracy in Europe. This European regulatory brutality is not the only source of trouble once one looks ahead to the AI Act’s application:9 competing European values driven by the rise of European digital constitutionalism are also progressively permeating the legal discourse and narrative, as in the case of data protection or competition law.10

8. Regulation (EU) 2024/1083 of the European Parliament and of the Council of 11 April 2024 establishing a common framework for media services in the internal market and amending Directive 2010/13/EU (European Media Freedom Act), OJ 2024 L 1083.

9. Vagelis Papakonstantinou and Paul De Hert, ‘The Regulation of Digital Technologies in the EU: The Law-Making Phenomena of “ActIfication”, “GDPR Mimesis” and “EU Law Brutality”’, Technology and Regulation, 2022, p. 48.

10. Giovanni De Gregorio and Alba Ribera Martinez, ‘Competing European Values under the Spotlight of the AI Act’, Kluwer Competition Law Blog, 27 May 2024.

Likewise, European values are not always translated into the norms of the AI Act. While, as already underlined, the recitals of the AI Act explicitly reference European values and the need for a human-centric approach, these values may be less visible in the practical implementation of the Act’s regulatory provisions. This issue characterised the first proposal of the European Commission,11 which was then enriched not only by the rules on generative AI but also by the introduction of safeguards during the political negotiations,12 such as the Fundamental Rights Impact Assessment (‘FRIA’) and the right to explanation,13 even if the AI Act still does not directly provide judicial remedies. This raises concerns about whether the AI Act’s framework fully integrates European values into the regulatory architecture of AI systems or whether they remain principles shaped by AI developers in the first instance and then by European and national competent authorities following a risk-based approach.

AI, Risks and Fundamental Rights

The AI Act introduces a regulatory framework for AI grounded on a risk-based approach. Such an approach consists of the adoption of a regulatory framework where duties and obligations are scaled and adapted to the concrete risks deriving from a specific activity. The binary logic of compliance/non-compliance is thus overcome by a form of ‘compliance 2.0’,14 where legal requirements are instead tailored to the targets of regulation themselves. In European digital policy, these objectives are primarily related to the protection of fundamental rights and democratic values. Typical structures of the risk-based approach include that of the GDPR,15 which features a mechanism by which risk evaluation and risk mitigation are put in place directly by the targets of the regulation to protect the rights of data subjects, and that of the Digital Services Act,16 which combines top-down rules with bottom-up risk assessment obligations applying to very large online platforms, in order to protect fundamental rights and democratic values.

The AI Act introduces a regulatory framework for AI grounded on a risk-based approach. Such an approach consists of the adoption of a regulatory framework where duties and obligations are scaled and adapted to the concrete risks deriving from a specific activity

11. Lilian Edwards, ‘Regulating AI in Europe: Four Problems and Four Solutions’, Ada Lovelace Institute, 2022.

12. Philipp Hacker, ‘What’s Missing from the EU AI Act’, Verfassungsblog, 2023.

13. AI Act, Arts. 27, 86.

14. Claudia Quelle, ‘Enhancing Compliance under the General Data Protection Regulation: The Risky Upshot of the Accountability- and Risk-Based Approach’, 9 European Journal of Risk Regulation, 2018, p. 502.

15. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation), OJ 2016 L 119, p. 1. Raphaël Gellert, The Risk-Based Approach to Data Protection, Oxford University Press, 2020.

16. Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act), OJ 2022 L 277, p. 1. Martin Husovec, Principles of the Digital Services Act, Oxford University Press, 2024.

The AI Act seems to turn such a perspective upside down by implementing a more clearly top-down form of risk-based regulation.17 From prohibited uses to systems which are subject only to voluntary commitments and codes of conduct, the AI Act addresses concerns that these technologies could be used in ways that harm individuals, such as through biased algorithms that discriminate based on race, gender, or socioeconomic status. In particular, the Act’s ban on unacceptable-risk AI systems like social scoring is a clear reflection of the EU’s desire to protect citizens from AI applications that could erode human dignity or violate fundamental rights. Likewise, the AI Act introduces oversight and accountability mechanisms for AI systems, particularly those used in public sector activities like law enforcement. By regulating real-time biometric surveillance in public spaces without adequate safeguards, the Act limits the use of AI technologies that could lead to privacy violations and surveillance overreach.

This approach oriented to risk and fundamental rights has led to the introduction of further instruments to manage risks, primarily the FRIA. According to Art. 27 of the AI Act, this instrument is a prognostic evaluation requiring the deployer to consider the impact on fundamental rights that the use of such a system may produce.18 Put differently, the FRIA, drawing on the previous experience with the Data Protection Impact Assessment (DPIA) under the GDPR, establishes a mechanism for the deployer to evaluate ex ante, before the AI system is deployed, any risks to fundamental rights. The FRIA is thus an assessment conducted prior to the use of an AI system, intended to frame the risks that a given system presents. This measure aims to protect individuals from potential harm by moving the assessment of risks into the pre-deployment phase of AI systems.

17. Claudio Novelli and others, ‘Taking AI Risks Seriously: A New Assessment Model for the AI Act’, Social Science Research Network, 14 May 2023.

18. Heleen Janssen, Michelle Seng Ah Lee and Jatinder Singh, ‘Practical Fundamental Rights Impact Assessments’, 30 International Journal of Law and Information Technology, 2022, p. 200.

The risk-based approach permeating the AI Act reflects a shift from a rights-based approach to a risk-based approach,19 which aims to introduce a flexible system that increases accountability. Risk, in other words, functions as a proxy for an activity, that of the balancing of interests and values, which is intrinsically constitutional in nature. Rather than imposing strict obligations, the Union aims to make providers and deployers more accountable by delegating to them part of the risk assessment and the consequent risk mitigation measures, while keeping control over their assessments. Such an approach aims to accommodate, on the one hand, the economy-oriented interest in innovation and the creation of an internationally competitive digital single market and, on the other hand, the often-conflicting interest in the protection of democratic values and the rights and freedoms of individuals.

In theory, the risk-based approach also ensures a more proportionate framework, considering the possibility of risk assessment, which provides a contextual analysis of compliance. This proportionality is key to balancing the EU’s dual objectives of fostering innovation in AI technologies while maintaining a strong commitment to protecting fundamental rights. It ensures that the most intrusive AI systems are subject to the highest levels of scrutiny, while lower-risk applications are free to develop with fewer constraints. In practice, the risk-based approach raises constitutional questions, primarily connected to the oversight and enforcement of the AI Act. It requires collaboration and trust between public and private actors and inevitably shapes the protection of fundamental rights and the exercise of power. Indeed, this process would require striking a balance between market freedoms, the protection of individual rights, and democratic values.

Enforcing the AI Act

Such complexity can make enforcement more unpredictable. The shift towards European values will require competent authorities to interpret the regulatory framework and to strike a balance between competing constitutional interests. In particular, the questions around enforcement will be primarily connected to the rules on risk assessment. The different layers of risk, as specified in Annex III for high-risk applications, raise critical interpretative issues with reference to the evolution of different technological applications. Risk is indeed a notion open to possibilities. Unlike traditional legal approaches based on a black-and-white logic shaped by interpretation, risk admits multiple possibilities, each of which could lead to a certain legal consequence. The complexity of enforcing a risk-based approach will be a critical challenge for enforcement authorities, also considering the different approaches to risk followed by different legal instruments and their interpretation by the Court of Justice.

The shift towards European values will require competent authorities to interpret the regulatory framework and to strike a balance between competing constitutional interests

19. Giovanni De Gregorio and Pietro Dunn, ‘The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age’, 59 Common Market Law Review, 2022, p. 473.

Since the AI Act does not provide thorough guidance on how to conduct the assessment, just as happened with the DPIA, it will be crucial to consult the guidelines and templates elaborated by the European authorities. In the case of the FRIA, the AI Act assigns this competence to the AI Office. Hence, the provision of a standardised template for these assessments by the AI Office is a strategic move to ensure consistency and comprehensiveness in the evaluations. This process will likely facilitate a more streamlined and uniform approach across different sectors, making it easier for public actors and organisations to comply with the regulations. Nonetheless, the option of creating standardised templates should be taken with adequate care due to the over-reaching and fuzzy conceptualisation of fundamental rights,20 especially when such standards are created only by engineering and computer science experts, without the assistance of lawyers or experts in the human rights field.

What the European approach has made particularly relevant is the relationship between regulators and stakeholders, which is based not only on a formal compliance mechanism but on accountability, collaboration and trust. The participation of public and private actors in agreeing on rules also increases the readiness of private actors to implement measures, for instance to address disinformation, and their acceptance of potential sanctions. Indeed, more dialogue with regulators in the enforcement phase would help to mitigate disproportionate measures, as underlined, for instance, by the temporary suspension of ChatGPT by the Italian Data Protection Authority.21 It is not by chance that the Commission has pushed for the AI Pact,22 a collective agreement designed to ensure that providers adhere to responsible and transparent standards.

Another central piece of this enforcement puzzle will be codes of conduct. The AI Office is tasked with encouraging and facilitating the development of codes of practice at Union level. These nevertheless remain voluntary tools that aim to keep information on generative AI models up to date in the light of market developments, to ensure an adequate level of detail on the content used for training, and to support the identification of systemic risks and procedures for managing those risks. The European Commission’s initiative to establish a Code of Practice for General Purpose AI (‘GPAI’) is not just a regulatory tool,23 but also a crucial step towards collaborative governance in the AI landscape. This voluntary code, in conjunction with the AI Act, aims to establish practical, enforceable guidelines for AI providers while fostering innovation. The collaborative nature of this process is a core feature, distinguishing it from traditional top-down regulatory approaches. The drafting process, running from September 2024 to April 2025, involves stakeholders from various sectors, including AI providers, civil society, industry organisations, and academia. It will be overseen by four working groups focusing on areas like transparency, risk assessment, and internal governance for GPAI providers.

20. Nathalie A Smuha and Karen Yeung, ‘The European Union’s AI Act: Beyond Motherhood and Apple Pie?’, Social Science Research Network, 24 June 2024.

21. Oreste Pollicino and Giovanni De Gregorio, ‘ChatGPT: Lessons Learned from Italy’s Temporary Ban of the AI Chatbot’, The Conversation, 20 April 2023.

22. AI Pact, Shaping Europe’s Digital Future

23. ‘AI Act: Participate in the Drawing-up of the First General-Purpose AI Code of Practice’, Shaping Europe’s Digital Future, 30 July 2024.

This trend underlines how, together with other instruments of European digital policy, the AI Act also advances a different approach to enforcement, thus changing the traditional dynamics governing the protection of rights and the exercise of powers in the digital age. This approach leads the AI Act to play a critical role in representing this shift and in defining new perspectives for digital governance.

Perspectives

The AI Act highlights the EU’s commitment to embedding fundamental rights and democratic values into the regulation of AI technologies. Through its risk-based framework, the AI Act provides a balanced approach to fostering innovation while ensuring that AI systems are subject to appropriate oversight and accountability. The flexibility of the risk-based approach will be crucial to respond to new developments in AI, while the ongoing refinement of collaborative regulatory instruments will shape the enforcement of the AI Act.

As AI technologies continue to advance, there will be ongoing challenges in ensuring that European values are not just externally regulated but also internally reflected within the operation of AI systems

However, as AI technologies continue to advance, there will be ongoing challenges in ensuring that European values are not just externally regulated but also internally reflected within the operation of AI systems. There is indeed a tension between the Act’s external regulatory safeguards and the need to embed European values more deeply into the design and operation of AI systems. While the Act imposes strong external oversight on high-risk systems, it may not fully ensure that these values are internalised within the systems themselves. This raises important questions about whether AI systems can truly align with constitutional principles if they are not inherently designed to reflect human-centric, rights-protecting norms.

This misalignment is also reflected in the choice of transnational corporations not to invest in the European market or not to provide their services in Europe, as the stringent regulatory requirements may discourage businesses and can also create obstacles for European companies. This situation underlines the relevance of further developing systems which allow the mediation of conflicting constitutional interests in the algorithmic society. Indeed, the AI Act serves as a foundational regulatory framework for the governance of AI in Europe, but it is only the beginning of the EU’s efforts to create a human-centric AI ecosystem that respects human dignity, privacy, and democratic governance. In this respect, digital constitutionalism provides a viewpoint from which to understand how the EU approach to AI regulation reflects a transformation of European digital policy and, more broadly, of the protection of rights and the exercise of powers in the digital age.
