POSITION | DIGITAL POLICY | ARTIFICIAL INTELLIGENCE
Towards innovation-friendly AI regulation
Evaluation of key industry-relevant elements for the trilogue negotiations on the Artificial Intelligence Act (“AI Act”)
19 September 2023
Increasing Productivity, Preserving European Values – the EU AI Act as the World's First General Regulation of Artificial Intelligence
Artificial intelligence will likely influence all of industry and find applications in most areas of business. Technologies based on AI may lead to significant productivity gains, help achieve climate targets and combat the shortage of skilled workers. It is therefore all the more important to regulate the uses of the technology in a measured way and with little administrative burden, so as not to hinder companies in the application, development and monetization of AI systems. In devising this legal framework, one key strategic aim should be to improve Europe’s industrial competitiveness by developing a thriving European AI ecosystem that can hold its own on the global stage. It is therefore crucial to strike a fair balance in the AI Act: preserving EU values and facilitating innovation while preventing overregulation and obligations that could hamper the adoption of AI and innovation in Europe.
In April 2021, the European Commission presented the AI Act, a comprehensive proposal for specifically regulating artificial intelligence. This proposal follows a holistic approach to technology regulation. One of its main goals is to categorize AI systems within a single, overarching regulatory framework. Depending on their risks, these systems are to be subject to specific requirements and regulations. At the end of 2022, the Council of the European Union agreed on its general approach; in June 2023, the European Parliament adopted its negotiating position on the AI Act. In light of the ongoing trilogue negotiations between the co-legislators, German industry details below which proposals it would prefer to see incorporated into the final version of the AI Act. These recommendations aim to support the competitiveness of EU-based companies while safeguarding the rights of citizens.
Nota bene: This paper takes as its baseline the assumption that only those points that have been raised by at least one of the three co-legislators have a chance of being included in the AI Act’s final wording. We therefore do not repeat the points we would have liked to see changed, but instead compare the three available options and outline our preferred one. Where none of the proposed texts is acceptable from an industry perspective, we likewise express and justify this position. BDI welcomes the EU's risk-based regulatory approach, but stresses that the criticality pyramid must be maintained in this horizontal piece of legislation. Furthermore, we would like to point out the overlaps and occasional contradictions between the AI Act and existing and forthcoming legislation, such as the DSA, Data Act, GDPR, Cyber Resilience Act and sectoral safety regulations. Such contradictions and overlaps should be rectified. Lastly, we would like to point out the need to concretize role definitions along the AI value chain and to establish an even distribution of burdens. Otherwise, there is a risk of harmful ambiguities and compliance difficulties.
Definition of AI (Article 3)
preferred approach: Council of the EU/ European Parliament
The practice-oriented definition of artificial intelligence allows its legally compliant use in industry. For internationally active companies, it is paramount that the definition underpinning the EU AI Act is compatible with those used by multilateral organisations such as the OECD. In any case, it is particularly important that an AI system is narrowly defined as a machine-based system with learning capability that is designed to be used with elements of autonomy. BDI therefore welcomes the legislators’ orientation towards the OECD specifications and the EU Parliament’s decision to modify the original definition of the EU COM, but suggests using the Council’s wording “elements” instead of “levels” in relation to the autonomy of an AI system. From our point of view, an additional clarification of the term "autonomy" and a reference to machine-learning approaches are also necessary. Furthermore, the European Parliament’s proposal is so broad that many conventional IT solutions would qualify as "AI systems". The definition in Art. 3(1) only makes sense in combination with the limitations in Recital 6, as otherwise any computer program that produces any output would be covered. We plead for this crucial delimitation not to be placed only in a recital, as recitals do not have binding legal force. The limitations of the recital should therefore also be reflected in the definition itself. In addition, the term "modelling capabilities" should be deleted from Recital 6, since it applies to numerous "conventional" computer programs.
High-Risk Systems
It is crucial to only classify systems as high-risk if they truly pose a significant threat to health, safety, or fundamental rights, rather than regulating technology as such. Manufacturers of high-risk systems are subject to numerous obligations, so it would be disastrous to incorrectly categorize AI systems as high-risk when they do not actually pose a significant threat.
Classification rules for high-risk AI systems (Article 6)
preferred approach: European Commission/ European Parliament
With respect to maintaining the criticality pyramid for the risk classification of AI systems, this article should specify that AI systems that are also subject to Annex II regulation (Article 6(1)), or which function as a safety component of products under Annex II, should only be classified as high-risk AI under specific conditions. AI systems that directly impact health and safety requirements and necessitate third-party certification under Annex II regulation should be classified as high-risk AI only if the AI has a direct influence on the safety-relevant elements of the system that give rise to the third-party certification obligation for that product.
High-risk AI systems referred to in Article 6 (Annex III)
preferred approach: Council of the EU/ European Parliament
Regarding the AI systems regulated in Article 6(2) that fall under one or more of the critical areas and use cases referred to in Annex III, BDI prefers the Council’s wording, as it provides the most clarity regarding the criteria for classification as a high-risk system as well as the overview of implementation by the Commission. Additionally, we want to highlight the positive addition by the Council stipulating that AI systems listed in Annex III are not considered high risk if they are purely accessory and therefore unlikely to lead to a significant risk to health, safety or fundamental rights. The addition of supplementary criteria for classifying an AI system as high risk, as proposed by the European Parliament, only makes sense if it can be applied in an industrial environment. In this context, the definition of a "significant risk" would be insufficient without further clarification, especially for AI systems in production and manufacturing. This is why we welcome the linking of the concept of high risk to the significance of harm in the European Parliament’s proposal and the clarification of this term envisaged in the form of guidelines. The institutional involvement of industry in the corresponding consultation process is essential. However, we are concerned by the Parliament’s proposal to establish a mechanism by which providers of AI systems covered by Annex III that do not pose a significant risk must notify the relevant regulators and wait for their confirmation that the system has not been misclassified. This would create significant administrative burdens for regulators and providers and would hinder European innovation. Moreover, Article 6(2) should be modified to ensure that the EU harmonization rules in Annex II remain sector-specific, so that industries that are already tightly regulated (e.g. medical devices, aviation) receive sectoral treatment under the respective sectoral legislation.
Compliance with the requirements (Article 8)
preferred approach: European Commission or Council of the EU
German industry strongly objects to the amendment to this article introduced by the European Parliament, as it could lead to interference with existing Union harmonisation legislation and a greater scope for overlaps and regulatory inconsistencies, particularly regarding the interplay with the New Legislative Framework (NLF). In view of far-reaching overlaps with other EU legislation such as the Data Act and the GDPR, such a reference creates additional bureaucratic hurdles that further reduce the practicality of the AIA. German industry therefore prefers the proposal of the EU COM or the Council of the EU.
Risk management system (Article 9)
preferred approach: European Parliament
BDI appreciates the clarification of the risk management procedures for High-Risk Systems contained in the text proposed by the European Parliament. We also welcome the reference to existing risk management systems. However, we are concerned by the Parliament’s proposal for additional requirements regarding potential risks to fundamental rights as part of the risk assessment, as it is unclear how such a broad and undefined requirement would be meaningfully evaluated or mitigated.
Data and data governance (Article 10)
preferred approach: European Commission or Council of the EU
Data governance is central to the security architecture of high-risk AI systems, but it also poses significant challenges for companies. The original Commission proposal included in Art. 10(5) the possibility to process special categories of personal data when strictly necessary for the purposes of bias monitoring, detection and correction. This possibility, confirmed by the Council’s general approach, would be limited to very strict circumstances and goes in the right direction in ensuring that AI providers and deployers have the necessary tools to detect unintended bias.
Record-keeping (Article 12)
preferred approach: European Commission or Council of the EU
We object to the addition introduced by the European Parliament in paragraph 2a, which provides for the "recording of energy consumption, the measurement or calculation of resource use and environmental impact of the high-risk AI system during all phases of the system’s lifecycle." Although German industry recognizes that consideration of environmental issues in the regulation of technology is necessary, given the occasionally significant energy consumption of AI systems, the energy footprint of AI systems should be addressed solely by the relevant environmental regulation.
Human Oversight (Article 14)
preferred approach: European Parliament or Council of the EU
In certain use cases, direct human oversight is not possible because it might conflict with ergonomic requirements, processes are too fast, or human intervention might even be dangerous. For the use of AI in highly automated or even fully autonomous use cases, the obligation to ensure human oversight might constitute a de-facto ban. The Parliament’s text takes this into account and clarifies in Art. 14(3) and 14(4)(e) that human oversight has to be adapted to the context and level of automation.
However, the idea of ensuring investigation in Article 14(1) is rejected, as this obligation does not fit the ex-ante logic of this article and thus introduces a new area of responsibility for companies. The Council’s text has the advantage for industry that it softens the Commission’s approach to human oversight to some extent without extending the obligations as proposed by the Parliament, which is why we also find it acceptable.
Cooperation with competent authorities (Article 23)
preferred approach: European Parliament’s Compromise Agreement
German industry welcomes the additional obligations for deployers and users of high-risk applications added to the text by the European Parliament’s proposal, as they distribute responsibility along the entire AI value chain. However, only the national competent authorities, and not the European Commission, should have the right to make a reasoned request: external access to the data of companies classified as providers or deployers must be as limited as possible so as not to jeopardize business secrets, data protection and intellectual property rights. Moreover, we welcome the inclusion of trade secrets in the European Parliament’s proposal text.
Obligations of users of high-risk AI systems (Article 29)
preferred approach: European Parliament’s Compromise Agreement
By introducing the role designation "deployer", the European Parliament has expanded the responsibilities along the AI value chain to include an intermediate position between the manufacturer and the user of AI systems. BDI advocates a balanced distribution of responsibilities between the various roles defined in the AI Act. A newly introduced role such as that of the deployer should not be subject to disproportionate obligations that cannot be implemented in practice, so that there are no significant delays in approval processes. For example, it is sometimes technically impossible for industrial deployers of AI systems classified as high risk to implement the required human oversight measures to an extent that still makes the use of these AI systems worthwhile in practice. The same applies to the documentation requirement for “any reasonably foreseeable malfunction” outlined in paragraph 5. Where users use software from a third-party service provider and operate the system under their own brand, there should be no extension of AI Act responsibilities from providers to users.
Fundamental rights impact assessment for high-risk AI systems (Article 29a)
preferred approach: none
BDI welcomes the specific provision for SMEs in Article 29a(4). Nevertheless, in our understanding, a fundamental obligation for deployers to carry out a fundamental rights impact assessment (FRIA) represents an unjustified market entry barrier for AI applications. There is no added value, since assessing fundamental rights would already be part of the risk management system under the EP’s proposal in Art. 9(2)(a). We therefore reject Article 29a as introduced by the European Parliament and call for its deletion or clarification.
Foundation Models and Generative AI
The rapid proliferation of chatbots and AI-based image-generation programs has greatly increased public and political awareness of Large Language Models. In the EU COM proposal, generative AI is mentioned only indirectly with respect to pre-trained AI systems. Although the Council’s text addresses General Purpose AI (GPAI) in Articles 3 and 4, only the Parliament’s text provides a comprehensive regulatory proposal for Large Language Models by distinguishing between generative AI, GPAI and foundation models. BDI generally welcomes the benefits for SMEs and start-ups listed in Article 28a. However, we advocate that larger companies should also be protected in situations where the bargaining power of a party that unilaterally imposes an unfair contractual term means that the other contracting party is unable to influence its content despite attempting to negotiate it.
Obligations for Providers of Foundation Models (Article 28b)
preferred approach: none
While regulating AI tools lacking specific purposes contradicts the risk-based approach of the AIA, it is evident that a suitable strategy must be formulated in the final version to address these AI systems.
Our standpoint aligns with the risk-based approach of the AI Act and its emphasis on intended purpose, advocating that the responsibilities of providers should be attributed to the entity determining an AI system's intended purpose. There must be fair allocation of responsibilities throughout the value chain, adhering to the flexible framework of the AI Act. Requirements placed upon foundation model providers should consider relevant existing legislation and the technological context of the AI value chain.
Therefore, a number of the requirements put forth by Parliament in Article 28b are either not practically achievable or unreasonably encumber providers of models. Regarding the specification of the application-related risk assessment by the developers of foundation models, the Parliament’s proposal should be modified so that Article 28b(2)(a) and (b) refer only to risks that are known and can already be concretely identified. The foundation model provider does not, and often cannot reasonably, know the risks of concrete AI use; it therefore cannot technically mitigate these unknown risks, even if it wanted to. Article 28b(2)(a) should therefore be deleted.
Article 28b(2)(e) passes the responsibility for fulfilling provider obligations on to a foundation model provider in the supply chain, which can hardly work in practice, because the foundation model provider is much further removed from the concrete purpose of use (which is to be determined by the provider) and usually cannot oversee the individual provider obligations, since there might be an unlimited number of applications. However, there should be no "monopolization" of relevant information at the provider that prevents other actors in the value chain from fulfilling high-risk obligations, which is why we advocate limited information obligations, for instance in the area of explainability requirements. Furthermore, Article 28b(2) requires the early assessment of the impact of foundation models on democracy, the rule of law and the environment. Generally, it should be up to governments, not enterprises, to define and evaluate risks in relation to democracy-related concepts. The inclusion of such concepts would also result in impractical compliance burdens, escalate costs and complexity for AI developers, and impede innovation in these models. These requirements cannot be implemented in their current wording and should either be deleted without replacement or replaced by precise, clearly delimited requirements that can be applied given the complexity of the technological systems.
Article 28b(4) refers to the compliance of providers of foundation models intended to generate “complex text, images, audio, or video (‘generative AI’) and providers who specialize a foundation model into a generative AI system” with the transparency obligations in Article 52 (see below). The foundation model provider can only support compliance with adequate transparency about the model architecture. However, without knowing the concrete use cases, the foundation model provider cannot itself bear responsibility for the requirements of Article 52. The tension between freedom of expression and prohibited content in Article 28b(4)(b) is highly context-dependent and can only be resolved in the concrete case of application, taking into account all the circumstances of the individual case. A foundation model provider that does not even know the concrete case of application (because this must first be defined), let alone the concrete context of application, cannot fulfil these requirements in a meaningful way.
Article 28b(4)(c) contains a further obligation for providers of foundation models used in generative AI, namely to make publicly available a summary of the use of training data protected under copyright law. We strongly urge legislators to remove these requirements, as they are technically unfeasible, very burdensome, or already regulated elsewhere. For example, there is no legal gap in copyright protection in relation to text and data mining (TDM) that warrants the imposition of new disclosure requirements in the AI Act. The EU’s 2019 Copyright Directive already provides rights holders with the option to opt out of TDM and sets out the conditions under which TDM is permissible without explicit permission if no opt-out has been made.
Transparency obligations for certain AI systems (Article 52)
preferred approach: European Commission or Council of the EU
Transparency requirements for AI-generated content can strengthen societal trust in the technology and are welcomed by BDI. However, BDI takes a critical view of the unilateral obligation on users of AI systems to label their AI products, as well as of the European Parliament’s addition of text products to the labelling obligation, since it is unclear how the labelling of texts (co-)produced by AI is to be implemented in practice by the users of AI systems. Furthermore, we are concerned that, although these amendments are believed to target chatbots, they may be misunderstood and applied to all AI-assisted decision-making. This would significantly hinder numerous industrial applications. Labelling of AI-generated content should remain limited to so-called deep fakes, provided that the safeguards and exemptions introduced by the Council to keep such an obligation workable are maintained.
Harmonized enforcement (Article 59)
preferred approach: European Parliament’s Compromise Agreement
To prevent a fragmented landscape of national authorities, we appreciate that the European Parliament calls on member states to designate a single national supervisory authority. We therefore support the European Parliament’s proposal for harmonized enforcement.
Entry into force and application (Article 85)
preferred approach: Council of the EU
36 months is the absolute minimum time frame in which the very complex regulatory requirements introduced by the AIA can be implemented by both companies and market surveillance authorities.
Imprint
Bundesverband der Deutschen Industrie e.V. (BDI) / Federation of German Industries
Breite Straße 29, 10178 Berlin
www.bdi.eu
T: +49 30 2028-0
EU Transparency Register: 1771817758-48
Editor
Polina Khubbeeva
Senior Manager Digitalisation and Innovation
T: +49 30 2028-1586
p.khubbeeva@bdi.eu
BDI document number: D1818