
POSITION | DIGITAL POLICY | ARTIFICIAL INTELLIGENCE

European AI Act

German industry’s comments on the DRAFT REPORT of the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE)

13 June 2022

Executive Summary

Artificial Intelligence (AI) is one of the most important key technologies for industry. Unbureaucratic and innovation-friendly framework conditions for the use of AI are therefore a key prerequisite for securing the innovative capacity and competitiveness of European industry in the long term. As stated in our comments on the proposal for harmonised rules on artificial intelligence presented by the European Commission in 2021¹, we welcome the risk-based approach of the legislative proposal and its focus on AI systems that can be associated with high risks. However, in our view, there is still considerable need for amendments in many areas of the AI Act in order to secure the innovative capacity and competitiveness of companies in Europe. Against this background, the BDI welcomes that the IMCO/LIBE draft report includes numerous amendments that point in the right direction. As outlined in detail in chapter I of this document, the draft report contains, for example, various amendments that improve legal clarity in particular areas of the AI Act, provide for stronger participation of relevant stakeholders, and facilitate a uniform application of the regulation within the EU. Nevertheless, we also see the need for significant changes with regard to the IMCO/LIBE draft report, in particular in the following areas:

Definition of ‘AI system’: The draft report basically retains the too-broad definition of AI, proposed by the European Commission. This definition would undoubtedly mean that conventional software and algorithms would also fall within the scope of the regulation. This would impose completely disproportionate burdens on many businesses. To ensure a specific and clearly delimitable scope of the AI Act, German industry urges the co-legislators to adopt a much more precise and narrow definition of AI.

Scope of the category ‘high-risk AI systems’: The scope of the high-risk category should be limited to AI systems that are undoubtedly associated with high risks and not covered by existing European and national product safety regulation. With regard to the amendments included in the IMCO/LIBE draft report, we reject the addition of the sector ‘internet’ to Annex III of the regulation, which is not justified in our view.

Responsibilities along the supply chain: From an industry perspective, responsibilities along the supply chain still need to be clarified further and delineated more clearly from each other, e.g. with regard to Article 24 (Obligations of product manufacturers).

¹ https://english.bdi.eu/publication/news/proposal-for-a-regulation-on-artificial-intelligence/

Oliver Klein | Digitalisation and Innovation | T: +49 30 2028-1502 | o.klein@bdi.eu | www.bdi.eu



Table of Contents

Executive Summary
I. Comments on selected amendments of the IMCO/LIBE DRAFT REPORT
Article 1 – Subject Matter
Article 2 – Scope
Article 3 – Definitions
Article 10 – Data and Data Governance
Article 16 – Obligations for providers of high-risk AI systems
Article 23 – Cooperation with competent authorities
Article 28 – Obligations of distributors, importers, users or any other third-party
Article 29 – Obligations of users of high-risk AI systems
Article 40 – Harmonised standards
Article 41 – Common specifications
Article 44 – Certificates
Article 48 – EU declaration of conformity
Article 49 – CE marking of conformity
Article 56 – Establishment of the European Artificial Intelligence Board
Article 57 – Structure of the Board
Article 58 – Tasks of the Board
Article 59 – Designation of national competent authorities
Article 68 – Formal non-compliance
Article 70 – Confidentiality
Article 71 – Penalties
Annex III – High-Risk AI systems referred to in Article 6(2)
II. Key messages on issues not addressed in the report
Article 4 – Amendments to Annex I
Article 6 – Classification rules for high-risk AI systems
Article 24 – Obligations of product manufacturers
Article 53, paragraph 6 – Modalities and conditions of the operation of AI regulatory sandboxes
Article 61, paragraph 2 – Requirements relating to the post-market monitoring
Imprint


I. Comments on selected amendments of the IMCO/LIBE DRAFT REPORT

Article 1 – Subject Matter

Amendment 47, Article 1 – paragraph 1 – point a

Summary of Amendment: Amendment (AM) 47 provides that the development of AI systems is covered by the regulation, too.

Comments: The BDI supports the approach chosen by the European Commission in its regulatory proposal to limit the subject matter of the regulation to the placing on the market, the putting into service and the use of AI systems. To foster innovation, research and development activities should be exempted from the scope of the regulation.

Amendment 48, Article 1 – paragraph 1 – point c a (new)

Summary of Amendment: Amendment (AM) 48 adds a new point c a to Article 1 (Subject Matter) of the AI Act: ‘(c a) harmonised rules on high-risk AI systems to ensure a high level of trustworthiness and of protection of health, safety, fundamental rights and Union values enshrined in Article 2 TEU.’

Comments: The AM expands the overall regulatory objectives of the AI Act by referencing Article 2 TEU. Against this background, different provisions of the AI Act relating to high-risk AI systems also refer to Article 2 TEU (cp. e.g. AM 90 of the IMCO/LIBE draft report). In such cases, we ask the legislator to ensure that these requirements can be operationalised in practice. The same applies to the concept of ‘trustworthiness’. This objective should be fulfilled through compliance with the existing requirements of the AI Act, e.g. on transparency.

Article 2 – Scope

Amendment 50, Article 2 – paragraph 1 – point a

Summary of Amendment: AM 50 replaces the role ‘providers’ with the role ‘operators’.

Comments: With regard to the definition of roles in the AI Act, consistency should be ensured with the definitions in existing legal acts based on the New Legislative Framework (NLF).


Amendment 51, Article 2 – paragraph 1 – point c

Summary of Amendment: AM 51 adds another criterion (‘or affects natural persons within the Union’) to Article 2(1)(c) that, if present, brings providers and users of AI systems located in a third country into the scope of the regulation.

Comments: The BDI generally shares the objective of Article 2(1)(c) to prevent the circumvention of the legal requirements of the regulation and to ensure a level playing field. However, the provision is too unspecific, as important terms such as ‘output’ are not precisely defined. Moreover, it is not clear how the provision would affect, for example, the results of the analysis of large international data sets. AM 51 broadens the scope of Article 2(1)(c) even further, and it is also not clear what ‘affects natural persons’ means exactly, as ‘affect’ can be understood very broadly. The BDI recommends changing the wording as follows: ‘directly affects natural persons’. In any case, an obligation for users to trace the origin of an AI result should be excluded.

Article 3 – Definitions

Amendment 55, Article 3 – paragraph 1 – point 1

Summary of Amendment: AM 55 provides for the deletion of the definition criterion ‘human-defined [objectives]’ in the definition of ‘AI system’. At the same time, it introduces the new criterion ‘hypotheses’.

Comments: The draft report basically retains the very broad definition of AI proposed by the European Commission. This definition would undoubtedly mean that conventional software and algorithms would also fall within the scope of the regulation, which would impose completely disproportionate burdens on many businesses. To ensure a specific and clearly delimitable scope of the AI Act, German industry urges the co-legislators to adopt a much narrower definition of AI. In addition, for reasons of legal certainty, the definition of AI should refer - analogous to the approach of the OECD - to the level of the ‘system’ (including hardware). Moreover, general purpose AI should be exempted from the scope of the AI Act, apart from systems that - due to their concrete application - clearly meet the criteria of one of the risk categories defined in the AI Act.

Amendment 57, Article 3 – paragraph 1 – point 14

Summary of Amendment: AM 57 expands and specifies the definition of ‘safety component of a product or system’.

Comment: The term ‘safety component’ is a key criterion for the classification of AI systems as ‘high-risk AI’ (cp. Article 6(1) of the AI Act). The definition of ‘safety component’ proposed by the European Commission as well as by the IMCO/LIBE draft report also includes damage to property. With regard to the overarching regulatory objectives of the AI Act, in particular the protection of the fundamental rights of natural persons, this inclusion is very far-reaching. Therefore, the definition of ‘safety component’ should not include - in a blanket manner - all damage to property. It should rather be limited to severe property damage which might have a negative impact on people’s health. Moreover, AM 57 adds the aspect of ‘security’ to the definition of ‘safety component’. The BDI rejects the mixing of ‘safety’ and ‘security’ at this point. Security aspects are already - and should continue to be - addressed by the provisions in Article 15 of the AI Act. In addition, coherence with the definition of ‘safety component’ in other legal acts to which the AI Act refers (e.g. the Machinery Regulation) is needed.

Amendment 61, Article 3 – paragraph 1 – point 23

Summary of Amendment: AM 61 adds ‘or a series of changes’ and ‘or to its performance’ to the definition of ‘substantial modification’.

Comment: The notion of ‘performance’ can be interpreted in very different ways. For reasons of legal certainty, this term needs to be specified. In this context, coherence with the definition of ‘substantial modification’ in other legal acts to which the AI Act refers (e.g. the Machinery Regulation) is needed, too.


Article 10 – Data and Data Governance

Amendment 96, Article 10 – paragraph 3

Summary of Amendment: Whereas the Commission’s proposal requests that data sets should be ‘free of errors’, AM 96 provides for the approach that this goal is to be achieved ‘to the best extent possible, taking into account the state of the art’.

Comment: The BDI welcomes this approach. The requirement proposed by the European Commission that data sets shall be free of errors is de facto impossible to implement in practice.

Amendment 98, Article 10 – paragraph 5

Summary of Amendment: AM 98 deletes Article 10(5) of the Commission’s proposal, which allows - on the basis of strict preconditions - the processing of ‘special categories of personal data’ for bias control.

Comment: From an industry perspective, Article 10(5) of the Commission’s proposal provides for a helpful instrument for bias control. Therefore, we plead for the retention of this provision.

Article 16 – Obligations for providers of high-risk AI systems

Amendment 106, Article 16 – paragraph 1 – point d

Summary of Amendment: AM 106 adds provisions to Article 16(1)(d) that specify the purposes for which logs have to be saved.

Comment: The BDI welcomes this specification as it provides for more legal certainty.

Article 23 – Cooperation with competent authorities

Amendment 121, Article 23 – paragraph 1

Summary of Amendment: AM 121 - among other things - extends to ‘where applicable [to] users’ of high-risk AI systems the obligation to provide a national competent authority (or, where applicable, the Board or the Commission) with ‘all the information and documentation necessary to demonstrate the conformity of the high-risk AI system’.

Comment: On the one hand, we welcome the extension of obligations at this point as it provides for a fairer distribution of responsibilities between ‘providers’ and ‘users’ of AI systems, e.g. with regard to demonstrating compliance of an AI system with Article 15(4)(2) of the AI Act. However, the provision of information should be limited to constellations in which the receiving authority can demonstrate a ‘reasoned request’. Moreover, it is not clear why the Commission and the Board should also (besides the national competent authority) receive the sensitive data.

Amendment 122, Article 23 – new paragraph 1 a

Summary of Amendment: AM 122 empowers national competent authorities and the Commission, ‘upon a reasoned request’, to request access to logs automatically generated by a high-risk AI system.

Comment: The AI Act already contains several provisions requiring companies to disclose various categories of data to authorities. Therefore, we believe that it is not necessary to introduce another - very unspecific - disclosure obligation in Article 23.

Article 28 – Obligations of distributors, importers, users or any other third-party

Amendment 131, Article 28 – paragraph 1 – point a

Summary of Amendment: AM 131 adds the possibility of contractual arrangements governing the allocation of obligations to Article 28, paragraph 1, point a of the AI Act.

Comment: The BDI welcomes this addition, as it increases the flexibility for companies to adapt the allocation of responsibilities to the specific context.

Amendment 132, Article 28 – paragraph 1 – point b a (new)

Summary of Amendment: AM 132 adds to Article 28, paragraph 1 the provision that modifying the intended purpose of a non-high-risk AI system (already placed on the market or put into service) in such a way that it has to be regarded as high-risk AI leads to a transfer of provider obligations to distributors, importers, users and other third parties, too.

Comment: We welcome this addition, as it creates a level playing field.


Amendment 134, Article 28 – paragraph 1 – point c a (new)

Summary of Amendment: AM 134 adds to Article 28, paragraph 1 the provision that the substantial modification of a non-high-risk AI system in such a way that it has to be regarded as high-risk AI leads to a transfer of provider obligations to distributors, importers, users and other third parties, too.

Comment: Analogous to AM 132, the BDI welcomes this addition as it creates a level playing field.

Article 29 – Obligations of users of high-risk AI systems

Amendment 136, Article 29 – paragraph 1 a (new)

Summary of Amendment: AM 136 adds a new obligation for users of high-risk AI systems in Article 29: ‘1 a. Where relevant, users of high-risk AI systems shall comply with the human oversight requirements laid down in this Regulation.’

Comment: This provision is too unspecific (what exactly does ‘where relevant’ mean?) and needs to be concretised in order to allow for a legally secure application.

Amendment 142, Article 29 – paragraph 5

Summary of Amendment: AM 142 specifies the purposes for which users of high-risk AI systems shall keep logs according to Article 29, paragraph 5.

Comment: The adequacy provision contained in paragraph 5 must urgently be complemented - analogously to Article 20 of the draft regulation - by the criterion of technical feasibility, as in practice technical restrictions may prevent the retention of log data to the extent, and for the period and purposes, required by Article 29.

Article 40 – Harmonised standards

Amendment 160, Article 40 – paragraph 1 a (new)

Summary of Amendment: AM 160 adds to Article 40 the requirement that all relevant stakeholders should be represented in the process of developing harmonised standards in accordance with Articles 5, 6 and 7 of Regulation (EU) No 1025/2012.

Comment: The BDI welcomes this amendment as it strengthens the participation of relevant stakeholders in the very important process of developing harmonised standards, which are the basis for the ability of companies to operationalise and implement the provisions of the AI Act effectively.

Article 41 – Common specifications

Amendment 161, Article 41 – paragraph 2

Summary of Amendment: AM 161 introduces the obligation that the Commission, when preparing common specifications, shall also consult ‘other relevant stakeholders’.

Comment: From the perspective of German industry, the development of harmonised standards should always have priority over, or be the preferred option compared to, common specifications. If the adoption of common specifications is actually necessary in justified exceptional cases, all relevant stakeholder groups shall be given the opportunity to participate in the process. Therefore, we welcome AM 161.

Article 44 – Certificates

Amendment 162, Article 44 – paragraph 2

Summary of Amendment: AM 162 provides, among other things, for shortening the period of validity of certificates from five to four years.

Comment: We prefer a period of validity of five years, as originally proposed by the Commission, in order to reduce administrative burdens for companies.

Article 48 – EU declaration of conformity

Amendment 166, Article 48 – paragraph 2

Summary of Amendment: AM 166 provides that the EU declaration of conformity should also state that high-risk AI systems meet ‘requirements related to the respect of the Union data protection rules’.

Comment: We reject the inclusion of data protection requirements at this point (provisions on the EU declaration of conformity). It is neither necessary, nor would it be coherent with the approach of how (Union) data protection law shall be complied with. This is because the processing of personal data by AI systems is - as a matter of course - already subject to the GDPR. Moreover, a concept compliant with data protection law can only be developed at a later stage, after the issuing of a declaration of conformity, when the planned (individual) usage of the respective AI system is clear and the specific circumstances of this usage are known. Even the same way of usage might pose different risks for privacy, depending on the circumstances of the individual application. At the time of the issuing of a declaration of conformity, however, it is not clear in which (different) ways the AI system can or will be used, and the specific circumstances of the (different) individual usages cannot be foreseen.

Amendment 167, Article 48 – paragraph 3

Summary of Amendment: AM 167 replaces ‘shall’ by ‘can’ in the context of drawing up a single EU declaration of conformity in cases in which high-risk AI systems are also subject to other Union harmonisation legislation.

Comment: We reject this AM. It is very important to retain ‘shall’ in order to secure coherence with other legal acts and reduce administrative burdens for companies.

Article 49 – CE marking of conformity

Amendment 170, Article 49 – paragraph 3 a (new)

Summary of Amendment: AM 170 adds a new paragraph 3 a to Article 49: ‘3 a. The CE marking shall be affixed only after assessment of the compliance with Union data protection law.’

Comment: We reject AM 170. The inclusion of data protection requirements in the context of the CE marking of an AI system is neither necessary, nor would it be coherent with the approach of how (Union) data protection law shall be complied with. This is because the processing of personal data by AI systems is - as a matter of course - already subject to the GDPR. Moreover, a concept compliant with data protection law can only be developed at a later stage, after CE marking, when the planned (individual) usage of the respective AI system is clear and the specific circumstances of this usage are known. Even the same way of usage might pose different risks for privacy, depending on the circumstances of the individual application. At the time of CE marking, however, it is not clear in which (different) ways the AI system can or will be used, and the specific circumstances of the (different) individual usages cannot be foreseen.

Article 56 – Establishment of the European Artificial Intelligence Board

Amendment 184, Article 56 – paragraph 2 a (new)

Summary of Amendment: AM 184 inserts a new paragraph 2 a into Article 56: ‘2 a. The Board shall contribute to the effective and consistent enforcement of this Regulation throughout the Union […]’

Comment: The BDI welcomes this expansion of the competences of the Board as it supports the uniform application of the AI Act, which is necessary to avoid a fragmentation of the European single market in this area.

Article 57 – Structure of the Board

Amendment 191, Article 57 – paragraph 2 a (new)

Summary of Amendment: AM 191 separates the provision that ‘the Board may establish sub-groups as appropriate for the purpose of examining specific questions’ from Article 57(2) into a new paragraph 2 a.

Comment: The provision should be complemented by the requirement that the members of these sub-groups have state-of-the-art expertise and undergo ongoing training.

Amendment 196, Article 57 – paragraph 3 c (new)

Summary of Amendment: AM 196 introduces the obligation that the Board ‘shall organise consultations with stakeholders twice a year’ and specifies the organisations that are meant by this.

Comment: Since the working results of the Board will probably have a strong binding effect in practice, comprehensive and institutionalised stakeholder participation in the work of the Board is of high importance. We welcome the obligation, introduced by AM 196, to consult relevant stakeholders. However, we would not limit the number of consultations to two per year. Mandatory stakeholder consultations should instead take place ‘when needed’.

Article 58 – Tasks of the Board

Amendment 200, Article 58 – paragraph 1 – point a a (new)

Summary of Amendment: AM 200 expands the tasks of the Board - among others - to ‘ensuring the consistent implementation of this Regulation.’

Comment: The BDI welcomes this amendment as it contributes to a more consistent implementation of the Regulation within the EU (cp. BDI’s comment on AM 184).

Amendment 201, Article 58 – paragraph 1 – point a b (new)

Summary of Amendment: AM 201 allocates the following competence to the Board: ‘(a b) examine […] any question covering the application of this Regulation and issue guidelines, recommendations and best practices […]’

Comment: As stated in Article 40 of the draft regulation, the application of harmonised standards establishes a presumption of conformity with the requirements of the AI Act. This basic principle is of utmost importance and should not be touched. Therefore, a precise delineation of harmonised standards from the guidelines, recommendations and best practices published by the Board is needed in order to avoid overlaps.

Article 59 – Designation of national competent authorities

Amendment 210, Article 59 – paragraph 4

Summary of Amendment: AM 210 supplements Article 59, paragraph 4 - among other things - with the provision that national competent authorities shall also be provided with adequate ‘technical’ resources.

Comment: Public authorities must be equipped with, or draw on, adequate resources as well as the necessary technical expertise to be able to fulfil the tasks envisaged by the regulation adequately. Therefore, we welcome that AM 210 recognises the importance of adequate technical resources.


Article 68 – Formal non-compliance

Amendments 253-261, Articles 68 a – 68 i (new)

Summary of Amendment: AMs 253-261 add a new chapter to the AI Act setting out the conditions for an intervention of the Commission in enforcing the regulation. The competences granted to the Commission in this context are very extensive (e.g. investigation and enforcement powers; the right to access data and documentation related to an AI system; the right to ‘reverse engineer the AI system’).

Comment: The legislative proposal presented by the European Commission already grants comprehensive enforcement competences to market surveillance authorities. Therefore, it must be critically questioned (also with regard to possible overlaps of competences) whether it is really appropriate that AMs 253-261 allocate similar powers to the Commission.

Amendment 262, Article 68 j (new)

Summary of Amendment: AM 262 establishes a new Article 68 j (‘Right to lodge a complaint’) giving ‘Natural persons or groups of natural persons affected by an AI system […] the right to lodge a complaint against the providers or users of such AI system […] if they consider that their health, safety, or fundamental rights have been breached.’

Comment: The BDI rejects the amendment in its present form, as it does not contain any provisions aimed at preventing an abusive use of this instrument by third parties.

Article 70 – Confidentiality

Amendment 267, Article 70 – paragraph 1 a (new)

Summary of Amendment: AM 267 adds the provision that the Commission, the Board, national competent authorities, as well as notified bodies ‘shall put in place adequate cybersecurity and organizational measures to protect the security and confidentiality of the information and data obtained in carrying out their tasks and activities.’

Comment: We welcome the obligation to implement adequate cybersecurity and organisational measures. The AI Act contains extensive disclosure obligations for companies that encompass trade secrets and other sensitive information, which have to be protected by the receiving entities in the best possible way.


Amendment 270, Article 70 – paragraph 4

Summary of Amendment: According to AM 270, confidential information may only be exchanged with regulatory authorities of third countries if - among other things - this exchange is ‘strictly’ necessary.

Comment: The BDI welcomes that the threshold for an exchange of confidential data is raised by the new wording of Article 70, paragraph 4. Nevertheless, the criterion ‘strictly necessary’ is still rather unspecific, making the legal basis for the data exchange with third countries rather opaque.

Article 71 – Penalties

Amendment 274, Article 71 – paragraph 8 b (new)

Summary of Amendment: AM 274 adds a new paragraph to Article 71, defining cases in which the Commission is allowed to impose ‘on the operator concerned fines not exceeding 2% of the total turnover in the preceding financial year […]’

Comment: Without doubt, the proposed level of sanctions is very high. Therefore, it should be reviewed again for its appropriateness.

Annex III – High-Risk AI systems referred to in Article 6(2)

Amendment 281, Annex III – paragraph 1 – point 2 – point a

Summary of Amendment: AM 281 expands point a by adding ‘or security [components]’ and ‘internet’.

Comment: German industry rejects the addition of ‘internet’ to Annex III, paragraph 1, point 2(a). From our perspective, it is factually not appropriate to classify AI systems intended to be used as safety or security components in the supply of internet across the board as high-risk AI, taking into account, for example, their positive effects on the security of supply. In this sense, the present proposal would lead to a sector-based regulation, not a risk-based approach. Moreover, the term ‘internet’ is very unspecific. In practice, this would lead to a high level of legal uncertainty for companies.


II. Key messages on issues not addressed in the report

Article 4 – Amendments to Annex I

Amendments to Annex I (Artificial Intelligence Techniques and Approaches) should not - as provided for in Article 4 of the draft regulation - be made by means of a delegated act. Since the definition of AI is an essential provision of the regulation, amendments should only be adopted in an ordinary legislative procedure. In general, delegated acts may only supplement or amend ‘non-essential’ provisions of EU basic acts (Article 290(1) TFEU). All central requirements and provisions of the AI Act must thus be defined in the legal act itself, so that the principles of the rule of law and legal certainty are respected, and may not be introduced or amended later by delegated acts.

Article 6 – Classification rules for high-risk AI systems

From an industry perspective, the definition of ‘high-risk AI systems’ in Article 6, paragraph 1 of the draft regulation, as proposed by the European Commission, is viewed very critically. The definition is clearly too broad, as it means that non-critical industrial AI applications are also regarded as ‘high-risk AI systems’. The consequence would be disproportionate regulatory requirements for providers and users of industrial AI, which would ultimately inhibit innovation. Moreover, existing national and European product safety law already lays down comprehensive safety requirements that also cover AI as a risk issue. Against this background, there is a risk of double regulation. Therefore, the European AI Regulation should only cover areas for which a regulatory gap has been demonstrated; industrial AI systems that are already regulated by existing law should be excluded from the scope of the regulation. In areas where a regulatory gap exists, additional criteria should be used to classify AI systems as high-risk, allowing a more precise case-by-case evaluation. These criteria include, for example, human supervision of an AI system or the existence of control mechanisms, as these factors can have a significant impact on the level of risk posed by an AI system.

Article 24 – Obligations of product manufacturers

According to Article 24 of the draft regulation, as proposed by the European Commission, the manufacturer of a product that contains a high-risk AI system shall take ‘the responsibility of the compliance of the AI system’ and is also, with regard to the AI system, subject to the provider obligations of the AI Act. In such cases, however, product manufacturers should be exempted from obligations that can realistically only be fulfilled by the provider of the implemented AI system. This includes, for example, the requirement in Article 16 of the draft regulation (Obligations of providers of high-risk AI systems) to draw up the technical documentation of a high-risk AI system, since the information required for the technical documentation usually remains with the provider (in particular for reasons of protecting trade secrets) and is not passed on to the product manufacturer. In addition, the provisions of Article 24 of the draft regulation should not result in a requirement to conduct a double certification of AI systems.


Article 53, paragraph 6 – Modalities and conditions of the operation of AI regulatory sandboxes

If technical details for the operation of AI regulatory sandboxes are to be defined under the procedure set out in Article 53(6) of the AI Act, companies and research institutions should be fully involved in this process.

Article 61, paragraph 2 – Requirements relating to post-market monitoring

According to the provisions of Article 61 of the draft regulation, as proposed by the European Commission, providers of high-risk AI systems shall set up a system for monitoring the AI system after it has been placed on the market, in order to continuously analyse the compliance as well as the ‘performance’ of the AI system. From BDI’s point of view, this extensive requirement for providers of high-risk AI systems can hardly be implemented in practice, as the preconditions listed in Article 61(2) of the draft regulation, in particular the availability of the data necessary to comply with the requirement, are often not met. This applies, for example, to the frequent cases in which the operational data or process data of an AI system remains, for reasons of confidentiality, entirely with the user. The same applies when a product manufacturer places a high-risk AI system on the market as part of one of its products, or puts it into operation in accordance with Article 24 of the draft regulation. In such cases, usually only the product manufacturer receives the field data of its end product, not the provider of the high-risk AI system. Against this background, the provisions of Article 61 of the draft regulation should be limited to requirements that can typically be fulfilled by a provider of high-risk AI systems.


Imprint

Bundesverband der Deutschen Industrie e.V. (BDI)
Breite Straße 29, 10178 Berlin
www.bdi.eu
T: +49 30 2028-0
EU Transparency Register: 1771817758-48
Lobby Register Number: R000534

Editors

Oliver Klein
Senior Manager Digitalisation and Innovation
T: +49 30 2028-1502
o.klein@bdi.eu

Stefanie Ellen Stündel
Senior Manager Digitalisation and Innovation
T: +32 27 921015
s.stuendel@bdi.eu

BDI document number: D 1573


