BDI position on the AI Liability Directive


POSITION | LIABILITY RULES | ARTIFICIAL INTELLIGENCE

AI Liability Directive

BDI position on the European Commission's proposal for a Directive

4 December 2024

The deliberations on the European Commission's proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence ("AI Liability Directive") from September 2022 were suspended in the last legislative period in the Council and European Parliament due to the parallel negotiations on the AI Act. The European institutions have now restarted the discussion on the draft directive. The European Commission has published a new draft adapted to the AI Act, which is currently being considered in the Council and European Parliament. The Federation of German Industries (BDI) takes the following position on the draft directive:

No need for an AI Liability Directive

The AI Act, which came into force on 1 August 2024 and must be applied from August 2026 or August 2027, depending on the risk classification, provides for extensive requirements on data disclosure, risk management and compliance with due diligence obligations, which will significantly influence and change the AI landscape in Europe. The recently adopted new Product Liability Directive (EU) 2024/2853, which must be transposed into national law by the end of 2026, has also massively expanded the liability framework for software applications and artificial intelligence and led to stricter rules. In addition, national tort law applies in parallel and, alongside product liability, also provides for the possibility of claims for damages caused by AI systems. Overall, it will be much easier in future for plaintiffs who wish to claim damages caused by AI to obtain information about the underlying AI system and enforce their claim in court.

From the BDI's point of view, there is currently no legal gap and no need for further regulation in the form of a separate AI liability directive. Instead, the newly adopted major legislative packages should first be implemented sensibly and tested in practice. Only if, after an appropriate period of application and a fresh evaluation, a need for additional regulation is proven should targeted new measures be envisaged. This would also be in line with the European institutions' much-announced goal of better regulation and the avoidance of unnecessary new rules.

Impact on AI development in Europe

Unbureaucratic and innovation-friendly framework conditions for the development and use of AI are a central prerequisite for securing the innovation and competitiveness of European industry in this important key technology. The AI Act foresees a comprehensive new regulatory package that will have far-reaching effects on the safety of the development of new AI models in Europe but will also lead to significant effort and high implementation costs for AI developers and providers due to the extensive new due diligence, reporting and risk management obligations as well as the many questions and uncertainties that remain unanswered in the AI Act.

An excessive liability framework and the concern about "fishing expeditions" and presumptions of evidence that are hard to rebut could drive many AI providers and deployers into unnecessary over-compliance with the AI Act, which could have a negative impact on innovation and AI development in Europe and further increase costs. This is particularly true if the AI Liability Directive does not follow the risk-based approach of the AI Act and, for example, also foresees rebuttable presumption rules for non-high-risk AI systems in Article 4.

Effects on national civil law

Through new legal concepts, such as disclosure procedures at the request of "potential claimants", and through significant changes to the principles of evidence, the proposed directive would have a significant impact on the general tort and civil procedural law of the Member States and shift the system of judicial enforcement of claims for damages, which has been tried and tested for decades, to the detriment of AI providers and deployers. This will present national judges with difficult questions regarding the applicable procedural rules, especially in cases where it is not clear in advance whether a damage is due to an AI application or has other causes, or where the damage has occurred due to several causes.

Case law has already shown in the past that it is able to adequately apply the existing liability rules to new technologies and developments without the need to introduce a new liability framework. The BDI is of the opinion that the existing principles can also adequately cover AI and that any changes to general national civil law, for example with regard to the burden of proof, must be left to the national legislator.

Request: The negotiations on the proposal for an AI Liability Directive should not be continued.

Instead, the provisions of the AI Act and the new Product Liability Directive must first be implemented and applied. For these reasons, there is also no need for further-reaching plans, such as extending the AI Liability Directive to the entire software sector, converting the directive into a regulation, introducing new AI categories that are not defined in the AI Act or general strict liability rules for the AI sector.

Should the negotiations on the draft directive nevertheless be continued, the BDI positions itself on the main proposals of the AI Liability Directive as follows:

Article 3 - Disclosure of evidence and rebuttable presumption of infringement

Article 3 provides that national courts may order the disclosure of relevant evidence by an AI provider or deployer. Even though similar disclosure provisions can now be found in European law in isolated cases, most recently in the Product Liability Directive, the right to disclosure still represents a foreign element in the continental European legal tradition and unnecessarily calls into question traditional principles of evidence, according to which each party must in principle present the facts in their own favour. This applies in particular if even "potential claimants" are allowed to request a disclosure of evidence. The risk of abusive "fishing expeditions" is particularly high here. Current law already provides for sufficient mechanisms to achieve appropriate results without extensive disclosure of evidence by the defendant. In German law, for example, there are the legal institutions of the "secondary burden of presentation and proof" as well as producer liability. Article 3 should therefore be deleted, or at least limited to what is appropriate and proportionate. For example, the current proposals do not provide for sufficient safeguards to protect companies from abusive disclosure or from the publication of sensitive business data or trade secrets.

Points of criticism in Article 3:

▪ Rights of the "potential" claimant: Article 3 (1) of the proposal provides that potential claimants who have not yet asserted their claim for damages in court may, under certain circumstances, apply for the disclosure of evidence. This represents a completely new legal construct, which is also not reflected in the recently adopted Product Liability Directive and is likely to lead to major implementation difficulties in national civil law. In particular, it would be necessary to narrow down the scope of potential claimants in order to prevent abusive requests for information. Recital 17 justifies this new concept in particular by stating that the possibility of accessing relevant evidence before filing an action should make it easier to identify the correct defendant. However, under the AI Act, AI developers and deployers will have to implement extensive documentation obligations and will quickly slip into the "provider" role within the distribution of roles in the value chain, which is associated with special risk management and registration obligations. It should first be examined whether the requirements in the AI Act will not be sufficient to determine the correct opposing party before new and complex pre-trial disclosure of evidence claims have to be implemented into national law.

▪ Submission of relevant evidence: It is not defined in more detail which evidence should be considered to be "relevant". Without further limitation, the disclosure claim could easily lead to disproportionate inquiries. Disclosure requests must be specific and the claimant must demonstrate a reasonable interest and the relevance of the evidence to be produced. "Relevant evidence" could be specified as “documentation, information, and logging requirements required under the AI Act or integrated into Union Harmonisation legislation listed in Section B under Annex I of the AI Act.” It must also be ensured that the defendant is granted reasonable time limits for the disclosure.

▪ Plausibility of the claim: It is important that not only the "potential" claimant, but also the actual claimant in court proceedings must sufficiently prove the plausibility of their claim for damages before disclosure is ordered. However, it would be more appropriate to raise the standard overall and require proof of the "probability" of a successful claim for damages instead of proof of "plausibility".

▪ Protection of business secrets: The draft directive provides for a certain level of protection in relation to the disclosure of trade secrets and confidential information. However, the draft merely refers to the Directive on the Protection of Trade Secrets 2016/943, which contains very abstract definitions. In order to increase legal certainty, it would be better to define in the AI Liability Directive itself which information in connection with AI is considered a trade secret worthy of protection, possibly in the form of a catalogue of examples. In particular, it should be prevented that a defendant has to disclose the algorithms or complete training data underlying the AI system.

▪ Presumption effect under Article 3(5): If the defendant does not comply with the order to disclose evidence, it is presumed that the defendant has breached a relevant duty of care, which in turn has an impact on the presumption of causality in Article 4. This is intended to increase the pressure on the defendant to comply with the disclosure of evidence. However, the provision is completely inappropriate. In the event of non-compliance with a court order, the court should be allowed to impose a customary sanction, such as a fine, to enforce the measure. Under no circumstances, however, may non-compliance with the order have legal consequences for the substance of the case and imply that the defendant is at fault. In addition, it is currently not even clear how much time the defendant has to submit the evidence before the presumption applies. If Article 3 is retained at all, paragraph 5 should at least be deleted.

Article 4 - Rebuttable presumption of a causal link in the case of fault

Article 4 provides for a presumption of causality between the defendant's fault in the form of a breach of a duty of care and the output produced by the AI system. Even if this is not formally a complete reversal of the burden of proof, the rebuttable presumption leads to the same de facto result. The AI provider or deployer must provide evidence to the contrary in order to convince the court that the presumption does not apply. Such "negative" evidence is usually difficult to provide. This leads to higher liability risks for the developers, providers and deployers of AI systems, especially if unreasonably broad and unclear factual requirements lead to a high degree of discretion on the part of national judges. If the presumption of liability is retained, it must be linked to restrictive conditions and clear definitions that ensure that causality is only presumed to an appropriate and proportionate extent. Important restrictions are:

▪ Restriction to high-risk AI systems: The fact that Article 4, unlike Article 3, also refers to harm caused by non-high-risk AI systems is inappropriate and contradicts the risk-based approach of the AI Act. AI systems that are categorised as neither prohibited nor high-risk are subject to very limited obligations under the AI Act. This should also be reflected in the liability and burden of proof rules. The rebuttable presumptions should therefore only apply, if at all, to high-risk AI systems. Otherwise, the AI Liability Directive could have a negative impact on the development and emergence of new non-high-risk AI systems in the EU, as there is a real risk that AI developers, due to the stricter liability rules of the AI Liability Directive, will always adhere to the stricter risk management obligations for high-risk AI as a precautionary measure, regardless of how their system is categorised under the AI Act. This would also unnecessarily increase corporate costs for compliance with the AI Act.

▪ Limitation of the duty of care: The limitation of fault in Article 4(1)(a) to a breach of a duty of care whose direct purpose is to protect against the damage that has occurred is important and must be retained. An even clearer limitation would be achieved by focussing only on duties of care that are defined in the AI Act and whose direct purpose is to protect against the damage that has occurred.

Non-compliance with the requirements of the AI Act must be relevant to the damage in order for the rebuttable presumption to apply.

▪ Exceptions to the presumption rule/clarification of undefined legal terms: The exceptions to the application of the presumption rule in paragraphs 4 and 5 should be retained, but they contain many undefined legal terms that require interpretation and need to be explained in more detail. This applies in particular to the terms "reasonably accessible", "sufficient evidence and expertise" and "excessively difficult". Further clarification is urgently required here.

Imprint

Federation of German Industries (BDI)

Breite Straße 29, 10178 Berlin

www.bdi.eu

T: +49 30 2028-0

Contact

Nadine Rossmann

Law, Competition and Consumer Policy

T: +32 2 792-1005 n.rossmann@bdi.eu

Polina Khubbeeva

Digitalisation and Innovation

T: +49 30 2028-1586 p.khubbeeva@bdi.eu

EU Transparency Register: 1771817758-48

German Lobby Register: R000534

BDI document number: D2020
