ACI Insights - U.S. Implications of the EU AI Act



U.S. Implications of the EU AI Act


The European Union (EU) is leading the global charge on AI regulation. U.S. companies are not beyond its regulatory reach, however, and should be preparing their AI risk mitigation efforts accordingly.

“The EU is taking a highly regulated approach to AI,” said Edward Turtle, an associate at law firm Cooley in the firm’s London office. At the center of it all is “a completely new, bespoke AI regulatory regime, commonly referred to as the AI Act.”

On May 21, 2024, the Council of the European Union gave final approval to the AI Act, following the EU Parliament’s vote to adopt the legislation on March 13. The Council’s final vote cleared the way for the Act’s formal signature and publication in the Official Journal of the EU.

Turtle noted that the AI Act’s significance lies in being “the first law in the world to specifically regulate AI technologies on a horizontal basis, meaning across all sectors.”

From a legal and compliance standpoint, the AI Act’s international scope means it regulates AI systems deployed by companies wherever they are located in the world, including U.S. companies, so long as those systems affect users in the EU.

Moreover, the reverberations of the AI Act could reach other countries, which may use it as a regulatory blueprint, so even companies whose AI systems do not affect users in the EU should be on alert. “Many are predicting that EU regulation on AI will be influential in determining how AI systems are regulated elsewhere in the world, including in the United States,” Turtle said.

High-Risk AI Systems

As laid out by the European Commission, the AI rules will, in part:

• Address risks specifically created by AI applications;

• Prohibit AI practices that pose unacceptable risks;

• Determine a list of high-risk applications;

• Set clear requirements for AI systems for high-risk applications; and

• Define specific obligations for deployers and providers of high-risk AI applications.

The AI Act takes a risk-based approach, meaning that the higher the risk an AI application poses, the more compliance obligations companies must meet. AI systems that pose a “clear threat to the safety, livelihoods, and rights of people will be banned,” the European Commission stated.

Examples of AI systems identified as high-risk include AI used in critical infrastructures that could put the life and health of citizens at risk; product safety components – for example, AI applications in robot-assisted surgeries; and credit scoring that denies certain citizens opportunities to obtain loans. These are just a few examples; there are several other ways in which AI could pose a high risk.
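To make the tiering concrete, here is a minimal, purely illustrative sketch in Python. The tier names, obligation summaries, and example classifications paraphrase this article; they are assumptions for illustration, not the Act’s legal definitions.

```python
from enum import Enum

# Hypothetical risk tiers mirroring the Act's risk-based approach as
# summarized in this article -- not the legal text's definitions.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, testing, data training, cybersecurity"
    LIMITED = "transparency obligations (e.g., chatbot disclosure)"

# Illustrative classification of the examples cited above.
examples = {
    "AI safety component in robot-assisted surgery": RiskTier.HIGH,
    "credit scoring used in loan decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} -> obligations: {tier.value}")
```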

A briefing issued by the European Parliament explained that providers of high-risk AI systems will have to run a “conformity assessment procedure” before their products can be sold and used in the EU. They’ll need to comply with a range of requirements, including for testing, data training, and cybersecurity. In some cases, they’ll have to conduct a “fundamental rights impact assessment” to ensure their systems comply with EU law.

The conformity assessment should be carried out either based on a self-assessment or with the involvement of a notified body. Compliance with European harmonized standards, which are yet to be developed, will grant high-risk AI systems providers a “presumption of conformity,” the briefing stated. “After such AI systems are placed in the market, providers must implement post-market monitoring and take corrective actions if necessary,” the European Parliament’s briefing explained.

The AI Act further introduces “limited risk” transparency obligations. If a company uses chatbots, for example, humans should be made aware that they are interacting with a machine so they can make an informed decision whether or not to continue, the European Commission advised.

The European Commission further advised providers “to ensure that AI-generated content is identifiable.” Additionally, the AI Act requires that AI-generated text published to inform the public on matters of public interest be labeled as “artificially generated,” a requirement that also covers “audio and video content constituting deep fakes.”
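As a rough illustration of these transparency obligations, the sketch below shows one way a deployer might wire in the disclosure and labeling ideas. The helper names, disclosure wording, and metadata fields are assumptions for illustration, not language mandated by the Act.

```python
# Minimal sketch of the transparency obligations described above.
# The disclosure wording and metadata format are illustrative assumptions,
# not language prescribed by the AI Act.

def chatbot_reply_with_disclosure(reply: str) -> str:
    """Prefix a chatbot reply so users know they are talking to a machine."""
    return "You are interacting with an automated system. " + reply

def label_generated_content(content: bytes, media_type: str) -> dict:
    """Attach an 'artificially generated' marker to content metadata."""
    return {
        "media_type": media_type,  # e.g., "text", "audio", "video"
        "artificially_generated": True,
        "content": content,
    }

print(chatbot_reply_with_disclosure("How can I help you today?"))
```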

Hefty Fines

Fines for non-compliance depend on the type of violation, but compliance failures will be costly nonetheless. For the most severe violations, the prohibited use of AI systems, a company can face fines of up to 35 million euros or up to 7% of its annual worldwide turnover for the preceding financial year, whichever is higher.

Other less severe AI violations may result in fines of up to 15 million euros, or 3% of a company’s annual worldwide turnover for the preceding financial year, whichever is higher. At the least severe end, providing incorrect or misleading information can result in fines of up to 7.5 million euros, or 1% of a company’s annual worldwide turnover, whichever is higher.
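To see how the “whichever is higher” mechanics play out, here is a minimal worked sketch; the function and tier names are hypothetical, and the figures simply restate the caps above.

```python
# Illustrative sketch of the AI Act's tiered fine caps as described above.
# The tier names and this function are hypothetical, not from the Act's text.

def max_fine_eur(tier: str, annual_worldwide_turnover_eur: float) -> float:
    """Return the maximum possible fine: the higher of a fixed cap and a
    percentage of the prior year's worldwide turnover."""
    caps = {
        "prohibited_use": (35_000_000, 0.07),   # most severe violations
        "other_violation": (15_000_000, 0.03),  # other obligations
        "misleading_info": (7_500_000, 0.01),   # incorrect/misleading information
    }
    fixed_cap, pct = caps[tier]
    return max(fixed_cap, pct * annual_worldwide_turnover_eur)

# Example: a company with 2 billion euros in turnover faces up to
# max(35M, 7% of 2B) = 140 million euros for a prohibited use.
print(f"{max_fine_eur('prohibited_use', 2_000_000_000):,.0f}")  # 140,000,000
```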

AI Act Compliance Obligations

In light of the AI Act’s passage, and taking into consideration the Act’s provisions, companies may want to consider starting with the following baseline measures:

• Assess whether, where, and how the company uses AI. Make this a cross-functional effort, getting insights from all key stakeholders throughout the business (a simple inventory record is sketched after this list).

• Determine whether the use of the AI system(s) falls under the high-risk or limited-risk category. As required by the AI Act, run a “conformity assessment procedure.” For those in the banking and insurance industries, in particular, a “fundamental rights impact assessment” may be needed to “efficiently ensure that fundamental rights are protected,” the AI Act states.

• Develop appropriate, risk-based policies and procedures for employees and third-party vendors, and train and communicate on those new policies and procedures. Define specific obligations for deployers and providers of high-risk AI applications.
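For the first step above, one way to structure the cross-functional AI inventory is a simple record per system. The field names below are illustrative assumptions, not a schema prescribed by the AI Act.

```python
from dataclasses import dataclass, field

# Illustrative record for an internal AI-system inventory. Field names are
# assumptions for this sketch, not a schema prescribed by the AI Act.
@dataclass
class AISystemRecord:
    name: str
    business_owner: str            # accountable stakeholder
    role: str                      # "provider" or "deployer"
    affects_eu_users: bool         # triggers the Act's extraterritorial scope
    risk_category: str             # e.g., "high", "limited", "minimal"
    conformity_assessment_done: bool = False
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="loan-underwriting-model",
        business_owner="Credit Risk",
        role="deployer",
        affects_eu_users=True,
        risk_category="high",
    ),
]
high_risk = [r.name for r in inventory if r.risk_category == "high"]
print(high_risk)  # ['loan-underwriting-model']
```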

Legal counsel and compliance officers also will want to stay on top of the latest AI regulations and industry-leading standards. One example is the National Institute of Standards and Technology’s AI Risk Management Framework (AI RMF) and its recently released draft AI RMF Generative AI Profile, which NIST said is intended “to help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management that best aligns with their goals and priorities.”

“Compliance with international AI and life sciences regulations will be key to mitigating liability risks,” Turtle concluded, “meaning that maintaining compliance with increasingly complex global regulations will be more important than ever.”

ACI will be holding its conference on “AI Law, Ethics, Safety & Compliance” on Sept. 25–26 in Washington DC. For more information, and to register, please visit: https://www.americanconference.com/AI-Law/


For questions, concerns or more information about ACI Insights, please contact:

American Conference Institute | The Canadian Institute | C5 e: c.corbin@americanconference.com
