Analyzing the Colorado AI Act


David Stauss, Partner, CIPP/US/E, CIPT, FIP, PLS

Erik Dullea, Partner, CIPP/US, CIPMM

Shelby Dolen, Attorney, CIPP/US

Owen Davis, Attorney

Roadmap

1. Background and Timeline

2. Scope and Relevant Definitions

3. Developer Obligations

4. Deployer Obligations

5. AI Disclosure Obligations

6. Exemptions

7. What to Do Now

8. Other AI Laws & Pending Bills

9. Resources

Background and Timeline

Multistate Work Group

Organized by Connecticut Senator James Maroney

Facilitated by Future of Privacy Forum

Bi-partisan group of lawmakers from nearly 30 states

Met seven times during last summer and fall

Heard from AI experts from across multiple fields and geographies

What Problem Does the Law Try to Address?

“In recent years, algorithmic decision-making has produced biased, discriminatory, and otherwise problematic outcomes in some of the most important areas of the American economy.”

“Mounting evidence reveals that algorithmic decisions can produce biased, discriminatory, and unfair outcomes in a variety of high-stakes economic spheres including employment, credit, health care, and housing.”

FTC Commissioner Rebecca Kelly Slaughter, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, Yale Journal of Law & Technology, August 2021

Balancing Act

Concerns about regulating:

• Regulating in a complex and rapidly changing area hurts businesses

• Regulation hurts innovation

• A patchwork of state laws hurts businesses

Counterpoints:

• The US cannot fall behind as it did with privacy regulation

• Guardrails can provide for responsible innovation

• Businesses are protected through the rebuttable presumption, the affirmative defense, and the absence of a private right of action

• The law is limited in scope (its current focus is discrimination in high-risk applications)

Basic Framework

Responsible deployment of high-risk AI systems:

• Provide disclosures and notices

• Review as needed

• Provide consumers with rights

• Notify relevant parties if a high-risk AI system discriminates

Timeline

May 17, 2024

• Signed into law

Aug. 1, 2024

• Task force creation deadline

Feb. 1, 2025

• Task force report deadline

Feb. 1, 2026 (1 year, 8 months, 15 days after signing)

• Law takes effect

Scope and Relevant Definitions

Applicability: High-Risk AI System

An artificial intelligence system that is deployed and either:

• Makes a consequential decision, OR

• Is a substantial factor in making a consequential decision

Key Terms

Artificial Intelligence System

“Any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments”

Deployed

• To “use a high-risk artificial intelligence system”

Consequential Decision

A decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:

• Education enrollment or education opportunity

• Employment or employment opportunity

• Essential government service

• Financial or lending service

• Healthcare services

• Housing

• Insurance

• Legal service

Substantial Factor

A factor that:

1. assists in making a consequential decision;

2. is capable of altering the outcome of a consequential decision; and

3. is generated by an artificial intelligence system

Includes

Any “use of an artificial intelligence system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make a consequential decision concerning the consumer”

Exclusions

AI System intended to:

• perform a narrow procedural task, or

• detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review

Exclusions

Unless it makes or is a substantial factor in making a consequential decision:

• Anti-fraud technology that does not use facial recognition technology

• Anti-malware

• Anti-virus

• Artificial intelligence-enabled video games

• Calculators

• Cybersecurity

• Databases

• Data storage

• Firewall

• Internet domain registration

• Internet website loading

• Networking

• Spam and robocall-filtering

• Spell-checking

• Spreadsheets

• Web caching

• Web hosting or any similar technology; or

• Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.

Developer Obligations

Developer v. Deployer

Developer

• Person doing business in Colorado that develops or intentionally and substantially modifies an AI system

• “Intentional and substantial modification”: a deliberate change made to an AI system that results in a new reasonably foreseeable risk of algorithmic discrimination (subject to exceptions)

Deployer

• Person doing business in Colorado that uses a high-risk AI system

Developer obligations:

• Duty of care

• Disclosures to deployers

• Public disclosures

• Notify of algorithmic discrimination

Duty of Care

Standard

• Developer of high-risk AI system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk AI system.

Rebuttable Presumption

• If a developer complies with the law’s requirements, there is a rebuttable presumption that the developer used reasonable care.

Algorithmic Discrimination

Definition

• Any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.

Does Not Include

• Self Testing: Use of high-risk AI system for sole purpose of self-testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state and federal law;

• Promote Diversity: Use of high-risk AI system for sole purpose of expanding an applicant, customer, or participant pool to increase diversity or redress historical discrimination; or

• Private Club: Act or omission by or on behalf of a private club or other establishment that is not in fact open to the public, as set forth in Title II of the federal Civil Rights Act of 1964

Disclosures to Deployers

General Statement

• Describing reasonably foreseeable uses and known harmful or inappropriate uses of system

Documentation disclosing:

• Summaries of data used to train system

• Known or reasonably foreseeable limits of system

• Purpose of system

• Intended benefits and uses

• Information necessary for deployer to comply with its obligations, including information necessary for impact assessments

Disclosures to Deployers (cont.)

Documentation describing:

• How the system was evaluated to mitigate algorithmic discrimination

• Data governance measures

• Intended outputs of the system

• Measures taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination

• How the system should and should not be used, and how it should be monitored

Additional documentation:

• Any additional documentation reasonably necessary to assist the deployer in understanding outputs and monitoring system performance

Public Disclosures

High-Risk AI Systems

• Types of high-risk AI systems developer has deployed or intentionally and substantially modified and makes available to a deployer or other developer

Risk Mitigation

• How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from those types of high-risk AI systems

Notify of Algorithmic Discrimination

Within 90 days, a developer must notify the Attorney General, and all known deployers or other developers of the high-risk AI system, of any known or reasonably foreseeable risk of algorithmic discrimination arising from the intended uses of the system that the developer learns of through (1) its own testing or (2) a credible report from a deployer

Deployer Obligations

Deployer obligations:

• Duty of care

• Risk management policy and program

• Impact assessments

• Consumer disclosures

• Public disclosures

• Notify of algorithmic discrimination

Duty of Care

Standard

• Deployer of high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.

Rebuttable Presumption

• If a deployer complies with the law’s requirements, there is a rebuttable presumption that it used reasonable care.

Risk Management Policy and Program

Risk Identification and Mitigation for Deployment of High-Risk AI Systems

• Specify and incorporate principles, processes and personnel used to identify, document and mitigate known or reasonably foreseeable risks of algorithmic discrimination

Review and Update

• Be an iterative process subject to systematic, regular review and updating during the high-risk AI system’s life cycle

Reasonable

• Be reasonable considering guidance and standards from the NIST AI Risk Management Framework or another framework recognized by the AG; the size and complexity of the deployer; the nature and scope of the high-risk AI system; and the sensitivity and volume of data processed by the system

Impact Assessment

Statement of the purpose, intended use cases, deployment context of, and benefits afforded by the system

Whether system poses known or reasonably foreseeable risk of algorithmic discrimination and, if so, steps taken to mitigate risks

Description of categories of data system processes as inputs and outputs system produces

If deployer used data to customize system, an overview of categories of data

Any metrics used to evaluate the performance and known limitations of the system

Description of transparency measures taken, including measures taken to disclose system to consumers

Description of post-deployment monitoring and user safeguards

Annual Review

At least annually, the deployer, or a third party contracted by the deployer, must review the deployment of each high-risk AI system to ensure that it is not causing algorithmic discrimination.

Consumer Disclosures

Notice

• Deployer must notify consumers that it has deployed system to make, or be a substantial factor in making, a consequential decision

Statement

• Provide statement disclosing purpose of system, nature of consequential decision, contact information, plain language description of system and how to access public statement (discussed below)

Opt Out of Profiling

• Provide information regarding the right to opt out of profiling under the Colorado Privacy Act, if applicable

Notice of Adverse Decision

If the system produces an adverse consequential decision, the deployer must disclose:

• The degree to which, and manner in which, the system contributed to the decision

• The type of data processed by the system to make the decision

• The source of that data

Adverse Decision Consumer Rights

Correction

• Opportunity to correct any incorrect personal data that system processed to make decision

Appeal

• Opportunity to appeal decision unless not in best interest of consumer

• If technically feasible, allow for human review

Public Disclosures

High-Risk Systems

• Types of high-risk AI systems that the deployer currently deploys

Risk Mitigation

• How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination

Information Collected and Used

• In detail, the nature, source, and extent of the information collected and used by the deployer

Small Business Exemption

Exemption

• Deployer does not have to comply with risk management policy and program, impact assessment, and public disclosure requirements if:

Requirements

• It has fewer than 50 full-time equivalent employees and does not use its own data to train the system;

• The system is used for the intended uses disclosed by the developer;

• The system continues learning based on data derived from sources other than the deployer’s own data; and

• The deployer makes available any impact assessment the developer provides that includes the required information

Notify of Algorithmic Discrimination

Within 90 days, a deployer must notify the Attorney General if it discovers that a high-risk AI system has caused algorithmic discrimination

AI Disclosure Obligations


Obligation

Deployers and certain developers that make available an AI system intended to interact with consumers must disclose to those consumers that they are interacting with an AI system

Exception

It would be obvious to a reasonable person that they are interacting with an AI system

Exemptions

General Exemptions

Comply with law

Cooperate with law enforcement

Investigate/defend legal claims

Prevent, detect, respond to security incidents, fraud, etc. (except for use of facial recognition technology)

Engage in certain types of research

Effectuate product recall

Identify and repair technical errors that impair existing or intended functionality

Federal Exemptions

High-risk AI system approved, authorized, certified, cleared, developed or granted by federal agency

High-risk AI system in compliance with substantially equivalent standards established by federal agency

Conducting research to support application for approval or certification from federal agency

Performing work under, or in connection with, contract with certain federal departments unless used for employment or housing

AI system acquired by or for federal government or federal agency or department unless used for employment or housing

Industry-Specific Exemptions

Health

• Used by covered entity and is providing healthcare recommendations that (1) are generated by AI system, (2) require healthcare provider to take action to implement the recommendations and (3) are not high-risk.

Insurers

• Insurer, fraternal benefit society or developer of AI system used by insurer if subject to C.R.S. § 10-3-1104.9 and rules adopted by Commissioner of Insurance

Banking

• Bank, out-of-state bank, credit union, out-of-state credit union, or any affiliate or subsidiary if subject to regulatory regime that is substantially equivalent to law and includes auditing and risk mitigation

Enforcement


Attorney General Enforcement

• Civil penalty of up to $20,000 per violation

• No private right of action

• No district attorney enforcement

Affirmative Defense – Deployer will have the Burden

• (1) Discovers and cures the violation; and

• (2) Is otherwise in compliance with the NIST AI RMF or another recognized framework

Limits for Rebuttable Presumption & Affirmative Defense

Attorney General Rulemaking


• Permissive, not mandatory

• No timetable

Topics

1. Documentation and requirements for developers

2. Contents of and requirements for notices

3. Content and requirements for risk management policy and program

4. Content and requirements of impact assessments

5. Requirements for rebuttable presumptions

6. Requirements for affirmative defense

What to Do Now


AI Inventory

• Understand current and considered uses

Contracts

• Negotiate terms between developers and deployers (e.g., indemnification, data usage/ownership)

Risk Management Program

• Map practices to a framework

• Assess uses (e.g., impact assessments)

Other AI Laws & Pending Bills

Utah (SB 149)

• Disclosure obligations for use of generative AI

• Effective May 1, 2024

California

• Currently considering ten AI-related bills

• Summary of bills will be published on Byte Back after webinar
