Effective 30 June 2026 · statute live

Colorado — Colorado AI Act tracker

Colorado was the first US state to pass a comprehensive, horizontal AI statute. Senate Bill 24-205 — the Colorado Artificial Intelligence Act — imposes duties on both developers and deployers of high-risk artificial intelligence systems used to make consequential decisions. After a special legislative session failed to reach a broader compromise, Governor Polis signed SB 25B-004 on 28 August 2025, pushing the effective date from 1 February 2026 to 30 June 2026. The substance of the 2024 Act was not otherwise amended. This page tracks what the statute actually requires, what 'reasonable care' looks like in practice, and what Regula can tell you about your codebase today.

Last updated: 2026-04-08  ·  Maintained by: Regula (open source)  ·  Report a correction

Live tracker

Statute
SB 24-205 (Colorado Artificial Intelligence Act), as amended by SB 25B-004  VERIFIED
Effective date
30 June 2026 (delayed from 1 February 2026 by SB 25B-004)  VERIFIED
Scope
High-risk AI systems used to make consequential decisions affecting Colorado consumers  VERIFIED
Developer duties
Reasonable care · technical documentation · public statement · notices to AG and deployers  VERIFIED
Deployer duties
Risk management policy · initial + annual impact assessments · consumer notices · website disclosure  VERIFIED
Enforcement
Colorado Attorney General — exclusive civil enforcement, no private right of action  VERIFIED
Rebuttable presumption / affirmative defence
Available to parties demonstrating compliance with a recognised AI risk management framework  VERIFIED
Implementing regulations
Colorado AG rulemaking in progress — no final rules published as of 2026-04-08  PENDING

What the Colorado AI Act actually does

The Colorado AI Act is the first US state statute to impose horizontal, sector-agnostic duties on developers and deployers of AI systems used in consequential decisions. It borrows structure from the EU AI Act but is narrower in two important ways: it applies only to high-risk systems (defined by consequential-decision use, not by Annex III category), and it enforces only through the Colorado Attorney General with no private right of action.

The statute's core concepts:

  1. High-risk AI system: an AI system that makes, or is a substantial factor in making, a consequential decision.
  2. Consequential decision: a decision with a material legal or similarly significant effect on access to, or the cost or terms of, education, employment, a financial or lending service, an essential government service, healthcare services, housing, insurance, or a legal service.
  3. Developer: a person doing business in Colorado that develops, or intentionally and substantially modifies, an AI system.
  4. Deployer: a person doing business in Colorado that deploys a high-risk AI system.
  5. Algorithmic discrimination: any condition in which use of an AI system results in unlawful differential treatment or impact that disfavours an individual or group on the basis of a protected classification.

The Act's design choice is important: consequential-decision use, not technical category, triggers coverage. A model trained for credit underwriting is only covered when actually deployed to make lending decisions for Colorado consumers. A general-purpose model is not itself in scope unless and until a deployer puts it to a consequential-decision use.
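This use-based trigger can be expressed as a simple predicate. The sketch below is ours alone: every identifier is invented, and it compresses statutory language that has carve-outs and nuance. It only illustrates that the same model can be in or out of scope depending on how it is deployed.

```python
# Illustrative only: coverage turns on consequential-decision use, not on the
# model itself. Domain names mirror the Act's categories; all identifiers here
# are our own, not statutory terms of art.
COVERED_DOMAINS = {
    "education", "employment", "finance", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

def is_high_risk_use(domain: str, substantial_factor: bool,
                     affects_colorado_consumer: bool) -> bool:
    """True when a deployment plausibly triggers the Act's high-risk duties."""
    return (domain in COVERED_DOMAINS
            and substantial_factor
            and affects_colorado_consumer)

# The same credit model, two deployments:
print(is_high_risk_use("finance", True, True))    # lending decisions: in scope
print(is_high_risk_use("finance", False, True))   # advisory output only: out of scope
```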

Developer duties under SB 24-205

A developer of a high-risk AI system doing business in Colorado must, on and after 30 June 2026:

  1. Use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the system.
  2. Provide a statement disclosing the types of high-risk AI systems the developer has developed or substantially modified, and how the developer manages known or reasonably foreseeable risks of algorithmic discrimination.
  3. Make technical documentation available to deployers — including a general description of the system, its intended uses and benefits, reasonably foreseeable uses, and harmful or inappropriate uses; the type of data used to train the system and the limitations of the data; the purpose of the system, intended outputs, and how the outputs should be interpreted and used; measures taken to mitigate risks of algorithmic discrimination; and how the system should be used, not be used, and monitored when used to make a consequential decision.
  4. Notify the Colorado Attorney General and known deployers within 90 days of discovering that the developer's system has caused, or is reasonably likely to have caused, algorithmic discrimination.
  5. Publish a public statement summarising the types of high-risk systems the developer has developed or substantially modified that are currently available to deployers.

Reasonable-care compliance is evidenced by following a recognised AI risk management framework (the statute explicitly references the NIST AI RMF and ISO/IEC 42001 as examples). Developers who document their adherence to such a framework qualify for a rebuttable presumption of reasonable care.

Deployer duties under SB 24-205

A deployer of a high-risk AI system doing business in Colorado must, on and after 30 June 2026:

  1. Implement a risk management policy and program to govern the deployer's use of the high-risk system. The policy must be reasonable considering the deployer's size and complexity and the nature of the system's use. Adherence to a recognised risk management framework (NIST AI RMF, ISO/IEC 42001) is again evidence of reasonableness.
  2. Complete an impact assessment before deploying the system, and annually thereafter, and within 90 days of any intentional and substantial modification. The assessment must cover the purpose and intended use, the categories of data used, the benefits, the known or reasonably foreseeable risks of algorithmic discrimination, the steps taken to mitigate those risks, and the post-deployment monitoring and safeguards.
  3. Notify consumers before a high-risk AI system is used to make a consequential decision about them, and provide a statement of the purpose of the system, the nature of the decision, and the deployer's contact information.
  4. Provide an adverse-decision notice where a consequential decision is adverse to the consumer, including the principal reasons for the decision, an opportunity to correct incorrect personal data, and an opportunity to appeal for human review.
  5. Publish a website disclosure summarising the types of high-risk AI systems the deployer currently deploys, how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination, and the nature and source of information collected and used.
  6. Notify the Colorado Attorney General within 90 days of discovering algorithmic discrimination caused by the deployed system.

Small deployers (those with fewer than 50 employees who do not use their own data to train the system) have a narrower set of obligations.
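The carve-out reduces to two conditions, shown here as an illustrative check. The names are ours and this simplifies the statutory exemption text, which controls.

```python
def qualifies_as_small_deployer(employee_count: int,
                                trains_on_own_data: bool) -> bool:
    # Simplified from the carve-out described above: fewer than 50 employees
    # and no use of the deployer's own data to train the system.
    return employee_count < 50 and not trains_on_own_data

print(qualifies_as_small_deployer(30, False))  # narrower obligations apply
print(qualifies_as_small_deployer(30, True))   # full deployer duties
```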

What Colorado developers and deployers should do today

The 30 June 2026 effective date gives organisations a short runway. The practical sequence:

  1. Map your Colorado exposure. If your system is used — or is contracted to be used — to make consequential decisions affecting Colorado consumers, you are in scope regardless of where you are headquartered. Enumerate each such deployment path.
  2. Adopt a recognised AI risk management framework. NIST AI RMF 1.0 and ISO/IEC 42001:2023 are explicitly referenced by the statute. Pick one and document your implementation. This is how you earn the rebuttable presumption of reasonable care.
  3. Produce developer technical documentation for every high-risk system. Regula's regula docs and regula conform commands generate Annex IV-style scaffolds that cover most of the Colorado disclosure categories β€” training data description, intended use, foreseeable misuse, mitigation measures, monitoring. The EU template goes further than Colorado requires, which is fine.
  4. Run the Article 14 oversight trace. Colorado's consumer adverse-decision notice requires an opportunity to appeal for human review. Regula's regula oversight command traces AI model outputs through cross-file call chains and flags paths where a human review gate is absent. That is the control the statute's appeal right presumes.
  5. Stand up a deployer impact assessment process. The initial assessment must be complete before deployment once the Act takes effect on 30 June 2026, and it must be refreshed annually thereafter. Regula's regula gap and regula evidence-pack commands provide inputs that map onto the statute's assessment categories.
  6. Monitor the Colorado Attorney General's rulemaking. The AG is empowered to issue implementing regulations. As of 2026-04-08 no final rules have been published. This page will update when they are.
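The "human review gate" mentioned in step 4 is an application-level pattern rather than a Regula feature. A minimal sketch of what such a gate can look like, with invented names throughout: adverse recommendations are never finalised by the model alone, which is the control the consumer's appeal right presumes.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    ai_recommendation: str        # e.g. "approve" or "deny"
    principal_reasons: list[str]  # surfaced later in the adverse-decision notice
    status: str = "pending_human_review"

REVIEW_QUEUE: list[Decision] = []

def decide(applicant_id: str, ai_recommendation: str,
           reasons: list[str]) -> Decision:
    decision = Decision(applicant_id, ai_recommendation, reasons)
    if ai_recommendation == "approve":
        decision.status = "final"      # favourable outcomes may auto-finalise
    else:
        REVIEW_QUEUE.append(decision)  # adverse outcomes wait for a human
    return decision
```

An oversight trace looks for exactly this kind of gate: a path from model output to an adverse consequential decision that never passes through a human-review step is what gets flagged.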

Where Regula fits for Colorado teams

Regula was built primarily against the EU AI Act, but the Colorado AI Act's developer and deployer categories map cleanly onto work Regula already does. Practical starting commands:

pipx install regula-ai

regula discover .               # AI systems present in the project
regula check .                  # Risk indicators across all frameworks
regula gap --project .          # Gap assessment (reuses Art 9-15 structure)
regula oversight .              # Cross-file human-review gate detection
regula docs .                   # Technical documentation scaffold
regula conform .                # Evidence pack — maps to deployer impact assessment
regula sbom --ai-bom .          # AI Bill of Materials (CycloneDX 1.7)

The NIST AI RMF mapping that gets you the Colorado rebuttable presumption is already shipped in Regula's references/framework_crosswalk.yaml. A single regula check run reports findings against the EU AI Act, NIST AI RMF, ISO/IEC 42001, OWASP LLM Top 10, and the other frameworks Regula supports.

What Regula does not do for Colorado: issue your consumer notices, track your deployer impact assessment history, or act as a compliance certificate. The statute is enforced by the Colorado AG against real organisations — Regula helps you produce the evidence, not the legal conclusion.

What we are tracking for the Colorado page

This page will be updated as the Colorado landscape moves. Specifically we are watching for:

  1. Colorado Attorney General implementing rules under SB 24-205. The AG is empowered to issue regulations defining documentation formats, notice content, and risk management framework criteria. No final rules as of 2026-04-08.
  2. Further legislative amendments. SB 25B-004 was intended to be one of multiple corrective bills. Further amendment is expected before the 30 June 2026 effective date.
  3. Enforcement signals — the first AG complaint, consent decree, or public enforcement action under the Act.
  4. Other US state statutes following Colorado's lead. Texas (TRAIGA), California (SB 53 successors), and Connecticut have active AI legislation. We will add state pages where statutes are enacted.

If you spot something we have missed, please open an issue.

Frequently asked questions

When does the Colorado AI Act take effect?

30 June 2026. The original effective date of 1 February 2026 was delayed by SB 25B-004, signed by Governor Polis on 28 August 2025. The substance of the 2024 Act was not otherwise amended.

Who does the Colorado AI Act apply to?

Developers and deployers of high-risk AI systems doing business in Colorado. 'High-risk' is defined by use — systems that make or are a substantial factor in making a consequential decision affecting Colorado consumers in education, employment, finance, essential government services, healthcare, housing, insurance, or legal services.

What is a 'consequential decision' under the statute?

A decision that has a material legal or similarly significant effect on access to, or the cost or terms of, education enrollment or opportunity, employment or employment opportunity, a financial or lending service, an essential government service, healthcare services, housing, insurance, or a legal service. Narrow carve-outs exist for anti-fraud, cybersecurity, and certain generative AI uses.

Is there a private right of action under the Colorado AI Act?

No. Enforcement is exclusive to the Colorado Attorney General. Individuals harmed by algorithmic discrimination retain their rights under existing federal and state civil rights statutes, but they cannot sue under SB 24-205 directly.

Does the Colorado AI Act recognise a compliance safe harbour?

Yes — a rebuttable presumption of reasonable care is available to parties who document adherence to a recognised AI risk management framework. The statute explicitly references the NIST AI RMF and ISO/IEC 42001 as examples. This is not an absolute defence; the Attorney General can rebut it with evidence of actual algorithmic discrimination.

Does Regula cover Colorado AI Act obligations?

Partially. Regula's NIST AI RMF mapping, Article 14 cross-file oversight trace, technical documentation generator, and gap assessment all produce evidence that maps onto Colorado's developer and deployer duties. Regula does not generate consumer notices, track impact assessment history over time, or replace legal advice. See the 'Where Regula fits' section above for practical commands.

Sources

Related reading