Colorado’s Landmark AI Law Coming Online: What Developers and Deployers Should Know
Colorado is about to become the national test case for comprehensive artificial intelligence (AI) consumer protection, and the countdown to implementation is already ticking.
The state’s first-in-the-nation AI consumer protection statute, Senate Bill 24-205, establishes affirmative duties for both developers and deployers of “high-risk” AI systems to use reasonable care to prevent algorithmic discrimination in “consequential decisions.”
This alert summarizes the statute’s coverage and key requirements, covers past and potential policy shifts, highlights litigation and enforcement considerations, and provides a practical “to-do” list for organizations that develop, license, procure or deploy high-risk AI systems in Colorado. Any company utilizing these systems should pay close attention to the bill’s requirements ahead of the June 30, 2026, go-live date.
UNCERTAIN POLITICAL AND POLICY LANDSCAPE
Though Colorado stands to be the first state with comprehensive AI consumer protection regulations, the state may scrap the whole thing and start over. Or it may be preempted by an executive order issued by President Trump last December attempting to limit state AI regulations.
Although the law was originally scheduled to apply beginning Feb. 1, 2026, its effective date was pushed to June 30, 2026, through Senate Bill 25B-004, which the legislature passed during a special session last fall and Gov. Jared Polis signed on Aug. 28, 2025.
With AI companies and their customers calling the impending regulations too onerous and consumer advocacy groups claiming they fall short, consensus has been elusive and compromise difficult. Gov. Jared Polis convened a new working group last October to rework the statute, the second time he has done so since the original bill passed in 2024, when he signed it despite publicly expressing concerns about it.
The first task force’s findings revealed more areas of disagreement than consensus, calling for “creativity” in revising the law and urging stakeholders and policymakers to continue discussions. Polis also added AI regulation to lawmakers’ task list when he called a special session last year to tackle several urgent policy issues related to the state budget and federal funding—resulting in the extension of the law’s effective date and another opportunity for stakeholders to find compromise during the 2026 regular session.
The working group has been convening weekly and is working on consensus language for a repeal-and-replace bill, which is expected to be shared publicly sometime in March.
That timing is likely to coincide with another dynamic at play: federal preemption efforts and enforcement priorities around AI regulation. The White House has directed the Department of Commerce to compile a list of “onerous” state AI laws by mid-March, and Colorado’s law is likely to appear on it. If the Department of Justice argues that SB 24-205 is preempted by President Trump’s executive order, there is no guarantee the governor or state attorney general will take the political risk of defending the much-maligned law.
For now, companies impacted should prepare for SB 24-205’s implementation but be aware that attempts to repeal or rewrite the law are likely to continue, and that any “fix” that comes this session will likely do so at the final hour in early May.
WHY THIS MATTERS TO REGULATED INDUSTRIES
SB 24-205 is not a “tech-only” bill. It is written to reach core decision-making functions across heavily regulated sectors—e.g., financial services, insurance, health care, higher education, housing, essential government services, legal services and employment—where automated scoring, screening, triage, fraud detection, eligibility determination and risk classification are already embedded in operations.
The statute’s central concept, “algorithmic discrimination,” is explicitly tethered to protected classifications recognized under state and federal law, which means compliance planning should be integrated with existing consumer protection, civil rights and sector-specific regulatory programs, not siloed as a pure IT initiative.
WHO IS COVERED
SB 24-205 distinguishes between two categories of covered entities:
- Developers: entities that develop or substantially modify a high-risk artificial intelligence system.
- Deployers: entities that use a high-risk artificial intelligence system to make, or as a substantial factor in making, a consequential decision concerning a Colorado consumer.
The statute is structured to reach both in-state and out-of-state companies. The key jurisdictional question is not where a model was built, but whether it is deployed in connection with consequential decisions concerning Colorado consumers (e.g., renters). If you develop or deploy covered AI systems to consumers in Colorado, this law will affect your business.
KEY STATUTORY CONCEPTS
SB 24-205 focuses on “high-risk” systems used in consequential decisions. In practice, that category is likely to capture many systems that:
- materially influence eligibility, access, pricing or adverse determinations in high-stakes contexts, or
- are marketed or configured to do so.
In short, the statute does not prohibit the use of high-risk AI systems in consequential decisions, but it does condition their lawful use on documented risk governance, impact assessment and reasonable-care safeguards designed to prevent algorithmic discrimination.
The statute defines algorithmic discrimination as unlawful differential treatment or impact that disfavors an individual or group on the basis of protected classifications (e.g., race, sex, disability, national origin or religion).
In practical terms, “consequential decisions” are decisions that meaningfully affect a person’s livelihood, finances, health care, housing or educational access. Examples include screening job applicants, approving or denying loans or insurance coverage, setting insurance premiums, determining eligibility for housing or public benefits, and triaging patients for care. Businesses should not assume a system falls outside the law simply because a human signs off at the end. If an AI tool scores, ranks, filters, recommends or otherwise meaningfully influences the outcome, the AI may be considered a “substantial factor” in the decision, even when a human reviews its output, and the statute’s requirements may therefore apply.
CORE OBLIGATIONS FOR DEVELOPERS
On and after June 30, 2026, developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from intended and contracted uses of a high-risk AI system. The statute provides a rebuttable presumption of reasonable care where specified practices are followed. Put another way, developers are not required to guarantee perfect outcomes, but they are expected to proactively test, document and address discrimination risks before and during deployment—and to be able to show their work if regulators ask. Examples include:
- Testing and analysis: Conduct documented bias and performance testing before release and at defined intervals. This may include disparate impact testing across protected classes, evaluation of training data for representativeness, validation of model outputs under different scenarios, and written records of any threshold or weighting adjustments made to address identified risks.
- Disclosures to deployers: Provide clear written guidance describing intended use, known limitations, performance metrics and required human oversight. Developers should communicate appropriate use parameters and foreseeable risks so deployers understand how the system should and should not be used in consequential decision-making.
- Incident and risk reporting: Implement a formal escalation process for investigating credible evidence of discriminatory outcomes. When required by statute, provide timely notice to the Colorado attorney general and affected deployers, and document investigative steps and remediation actions taken.
- Governance and documentation: Maintain organized records sufficient to demonstrate reasonable care. This may include impact assessments, testing results, data source descriptions, version histories, internal review approvals and corrective action logs. Records should be structured so the company can clearly show how risks were identified, evaluated and mitigated over time.
Developers should assume regulators will evaluate “reasonable care” using a common-sense record: what you tested, what you found, what you disclosed and what you did when issues emerged.
CORE OBLIGATIONS FOR DEPLOYERS
Deployers, often the regulated business using AI in operations, carry operational governance duties. SB 24-205 requires deployers to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. A deployer may obtain a rebuttable presumption of reasonable care by implementing the statute’s risk management policy and program requirements and by completing impact assessments on statutory timelines (as updated by SB 25B-004).
- Risk management policy and program: Adopt a written governance framework for each high-risk AI system. This should identify responsible personnel, define review and approval workflows, outline discrimination risk identification procedures, and integrate AI oversight into existing compliance functions such as fair lending, equal employment, consumer protection or privacy programs.
- Impact assessments: Conduct a documented assessment before deployment and at defined intervals thereafter, including after material model changes. Assessments should describe the system’s purpose, data inputs, decision logic at a high level, testing results, identified risks and mitigation steps. Maintain records in a format that can be produced if requested by regulators.
- Consumer notices: Develop clear disclosures informing consumers when AI is used to make or materially influence consequential decisions. Notices should explain the nature of the decision, the role of the AI system and available avenues for further information or review.
- Appeal and human review processes: Implement procedures that allow consumers to request additional information, contest adverse decisions and obtain meaningful human review where required. Ensure staff responsible for review are trained and empowered to override or reassess automated outputs when appropriate.
- Ongoing monitoring: Establish periodic review protocols to evaluate system performance and potential discriminatory outcomes after deployment. Monitoring should include defined metrics, escalation triggers, documented findings and corrective action plans when risks are identified.
ENFORCEMENT AND LITIGATION POSTURE
SB 24-205 is enforced by the Colorado Attorney General under C.R.S. § 6-1-1706. The statute does not create a private right of action. For many organizations, this shifts risk from class-action exposure to (1) AG investigations and enforcement actions, (2) parallel scrutiny by sector regulators, and (3) “follow-on” private litigation under other theories (e.g., consumer deception, contract, negligence, civil rights statutes) informed by AI-related facts.
Companies should treat AI governance artifacts (impact assessments, monitoring logs, policy documents) as potential exhibits in a future enforcement record. That does not mean minimizing documentation; it means creating disciplined, accurate, defensible documentation with counsel involved at key points, as well as appropriately balanced retention policies.
COMPLIANCE ROADMAP FOR THE NEXT 90–120 DAYS
Organizations should consider the following sequence to prepare for June 30, 2026:
- Inventory: map all AI/algorithmic systems that touch consequential decisions (including vendor tools embedded in HR, underwriting, fraud, compliance, call centers, claims and patient intake).
- Classify: identify which systems likely qualify as “high-risk” and where AI is a “substantial factor” in outcomes.
- Contracting: update procurement and vendor contracts to require (a) developer disclosures needed for deployer compliance, (b) testing and documentation support, (c) audit cooperation, and (d) indemnity/limitation-of-liability terms aligned with regulatory risk. See Brownstein’s prior alert here.
- Governance: implement a deployer risk management policy and program; assign accountable owners; define review cadence; and integrate with existing compliance controls (fair lending, EEO, UDAAP/consumer protection, privacy/security).
- Impact assessments: build or procure an assessment template and execute assessments for high-risk systems on the statute’s timeline (including review triggers for substantial modifications).
- Consumer-facing workflows: draft disclosures, build notice delivery mechanisms, and operationalize consumer request and appeal processes.
- Monitoring and incident response: define testing/monitoring metrics, thresholds, escalation paths and corrective action protocols; implement an AI response plan and team; and ensure coordination with cybersecurity and privacy incident response. Authorize designated personnel to suspend or shut down an AI system that produces discriminatory or anomalous outcomes.
- Framework adoption: adopt the National Institute of Standards and Technology (NIST) AI Risk Management Framework or ISO/IEC 42001 to support an affirmative defense against AG enforcement. Note: The act’s text requires both for the affirmative defense, but we believe the “and” should read “or,” because companies normally adopt either an ISO standard or a NIST standard, not both; adopting both would be redundant. Which standard to adopt may depend on whether your current cybersecurity framework is ISO/IEC 27001/27701 or NIST CSF 2.0.
WHAT’S NEXT
As of now, SB 24-205’s requirements take effect on June 30, 2026. Between now and then, stakeholders should monitor (1) Colorado attorney general rulemaking and guidance, and (2) additional legislative proposals that may either refine or overhaul the law’s definitions, safe harbors or implementation mechanics. Stakeholders should also monitor federal preemption developments.
Contact a member of Brownstein’s State Government Relations or Political & Public Law teams to discuss how SB 24-205 may apply to your organization’s AI systems, contracting strategy and compliance program.
This document is intended to provide you with general information regarding Colorado’s AI law. The contents of this document are not intended to provide specific legal advice. If you have any questions about the contents of this document or if you need legal advice as to an issue, please contact the attorneys listed or your regular Brownstein Hyatt Farber Schreck, LLP attorney. This communication may be considered advertising in some jurisdictions. The information in this article is accurate as of the publication date. Because the law in this area is changing rapidly, and insights are not automatically updated, continued accuracy cannot be guaranteed.