Concept · Updated Apr 18, 2026
AI System Deployer
Tags: ai-actors, deployment, colorado-law
- Jurisdiction: US-CO
- Effective: 2026-02-01
Under the colorado-ai-act-sb24-205, a deployer is an entity that uses a high-risk-artificial-intelligence-system to make or substantially influence consequential decisions affecting consumers. Deployers bear primary responsibility for ensuring AI systems operate without causing algorithmic-discrimination.
Key Obligations
Deployers must:
- Risk Management: Implement comprehensive risk management policies and programs
- Impact Assessment: Complete impact assessments before deploying high-risk AI systems
- Annual Reviews: Conduct yearly reviews to ensure systems are not causing algorithmic discrimination
- Consumer Notification: Inform consumers when an AI system makes, or is a substantial factor in making, a consequential decision about them
- Data Correction: Provide consumers opportunities to correct incorrect personal data processed by the system
- Appeal Process: Offer consumers the ability to appeal adverse decisions, with human review when technically feasible
- Public Transparency: Publish statements about deployed systems, risk management practices, and data collection
- Incident Reporting: Disclose discovered algorithmic discrimination to the colorado-attorney-general within 90 days
Reasonable Care Standard
Deployers must use reasonable care to protect consumers from algorithmic discrimination. Compliance with specified provisions creates a rebuttable presumption of reasonable care.
Consumer Rights
The law establishes specific rights for consumers interacting with deployer-operated AI systems, including notification, correction, and appeal rights, reflecting a consumer-protection focused approach to AI regulation.