State legislators across the country have been working to pass bills increasing consumer protections against potential biases in artificial intelligence systems. Colorado's new law, signed by Gov. Jared Polis in May, is among the first of its kind.
Under the new legislation, developers are required to take “reasonable care” to protect consumers from the risks of algorithmic discrimination. They must provide deployers of their products with detailed information about the systems, along with the documentation needed to conduct impact assessments. Developers must also publish a publicly available statement outlining the systems they have in operation and how the risks of algorithmic discrimination are managed.
“This bill is among the first in the country to attempt to regulate the burgeoning artificial intelligence industry on such a scale. … I appreciate the sponsors’ interest in preventing discrimination and prioritizing consumer protection as Colorado leads this space,” Polis said in May.
The bill requires deployers across the state to implement a risk management policy and program for high-risk systems, conduct impact assessments of those systems and review each deployed system annually for algorithmic bias, among other requirements.
Companies that deploy AI models will need to alert the state Attorney General within 90 days of uncovering any instance of discrimination. If they are in compliance with a recognized risk management framework and have taken steps to discover and correct such violations, the legislation gives them the right to raise an affirmative defense that could, if granted, negate any legal consequences.
Disclosures such as those made to the Attorney General, along with appeal processes, are key parts of the legislation. Deployers will need to provide consumers with opportunities both to correct personal data used in decisions made by high-risk systems and, when feasible, to appeal adverse decisions.
The changes are set to go into full effect on Feb. 1, 2026.
“The goal was always to try to balance consumer protections with innovation,” said Sen. Robert Rodriguez, D-Colo., who was a prime sponsor of the bill. “[AI] is new, it’s cool, it’s neat and we all like it, but it’s got problems.”
While consumer protection was the primary focus of the bill, other sponsors said they tried to draft it so that it would not hinder innovation in the state.
Rep. Brianna Titone, D-Colo., said that building transparency requirements into the bill and working with tech companies to shape the legislation was crucial to striking a balance between overregulation and a lack of oversight.
“We have a lot of companies using AI in Colorado, and the last thing we want to do is stifle innovation of the companies that are doing a lot of great work. … It’s about trying to balance the negative unintended consequences of AI with that innovation,” Titone said.
Bank regulators have expressed differing views on the need for new laws to govern AI. In September, the
In January, officials with the
Financial institutions and fintech firms alike have called for greater clarity about how new rules could affect their industries, but by and large they felt prepared for any new compliance guidelines because they already operate in a highly regulated environment.
Legal experts specializing in the technology say that when adopting AI-powered tools, recordkeeping and audit reviews are a best practice, and in some cases a requirement, given the complexity of the models.
“One message is clear from new laws and enforcement: ignorance of an AI system’s operation is not an excuse, and regulators like the Federal Trade Commission are holding both the developers and users of AI technologies responsible for their effects,” said James Sherer, partner at BakerHostetler and co-leader of the firm’s emerging technology and AI teams.
Alongside Colorado’s new bill, lawmakers in the state have passed another measure to
The path forward for innovation is uncertain and will vary widely depending on the use case.
“We are likely to see some form of tollgate approach, where low-risk use cases are encouraged with minimal oversight; moderate-risk use cases receive increased scrutiny, transparency and reporting; and high-risk use cases involving substantial or fundamental rights [are subject to] strict requirements and, in certain cases, prohibitions,” said Joel Wuesthoff, managing director at the global consulting firm Protiviti.