Does the Colorado AI Act Apply to Your AI Tools?


This article is part of a seven-part series on the Colorado AI Act (SB 24-205). This is Post 2. In Post 1, I covered what the law is and why it matters. In this post, I want to answer a more practical question: what AI is actually covered, and what is not.


Status note (as of April 2026): SB 24-205 is currently scheduled to take effect on June 30, 2026. The law is also receiving increased scrutiny, including an active lawsuit and public reporting that legislators are working on a possible amendment in the current session. This post reflects what is known today and is intended for general information only. Businesses should monitor updates and consult qualified legal counsel on final compliance obligations.




You are probably already using AI

I was working with an advisory firm in Centennial recently. The managing partner told me they did not use AI.

We spent twenty minutes going through his software stack.

His CRM had AI-driven lead scoring. His HR platform used AI to rank resumes. His Microsoft 365 environment had Copilot-style features quietly turned on. Two of his staff were using ChatGPT daily for client research.

He was not wrong that his firm had not “adopted AI.” He just did not realize how much AI had already arrived without anyone making a formal decision.

That is the situation most professional services firms in South Denver are in right now. The question is not “do we use AI?” The question is whether you use AI in a way the Colorado AI Act treats as high-risk.


What does Colorado mean by “AI system”?

The law uses a broad definition. Under SB 24-205, an "artificial intelligence system" is, roughly, any machine-based system that infers from the inputs it receives how to generate outputs, such as content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

In plain English: if software can generate outputs like predictions, recommendations, classifications, or decisions, and those outputs can influence real-world outcomes, it may qualify as an AI system under the statute.

This matters because you do not have to build AI to be in scope. You can be in scope simply by using a tool that includes AI features. And many of the platforms that law firms, financial advisors, and consultancies across Greenwood Village and the Denver Tech Center rely on every day now include AI whether you asked for it or not.


The real trigger: high-risk AI used for consequential decisions

The Colorado AI Act is not trying to regulate every use of AI. It is focused on AI used in what the law calls high-risk contexts.

Here is how to think about it simply.

What is a “consequential decision”?

A consequential decision is one that has a material effect on a person’s access to important opportunities, services, or benefits, or on the cost and terms on which they are offered.

The law calls out specific areas:

  • Employment and hiring
  • Housing
  • Lending and credit
  • Insurance
  • Healthcare
  • Education
  • Legal services
  • Essential government services

If you run a professional services firm in Littleton, Lone Tree, or Highlands Ranch and you are thinking “some of those touch my business,” you are probably right. Hiring decisions alone put many firms in scope.

What makes an AI system “high-risk”?

An AI system becomes high-risk when it makes, or is a substantial factor in making, a consequential decision.

That can happen two ways:

  • Direct decisioning: the system produces the decision automatically.
  • Decision support: the system produces a score, ranking, or recommendation that a human relies on.

That second category is where most businesses get caught. Many firms assume that if a person makes the final call, they are out of scope. That is not a safe assumption. If the AI output materially shapes the decision, the law may still treat it as high-risk.


Examples: what is likely high-risk, what is likely not, and what falls in between

These are not legal determinations. They are meant to help you think clearly about where the lines tend to fall.

Likely high-risk

  • Resume screening or candidate ranking tools used in hiring
  • Tenant screening tools used for housing decisions
  • Loan or credit scoring and underwriting assistance
  • Insurance risk scoring or claims triage that affects eligibility or outcomes
  • Healthcare prioritization tools that affect access to services
  • Education admissions, placement, or scholarship screening tools

Likely not high-risk (usually)

  • Marketing copy assistants
  • Meeting notes summarization
  • Chatbots answering general FAQs with no influence on eligibility, pricing, or access
  • Internal brainstorming and drafting support that does not feed into consequential decisions

Gray area (depends on how you use it)

  • Customer “risk” or “trust” scoring that leads to denying service
  • Client segmentation that changes pricing, eligibility, or access to services
  • Fraud detection models that trigger account closure or service denial
  • Tools that summarize client data and suggest next actions for an advisor or attorney

One rule of thumb I use with firms across South Denver: if someone can be meaningfully harmed by the decision, treat it as consequential until you prove otherwise.
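If it helps to make that rule of thumb concrete, here is a minimal triage sketch in Python. This is my own illustration for sorting tools into review buckets, not a legal test; the domain list, field names, and tiers are shorthand I made up, not terms from the statute.

```python
from __future__ import annotations

# Illustrative triage only -- not a legal test. The domain list and
# the three buckets are my own shorthand, not terms from SB 24-205.

CONSEQUENTIAL_DOMAINS = {
    "employment", "housing", "lending", "insurance",
    "healthcare", "education", "legal", "government",
}

def triage(tool: str, domain: str | None,
           influences_decision: bool, could_harm_someone: bool) -> str:
    """First-pass sort of one AI tool into a review bucket."""
    # Output feeds a decision in a covered area, whether the system
    # decides directly or a human relies on its score or ranking.
    if domain in CONSEQUENTIAL_DOMAINS and influences_decision:
        return f"{tool}: treat as high-risk -- full review"

    # No covered area involved and no plausible harm: probably low
    # risk (drafting, summarizing, brainstorming).
    if domain not in CONSEQUENTIAL_DOMAINS and not could_harm_someone:
        return f"{tool}: likely low risk -- document it and move on"

    # Everything else is gray area: someone could be harmed, so treat
    # the decision as consequential until you can show otherwise.
    return f"{tool}: gray area -- review how the output is used"

print(triage("resume ranking", "employment", True, True))
print(triage("meeting notes bot", None, False, False))
print(triage("fraud scoring", None, True, True))
```

Notice the ordering: you only reach "low risk" by confirming both that no covered area is involved and that no one can be meaningfully harmed. That mirrors the rule of thumb above.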


Common mistake: “internal use” does not automatically mean “low risk”

I hear this a lot: “We only use AI internally.”

That can be true and still be beside the point.

The deciding factor is not where the tool runs. It is what the output is used for.

Two examples:

  • If you use AI internally to draft a marketing email, that is typically low risk.
  • If you use AI internally to score job candidates, rank applicants, or summarize resumes in a way that shapes the hiring decision, that can be high-risk even though the tool never touches a client-facing screen.

I have seen this in firms across Greenwood Village and the DTC. The tool is internal. The impact is not.

Do not use “internal” as a shortcut. Use the actual question: does this AI influence a consequential decision?


Developer vs deployer: a quick reminder

If you read Post 1, you will remember the law distinguishes between two roles:

  • Developers: the organizations that build or substantially modify an AI system.
  • Deployers: the organizations that use a high-risk AI system.

Most small and mid-size businesses I work with are deployers. They did not build the tool. They bought it or subscribed to it. They turned it on. That still creates obligations if the system is used in a high-risk context.

And using a vendor-built tool does not transfer your responsibilities to the vendor. I covered this in Post 1 and it will come up again in Post 6 when we talk about vendor risk.


Where the NIST AI Risk Management Framework fits

If you are wondering how a business is supposed to manage all of this responsibly, you are asking the right question.

The law references recognized risk management frameworks as part of demonstrating what it calls “reasonable care.” One widely used approach is the NIST AI Risk Management Framework (AI RMF).

You do not need to become an AI research lab. At a high level, NIST gives you a simple structure:

  • Govern: Assign ownership. Set rules for how AI is used.
  • Map: Identify where AI is being used and what it affects.
  • Measure: Test for problems. Monitor for issues.
  • Manage: Fix what needs fixing. Respond when something goes wrong.

That is the mindset. It is not about creating a mountain of paperwork. It is about being able to show that you took reasonable, structured steps to prevent harm.

For most professional services firms in Colorado, this is achievable. You just have to start.
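To make the Map step concrete, here is one way a small firm might structure its first AI inventory, sketched as Python records. The format and every field name are my own suggestion, not a NIST or statutory template; substitute your actual tools and owners.

```python
# One record per AI tool or AI feature in use. The fields are my own
# suggestion, not a NIST or statutory template.
inventory = [
    {
        "tool": "CRM lead scoring",
        "who_uses_it": "sales team",
        "output": "lead priority score",
        "covered_domain": None,            # none of the listed areas
        "feeds_consequential_decision": False,
        "owner": "operations manager",
        "next_step": "document and revisit quarterly",
    },
    {
        "tool": "Resume ranking in HR platform",
        "who_uses_it": "hiring managers",
        "output": "ranked candidate list",
        "covered_domain": "employment",    # hiring is a covered area
        "feeds_consequential_decision": True,
        "owner": "HR lead",
        "next_step": "request vendor documentation; define human review",
    },
]

# Surface what needs attention first.
for record in inventory:
    if record["feeds_consequential_decision"]:
        print(f"HIGH PRIORITY: {record['tool']} -> {record['next_step']}")
```

Even a spreadsheet with these columns puts you ahead of most firms. The point is that someone owns the list, and that it captures AI features that arrived inside tools you already pay for, not just tools you deliberately bought.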


What should you do next?

Do not guess. Get visibility.

The practical next step is to inventory your AI use, identify likely high-risk use cases, and decide what needs to be governed more carefully.

I cover a first 30-day action plan in Post 3 (Your First 30 Days: AI Assessment, Inventory, and Policy Basics). That post walks through the practical steps, including how to put a basic AI use policy in place. If you are struggling with where to start, Post 3 is the place to begin.


How we can help

I work with law firms, financial advisors, consultancies, and other professional services firms across South Denver, from Centennial and Littleton to Greenwood Village, Lone Tree, Highlands Ranch, and the Denver Tech Center.

If you want to know whether the Colorado AI Act applies to your business, and which tools and workflows are most likely to create risk, we can help you sort that out.

We provide AI compliance assessments that are practical, scoped, and designed for Colorado businesses. The goal is clarity: what AI is in use, which use cases are high-risk, what vendor documentation you need, and what steps to take next.

Book an AI Assessment →


Frequently Asked Questions About What AI Is Covered

Is all AI covered by the Colorado AI Act?

No. The law is focused on high-risk AI systems used in consequential decisions. Many common uses of AI, like drafting content, summarizing meetings, or answering general questions, are typically not high-risk unless they feed into decisions about employment, housing, lending, insurance, healthcare, education, or legal services.

What makes an AI tool “high-risk”?

A tool becomes high-risk when it makes, or is a substantial factor in making, a consequential decision. If the AI produces a score, recommendation, or output that a person relies on to make a decision in one of the covered areas, it may be high-risk even if a human has the final say.

Does a customer service chatbot count as high-risk?

Usually not, if it is just answering general questions. But if the chatbot determines eligibility, routes people differently based on their characteristics, or materially influences access to services or benefits, it can move into high-risk territory. The question is always whether it affects a consequential decision.

What if a human makes the final decision, not the AI?

Having a human in the loop does not automatically remove risk. If the AI output is a substantial factor in the final decision, the use case may still be treated as high-risk under the law. The question is how much the human relies on the AI output when making the call.

Do we have to adopt the NIST framework to comply?

Not necessarily. NIST AI RMF is one recognized framework that can help structure your approach to AI risk management. The law does not mandate a specific framework. What matters is being able to demonstrate reasonable care through a defensible, structured approach. NIST is a practical place to start.


Disclaimer: This article is provided for general informational purposes only and is not legal advice. Businesses should consult qualified legal counsel regarding their specific compliance obligations under SB 24-205 or any other applicable law.
