Why Every AI Governance Framework Starts with an Inventory
Here's a question that should make any CISO uncomfortable: how many AI systems are running inside your organization right now? Not approximately. Exactly. If you can't answer that with a specific number and a list of names, you don't have an AI governance program. You have an intention.
Every major AI governance framework treats the AI system inventory as the foundational artifact. ISO/IEC 42001:2023 requires defining the scope of your AI Management System under Clause 4.3, which means identifying every AI system that falls within that scope. NIST AI RMF expects organizations to establish context about their AI systems under GOVERN 1.1 before they can map, measure, or manage risk. The Colorado AI Act (SB 24-205 as amended) doesn't use the word "inventory," but deployer obligations under C.R.S. 6-1-1702 require documented governance of high-risk AI systems. You can't document governance of something you haven't cataloged.
The logic is straightforward. Risk classification requires knowing what exists. Controls mapping requires knowing what's classified. Audit evidence requires knowing what's controlled. Skip the inventory, and the entire governance chain collapses. That's not a theoretical concern. I've seen organizations attempt ISO 42001 certification with a vague awareness that "we use some AI tools" and then spend three months just figuring out what was in scope. Those three months weren't implementation. They were expensive discovery.
Not sure where you stand? The free AI governance readiness assessment scores your organization across five governance domains in about 15 minutes. It won't replace the inventory, but it'll show you which gaps are most urgent.
Run the free assessment
What Counts as an AI System (and What Doesn't)
Scoping is where most teams get stuck. The EU AI Act defines an AI system under Article 3(1) as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions. The Colorado AI Act uses a similar functional definition under C.R.S. 6-1-1701.
In practice, the test is simpler than the legal language suggests. Does the system use machine learning, large language models, or autonomous decision logic to produce outputs that influence business decisions or affect people? If yes, it's in scope until you've justified excluding it.
| Tool | AI System? | Why / Why Not |
|---|---|---|
| Customer support chatbot (GPT-4o backend) | Yes | LLM-based, generates responses that affect customer experience |
| Resume screening tool (ML scoring) | Yes | ML model making consequential decisions about employment |
| GitHub Copilot | Yes | LLM-based code generation, processes source code |
| RPA bot moving data between fields | No | Rule-based automation, no inference or learning |
| Excel macro calculating totals | No | Deterministic formula, no ML component |
| CRM's built-in lead scoring | Yes | ML-based prediction, even if embedded in a larger platform |
That last row catches people off guard. Your CRM vendor's AI features count. Your cloud provider's anomaly detection counts. Embedded AI doesn't get a free pass because it's bundled inside something else. If the AI component influences decisions or outputs that affect people, it belongs in the inventory.
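The scoping test above can be sketched as a small triage function. This is an illustrative simplification, not legal logic: the field names and the rule are ours, condensing the functional definitions quoted earlier.

```python
def in_inventory_scope(uses_ml_or_llm: bool,
                       autonomous_decision_logic: bool,
                       affects_people_or_decisions: bool) -> bool:
    """A tool is in scope until excluded: any inference capability
    plus any influence on decisions or people puts it on the list."""
    return (uses_ml_or_llm or autonomous_decision_logic) and affects_people_or_decisions

# Rows from the table above, expressed as triage calls:
examples = {
    "Customer support chatbot":  in_inventory_scope(True, False, True),   # Yes
    "RPA bot moving data":       in_inventory_scope(False, False, True),  # No: rule-based
    "Excel macro":               in_inventory_scope(False, False, False), # No: deterministic
    "CRM built-in lead scoring": in_inventory_scope(True, False, True),   # Yes: embedded AI counts
}
```

The asymmetry is deliberate: a "Yes" puts the tool on the list, while a "No" still needs a documented justification before you drop it.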
The Shadow AI Problem — What You Don't Know Can Hurt You
Could you name every AI tool your engineering team used last week? What about marketing? What about the intern who pasted customer data into ChatGPT to draft a summary?
Shadow AI is AI tooling that employees adopt without formal IT approval or governance oversight. It isn't a fringe problem. A 2024 Software AG survey of 6,000 knowledge workers across multiple countries found that 50% were using unapproved AI tools at work, and 48% said they'd continue even if explicitly banned. By late 2025, an UpGuard report put the figure even higher: over 80% of workers, including nearly 90% of security professionals, were using unapproved AI tools. And a CybSafe/National Cybersecurity Alliance study found 38% of employees sharing confidential data with AI platforms without authorization.
The template's fifth example row demonstrates exactly this scenario. An autonomous agent called "OpenClaw" is discovered on three employee laptops. Nobody knows who installed it, what data it accesses, or what it sends externally. There's no vendor contract, no data processing agreement, no risk assessment, and no responsible owner. Status: Shadow / Unverified. Risk level: Critical.
That's not hypothetical. It's Tuesday.
The compliance exposure from Shadow AI is concrete. ISO 42001 Clause 4.3 requires defining the AIMS scope, which means you need to know what's in it. If unapproved AI systems operate outside your scope definition, your certification is based on incomplete information. The Colorado AI Act requires deployers of high-risk AI to maintain documented risk management policies under C.R.S. 6-1-1702. You can't manage risk for systems you don't know exist. And according to IBM's 2025 Cost of a Data Breach report, organizations with high Shadow AI usage paid roughly $670,000 more per breach than those with low or no Shadow AI.
Already discovered Shadow AI? The Colorado AI Act requires deployers of high-risk AI to maintain documented governance of every system in operation (C.R.S. 6-1-1702). An inventory isn't a nice-to-have. Enforcement begins June 30, 2026.
Check your Colorado AI Act obligations
The 11-Column Schema — What to Track and Why
The free template uses an 11-column schema organized into three functional clusters: identification, classification, and accountability. Each column answers a specific governance question, not just a data-capture question. Here's what they are and why each one matters.
| Column | What It Captures | Governance Question It Answers |
|---|---|---|
| A: System ID | Unique identifier (SYS-001, SYS-002) | Can we reference this system unambiguously across documents? |
| B: System Name | Name of the AI system or tool | What is it called internally? |
| C: Vendor / Provider | Company providing the AI system | Who is the responsible third party? Do we have a contract and DPA? |
| D: Description / Purpose | What the system does and its business purpose | Why does this system exist and what decisions does it influence? |
| E: Deployment Status | In Development, Pilot, Production, Retired, Shadow / Unverified | Is this system currently affecting real users or data? |
| F: Risk Level | Low, Medium, High, Critical, Not Assessed | How urgently does this system need governance controls? |
| G: Data Categories | Types of data processed (PII, financial, health) | What data privacy obligations does this trigger? |
| H: Responsible Owner | Person or team accountable | Who signs off on governance decisions for this system? |
| I: Deployment Date | Date system went into production or pilot | How long has this system been operating without governance? |
| J: Next Review Date | Scheduled governance review date | When does someone formally re-evaluate this system? |
| K: Notes | Additional context, known issues, integration points | What else does the governance team need to know? |
The template comes pre-populated with five realistic example rows, including the OpenClaw Shadow AI scenario. Fifty blank rows follow with System IDs pre-assigned from SYS-006 to SYS-055. Drop-down validation on columns E and F ensures consistent data entry across the organization. The sheet is protected with a blank password, so headers and structure stay intact while all input cells remain editable.
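If the register ever leaves Excel (say, exported to CSV for a compliance pipeline), the same drop-down constraints on columns E and F can be enforced in code. A minimal stdlib sketch; the allowed values mirror the template's lists, while the function name and CSV layout are our assumptions.

```python
import csv
import io

# Allowed values mirror the template's drop-downs on columns E and F.
DEPLOYMENT_STATUSES = {"In Development", "Pilot", "Production", "Retired",
                       "Shadow / Unverified"}
RISK_LEVELS = {"Low", "Medium", "High", "Critical", "Not Assessed"}

def validate_rows(reader):
    """Yield (system_id, problems) for rows that break the E/F constraints."""
    for row in reader:
        problems = []
        if row["Deployment Status"] not in DEPLOYMENT_STATUSES:
            problems.append(f"bad status: {row['Deployment Status']!r}")
        if row["Risk Level"] not in RISK_LEVELS:
            problems.append(f"bad risk level: {row['Risk Level']!r}")
        if problems:
            yield row["System ID"], problems

sample = """System ID,Deployment Status,Risk Level
SYS-001,Production,High
SYS-005,Shadow / Unverified,Critical
SYS-006,Live,Severe
"""
issues = dict(validate_rows(csv.DictReader(io.StringIO(sample))))
# Only SYS-006 is flagged: "Live" and "Severe" aren't in the controlled vocabularies.
```

Free-text values like "Live" or "Severe" are exactly the inconsistency the drop-downs exist to prevent; a check like this catches them when data arrives from outside the protected sheet.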
No login. No email. Excel format, compatible with Excel 2016+, Google Sheets, and LibreOffice.
How to Run the Inventory — A Practical Process
The inventory isn't a one-afternoon project. It's a structured process with three distinct discovery channels, each of which catches a different category of AI system. Run all three. Skipping one leaves gaps that surface during audits.
Start with procurement and finance records
Your accounts payable system, SaaS subscription manager, and corporate credit card statements already contain most of what you need. Search for vendor names associated with AI services: OpenAI, Anthropic, Google Cloud AI, AWS Bedrock, Azure AI, HuggingFace, Cohere, and any vertical AI vendor in your industry. This channel catches everything with a contract and a payment trail. It won't catch free-tier tools or open-source deployments, but it's the fastest starting point and produces the most complete initial list for most organizations.
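The payment-trail search is easy to automate against an accounts payable export. A sketch under assumptions: the keyword list comes from the vendors named above, while the CSV columns and function name are hypothetical placeholders for whatever your AP system produces.

```python
import csv
import io

# Vendor name fragments from the list above; extend with vertical AI vendors
# relevant to your industry.
AI_VENDOR_KEYWORDS = ["openai", "anthropic", "google cloud ai", "bedrock",
                      "azure ai", "huggingface", "cohere"]

def flag_ai_vendors(rows):
    """Return payee names from an AP export that match a known AI vendor."""
    hits = set()
    for row in rows:
        payee = row["Payee"].lower()
        if any(keyword in payee for keyword in AI_VENDOR_KEYWORDS):
            hits.add(row["Payee"])
    return hits

export = """Payee,Amount
OpenAI LLC,1200.00
Acme Stationery,84.50
Anthropic PBC,900.00
"""
hits = flag_ai_vendors(csv.DictReader(io.StringIO(export)))
# Flags "OpenAI LLC" and "Anthropic PBC"; the stationery vendor passes through.
```

Each hit becomes a candidate row in the inventory, with the vendor name pre-filling column C.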
Survey department heads and engineering leads
Send a five-question email to every department head and technical lead. Keep it short:

1. What AI tools does your team currently use?
2. Which were formally approved by IT?
3. Which process company data?
4. Which influence decisions about customers or employees?
5. Are there any AI features embedded in tools you use daily (e.g., CRM lead scoring, email spam filtering with ML)?

This channel catches tools adopted with managerial awareness but without central IT tracking. The embedded AI question in item five is critical because most teams don't think of their CRM's built-in AI as a separate system.
Scan for Shadow AI
This is the channel that catches what nobody wanted to tell you about.

- Endpoint detection: check installed applications on corporate devices for known AI tools (ChatGPT desktop app, Claude desktop, local LLM runners like LM Studio or Ollama).
- Network traffic: monitor outbound API calls to api.openai.com, api.anthropic.com, generativelanguage.googleapis.com, and similar endpoints.
- Browser extension audit: review installed extensions for AI assistants, writing tools, and code helpers.

None of these scans requires special AI-specific tooling. Your existing SIEM, EDR, or network monitoring stack can run these queries today.
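The network-traffic check reduces to matching outbound hostnames against a watchlist. A hedged sketch: the hostnames are the real API endpoints named above, but the one-line "device hostname" log format is an assumption; adapt the parsing to whatever your SIEM or DNS logs actually emit.

```python
# Watchlist of AI API endpoints named in the text; extend as needed.
AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def shadow_ai_hits(dns_log_lines):
    """Map each AI API host seen in outbound DNS/proxy logs to the set of
    devices that contacted it. Log format assumed: 'DEVICE hostname'."""
    hits = {}
    for line in dns_log_lines:
        device, _, host = line.strip().partition(" ")
        if host in AI_API_HOSTS:
            hits.setdefault(host, set()).add(device)
    return hits

log = [
    "LAPTOP-042 api.openai.com",
    "LAPTOP-017 example.com",
    "LAPTOP-042 api.anthropic.com",
]
hits = shadow_ai_hits(log)
# LAPTOP-042 surfaces on two AI endpoints; example.com traffic is ignored.
```

A device that talks to an AI endpoint no approved system uses is exactly the kind of lead that produced the OpenClaw row: it goes into the inventory as Shadow / Unverified until someone claims it.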
Common Mistakes That Wreck Your Inventory
The first mistake is scoping too narrowly. Teams that only inventory customer-facing AI systems miss the internal tools that process employee data, generate financial forecasts, or assist with code reviews. Internal doesn't mean low-risk. A resume screening tool used internally makes consequential decisions about people's livelihoods. It's high-risk under the Colorado AI Act regardless of whether it faces an external customer.
Second mistake: treating deployment status as binary. "We use it" or "we don't" misses the spectrum. The template provides five options for a reason. A system in Pilot processes real data but might lack production-grade controls. A Retired system might still have data retention obligations. A Shadow / Unverified system needs immediate investigation. If you collapse these into a binary, you lose the granularity that drives governance decisions.
Third: marking everything as "Not Assessed" and never revisiting. That field is a temporary state, not a permanent classification. If 40 of your 50 systems still say "Not Assessed" six months later, you don't have a risk classification. You have a spreadsheet.
Fourth: making the inventory a one-time project. An inventory you update once a year isn't an inventory. It's a snapshot. AI systems get adopted, modified, and decommissioned continuously. The inventory needs a defined update cadence, ideally quarterly, with a trigger mechanism for new system onboarding. Column J (Next Review Date) exists specifically for this.
Fifth, and this one is pervasive: forgetting third-party embedded AI. Your CRM's AI-powered lead scoring, your email security vendor's ML-based phishing detection, your cloud provider's anomaly detection, the AI features your HR platform quietly enabled in the last update. These are all AI systems. They all process your data. They all need to be in the inventory.
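The review-cadence trigger from the fourth mistake is trivial to automate once column J is populated. A minimal sketch, assuming the inventory has been exported as (System ID, Next Review Date) pairs; the function name and data shape are ours.

```python
from datetime import date

def overdue_reviews(inventory, today):
    """Return System IDs whose Next Review Date (column J) has passed."""
    return [sys_id for sys_id, next_review in inventory if next_review < today]

inventory = [
    ("SYS-001", date(2025, 9, 30)),
    ("SYS-002", date(2026, 3, 31)),
]
flagged = overdue_reviews(inventory, today=date(2026, 1, 15))
# → ['SYS-001']: its scheduled review date has already passed.
```

Run something like this monthly and the "snapshot" failure mode disappears: overdue rows become a work queue instead of stale data.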
From Inventory to Governance — What Comes Next
The inventory answers one question: what AI systems do we have? That's necessary, but it's not sufficient. Once you know what exists, you need to determine what governance obligations each system triggers, which controls apply across which frameworks, and what evidence artifacts you need to produce.
The free template deliberately stops at the inventory boundary. It doesn't include framework mapping columns (ISO 42001 clauses, NIST AI RMF subcategories), risk scoring formulas, Colorado high-risk determination logic, or a cross-framework crosswalk. Those capabilities are where the AI Compliance Toolkit (ACT) picks up.
| Capability | Free Inventory Template | ACT Tier 1 ($399) |
|---|---|---|
| AI system catalog with risk classification | Yes | Yes |
| Cross-framework crosswalk (ISO + NIST + Colorado) | No | Yes |
| Gap analysis with severity ratings | No | Yes |
| Risk register with pre-loaded AI risks | No | Yes |
| Maturity dashboard | No | Yes |
The inventory is step one. If you've completed it, you're already ahead of the majority of organizations that haven't started. The ISO 42001 implementation guide and NIST AI RMF playbook cover the next stages in detail.
Download the Free Template
The template is a single Excel workbook with four tabs: Welcome (product overview and how-to), AI System Inventory (the 11-column register with 5 examples and 50 blank rows), Disclaimer & Legal (9-section terms of use), and Support (FAQs, contact information, and links to Move78 resources).
Format: .xlsx (Excel). Compatibility: Microsoft Excel 2016 and later, Google Sheets, LibreOffice Calc. Protection: Sheet protected with a blank password. Headers and structure locked; all input cells unlocked and editable. Unprotect if you need to add columns or extend rows.
No login. No email. No strings. See all free downloads from Move78