Colorado AI Act · EU AI Act (High-Risk) · ISO 42001 · NIST AI RMF · OpenClaw · Agentic AI — organized into editable implementation artifacts

Free AI Governance Tools

Start with the governance question in front of you, run the smallest useful browser-based check, then route the result to a guide, free download, ACT evidence pack, or implementation sprint.

18 tools · No login · Browser-side assessment flow · Operational triage, not legal advice
Browser-only processing · No account required · Use for first-pass triage
Start here

Choose the AI governance question you need to answer

Do not start by browsing every tool. Pick the question that matches the buyer, board, audit, vendor, or engineering pressure you are facing this week.

Are we ready for AI governance?

Use when the team needs a maturity baseline across controls, owners, evidence, and decision discipline.

Go to readiness tools →

Do we know where AI is being used?

Use when unmanaged AI, browser tools, MCP servers, copilots, plugins, or informal workflows may be outside governance.

Go to visibility tools →

Which regulation or framework might apply?

Use when the team needs first-pass triage before counsel, audit, procurement, or a formal implementation project.

Go to regulatory triage →

Are vendors creating hidden AI risk?

Use when AI features, suppliers, RAG systems, subprocessors, or model/tool supply chains need screening.

Go to vendor tools →

Are agents or OpenClaw creating runtime risk?

Use when agents can call tools, use credentials, execute skills, write files, access MCP servers, or require a kill switch.

Go to agentic tools →

What should management see?

Use when tool results need to become a decision record, board briefing, risk register, or 90-day evidence plan.

Go to result routing →
Recommended sequence

Run the tools in the order in which evidence normally forms

Most teams should not jump straight to the most technical check. Start with maturity and visibility, then move to regulation, vendor exposure, agent autonomy, and evidence conversion.

1

Baseline readiness

Score control maturity, ownership, oversight, and evidence gaps before selecting a deeper path.

2

Inventory and visibility

Find unmanaged AI tools, shadow MCP servers, agents, vendors, and informal workflows.

3

Regulatory and vendor triage

Screen consequential AI exposure, supplier gaps, data disclosure, AIBOM, and red-team readiness.

4

Agentic runtime controls

Check MCP access, credentials, OpenClaw skills, autonomy, prompt injection, identity, and kill-switch readiness.

5

Evidence route

Turn results into a workbook, decision record, board summary, ACT pack, or implementation sprint scope.

After the result

Use the result to choose the next evidence path

The tools are not the end product. Their job is to make the next step obvious without forcing every visitor into the same paid offer.

Low maturity

Use guides and free downloads

Best when the team needs vocabulary, initial records, and one or two lightweight artifacts.

View downloads →
Starter implementation

Review ACT-1 Starter

Best when the team needs editable starter controls, registers, and operating documents without a full platform.

View ACT-1 →
Cross-framework exposure

Review ACT-2 Professional

Best when ISO 42001, NIST AI RMF, vendor evidence, board reporting, and agentic AI governance must connect.

View ACT-2 →
Urgent or complex

Scope an implementation sprint

Best when there is executive pressure, procurement review, regulator-facing exposure, or internal rollout urgency.

View sprint →

Readiness and regulatory triage (2 tools)

Start here when the buyer, board, or internal owner asks whether AI governance is mature enough to defend decisions and retain evidence.

Governance

AI Governance Readiness Assessment

50-question self-assessment scoring AI governance maturity across 12 control domains. Covers ISO 42001, NIST AI RMF, and the Colorado AI Act. Generates an on-page red/amber/green (RAG) dashboard with per-domain scores.

⏲ ~15 min · ☑ 50 questions
Best next step: Use ACT-1 for starter controls or ACT-2 if multiple frameworks and executive evidence are in scope.
Start assessment →
Trigger

Consequential AI Trigger Check

Quick triage for consequential AI use cases. Checks human review, appeal readiness, disclosure practices, data sensitivity, and likely Colorado AI Act relevance.

⏲ ~4 min · ☑ Scored
Best next step: Use the FRIA Lite and Colorado forms download if the result indicates impact-assessment or consumer-rights evidence is needed.
Start check →

Inventory, shadow AI, and MCP visibility (2 tools)

Use these tools before policy writing or control mapping. A team cannot govern what it has not identified, assigned, or reviewed.

Discovery

Shadow AI Exposure Check

Diagnose unmanaged AI tool usage, weak policy coverage, confidential data exposure, ownership gaps, and visibility blind spots across the enterprise.

⏲ ~4 min · ☑ 9 questions
Best next step: Use the Shadow AI Discovery Workbook or AI system inventory guide to create the first evidence register.
Start check →
MCP

Shadow MCP Exposure Check

Identify unmanaged MCP servers, registry gaps, local or containerized deployments, weak auth patterns, poor logging, and offboarding blind spots.

⏲ ~4 min · ☑ Scored
Best next step: Use ACT-2 when MCP inventory, tool approval, credentials, and agentic governance need to be managed together.
Start check →

MCP approval and credential governance (2 tools)

Use these checks when MCP servers, tool registries, credentials, scopes, secrets, and revocation paths need accountable approval.

MCP

MCP Server Approval Gate

Structured approval decision for MCP server onboarding. Reviews maintainer trust, authorization model, tool scope, data boundary, logging, credential handling, and production readiness.

⏲ ~4 min · ☑ 12 questions
Best next step: Use the result as an approval record and route repeated MCP use cases into ACT-2 controls.
Start gate →
Credentials

MCP Credential & Scope Governance Check

Evaluate credential issuance, scope control, secret storage, rotation discipline, revocation readiness, and ownership accountability for MCP connections.

⏲ ~4 min · ☑ Scored
Best next step: Use ACT-2 when credential governance, runtime boundary evidence, and kill-switch readiness must be retained.
Start check →

OpenClaw deployment governance (4 tools)

Use these tools when OpenClaw, NemoClaw, skills, local deployments, incident handling, and containment controls are becoming operational risks.

OpenClaw

OpenClaw Security Readiness Assessment

Comprehensive security posture check for OpenClaw deployments. Scores deployment exposure, identity hygiene, skill and MCP governance, logging, kill-switch readiness, and oversight.

⏲ ~5 min · ☑ Scored
Best next step: Route hardening gaps to M78Armor and governance evidence gaps to ACT-2.
Start assessment →
OpenClaw

OpenClaw Shadow Deployment Governance Check

Score policy stance, shadow discovery, inventory completeness, credential exposure, sandbox availability, logging, containment readiness, and executive visibility.

⏲ ~5 min · ☑ Scored
Best next step: Use the result to create an OpenClaw inventory and escalation record before production expansion.
Start check →
Skills

OpenClaw Skill Approval Gate

Structured approval decision for OpenClaw skill installations. Reviews provenance, sandbox testing, permission scope, rollback path, logging, and production fit.

⏲ ~4 min · ☑ Scored
Best next step: Use the decision as a lightweight approval record and link repeated skill approvals into ACT-2 governance workflows.
Start gate →
Incident

OpenClaw Incident Containment Readiness Check

Evaluate disable path, credential revocation, isolation capability, evidence preservation, forensics, rollback, escalation, and board reporting readiness.

⏲ ~5 min · ☑ Scored
Best next step: Use the Agentic AI Incident Log and Shutdown Playbook to document containment and rollback evidence.
Start check →

Agentic AI runtime governance (3 tools)

Use these checks when agents can take actions, call tools, use identity grants, access data, or operate with limited human intervention.

Agentic AI

Agentic AI Deployment Gate

Deployment gate for agentic AI systems. Reviews autonomy level, tool access, human override, evaluation evidence, rollback readiness, monitoring, and operational owner assignment.

⏲ ~5 min · ☑ Scored
Best next step: Use ACT-2 when deployment approvals need to connect to agent inventory, oversight, incidents, and management reporting.
Start gate →
Runtime

Kill Switch & Rogue Agent Readiness Check

Assess manual disable paths, escalation rules, owner authority, credential revocation, containment testing, evidence retention, and rogue-agent response readiness.

⏲ ~5 min · ☑ Scored
Best next step: Use the Agentic AI Incident Log and Shutdown Playbook if containment decisions need evidence records.
Start check →
Identity

AI Agent Identity & OAuth Grant Exposure Check

Review agent identity boundaries, delegated OAuth grants, privilege scope, approval discipline, revocation ability, monitoring, and owner accountability.

⏲ ~4 min · ☑ Scored
Best next step: Route privilege and identity weaknesses into ACT-2 runtime boundary and vendor evidence records.
Start check →

AI security, vendor, data, and supply-chain risk (5 tools)

Use these tools when AI risk is moving through prompts, RAG stores, vectors, third-party models, AIBOM components, vendor claims, or red-team findings.

Security

Prompt Injection & Excessive Agency Governance Check

Evaluate prompt injection exposure, tool misuse, excessive agency, user-content handling, instruction hierarchy, and control readiness.

⏲ ~5 min · ☑ Scored
Best next step: Use ACT-2 when prompt injection controls must be documented across governance, security, and incident response.
Start check →
Data

RAG / Vector Trust & Data Disclosure Check

Assess RAG and vector-store exposure across source trust, embedding scope, sensitive data leakage, retrieval controls, logging, and disclosure practices.

⏲ ~5 min · ☑ Scored
Best next step: Use the result to create data-boundary records and vendor evidence requests before expanding RAG use.
Start check →
AIBOM

AI Supply Chain / AIBOM Readiness Check

Check model, dataset, tool, vendor, component, and dependency visibility needed for AI bill-of-materials evidence and supplier governance.

⏲ ~5 min · ☑ Scored
Best next step: Use the AIBOM guide and ACT-2 if supplier traceability must connect to inventory and risk records.
Start check →
Security

AI Red Teaming & Vendor Evaluation Gate

Assess red teaming scenario design, scope coverage, test environment maturity, vendor evaluation discipline, remediation tracking, and deployment pressure governance.

⏲ ~5 min · ☑ Scored
Best next step: Use ACT-2 if findings need remediation tracking, supplier evidence, and management reporting.
Start check →
Vendor

AI Vendor Pre-Screen Lite

Quick vendor risk triage across transparency, subprocessor disclosure, data retention, security evidence, incident commitments, sensitive data exposure, and lock-in risk.

⏲ ~4 min · ☑ 7 questions
Best next step: Use the AI Vendor Due Diligence Pack when suppliers need consistent approval evidence.
Start screen →
FAQ

Questions before using the tools

Which Move78 free AI governance tool should I start with?

Start with the AI Governance Readiness Assessment if you do not yet know the strongest gap. Use the shadow AI and vendor checks when unmanaged tools or suppliers are the concern. Use the MCP, OpenClaw, and agentic AI tools when autonomous agents, tool access, credentials, or kill-switch readiness are the immediate risk.

Do the tools store my answers?

The tools are positioned as browser-based assessment flows with no login requirement. They are intended for local triage and planning. Do not enter secrets, regulated personal data, credentials, confidential customer data, or information that your organization has not approved for assessment use.

What should I do after running a tool?

Use the result as a routing decision. Low-maturity or early discovery results should go to a related guide or free download. Teams that need editable starter controls should review ACT-1. Teams with cross-framework, vendor, board, agentic AI, or multi-jurisdiction exposure should review ACT-2 or the implementation sprint.

Are these tools legal, audit, or security advice?

No. The tools provide operational triage and implementation planning support only. They do not provide legal advice, tax advice, audit assurance, certification assurance, conformity-assessment advice, buyer-approval assurance, or security assurance.

When should a team move from free tools to ACT-1 or ACT-2?

Move to ACT-1 when the team needs editable starter governance artifacts rather than another diagnostic result. Move to ACT-2 when the team needs cross-framework mapping, management evidence, vendor diligence, board reporting, agentic AI governance, or a reusable implementation evidence system.

Can consultants or vCISOs use these tools with clients?

Consultants and vCISOs can use the tools as structured discovery aids, but they should validate outputs against client context, contracts, applicable law, and professional standards. The tools should not be represented as audit evidence, legal opinion, certification advice, or proof of compliance.

Tools show the gap. Evidence records close it.

Use the free tools for diagnosis. Use downloads for first evidence records. Use ACT-1 or ACT-2 when the work needs editable implementation artifacts, cross-framework mapping, and management-ready records.

View Free Downloads · Compare ACT Tiers

Source and review note: This page was last reviewed on 6 May 2026 against the current Move78 public site baseline and relevant official or authoritative sources where laws, standards, frameworks, cybersecurity controls, product scope, pricing, support policy, or implementation guidance are discussed. It provides operational implementation guidance and product information only; it is not legal advice, tax advice, audit assurance, certification assurance, conformity-assessment advice, buyer-approval assurance, or security assurance. Validate legal, regulatory, contractual, tax, audit, and security decisions with qualified professionals.