Are we ready for AI governance?
Start with the governance question in front of you, run the smallest useful browser-based check, then convert the result into a guide, a download, an ACT evidence pack, or an implementation sprint route.
Do not start by browsing every tool. Pick the question that matches the buyer, board, audit, vendor, or engineering pressure you are facing this week.
Use when the team needs a maturity baseline across controls, owners, evidence, and decision discipline.
Go to readiness tools →
Use when unmanaged AI, browser tools, MCP servers, copilots, plugins, or informal workflows may be outside governance.
Go to visibility tools →
Use when the team needs first-pass triage before counsel, audit, procurement, or a formal implementation project.
Go to regulatory triage →
Use when AI features, suppliers, RAG systems, subprocessors, or model/tool supply chains need screening.
Go to vendor tools →
Use when agents can call tools, use credentials, execute skills, write files, access MCP servers, or require a kill switch.
Go to agentic tools →
Use when tool results need to become a decision record, board briefing, risk register, or 90-day evidence plan.
Go to result routing →
Most teams should not jump straight to the most technical check. Start with maturity and visibility, then move to regulation, vendor exposure, agent autonomy, and evidence conversion.
Score control maturity, ownership, oversight, and evidence gaps before selecting a deeper path.
Find unmanaged AI tools, shadow MCP servers, agents, vendors, and informal workflows.
Screen consequential AI exposure, supplier gaps, data disclosure, AIBOM, and red-team readiness.
Check MCP access, credentials, OpenClaw skills, autonomy, prompt injection, identity, and kill-switch readiness.
Turn results into a workbook, decision record, board summary, ACT pack, or implementation sprint scope.
The tools are not the end product. Their job is to make the next step obvious without forcing every visitor into the same paid offer.
Best when the team needs vocabulary, initial records, and one or two lightweight artifacts.
View downloads →
Best when the team needs editable starter controls, registers, and operating documents without a full platform.
View ACT-1 →
Best when ISO 42001, NIST AI RMF, vendor evidence, board reporting, and agentic AI governance must connect.
View ACT-2 →
Best when there is executive pressure, procurement review, regulator-facing exposure, or internal rollout urgency.
View sprint →
Start here when the buyer, board, or internal owner asks whether AI governance is mature enough to defend decisions and retain evidence.
50-question self-assessment scoring AI governance maturity across 12 control domains. Covers ISO 42001, NIST AI RMF, and the Colorado AI Act. Generates an on-page RAG (red/amber/green) dashboard with per-domain scores.
Start assessment →
Quick triage for consequential AI use cases. Checks human review, appeal readiness, disclosure practices, data sensitivity, and likely Colorado AI Act relevance.
Start check →
Use these tools before policy writing or control mapping. A team cannot govern what it has not identified, assigned, or reviewed.
Diagnose unmanaged AI tool usage, weak policy coverage, confidential data exposure, ownership gaps, and visibility blind spots across the enterprise.
Start check →
Identify unmanaged MCP servers, registry gaps, local or containerized deployments, weak auth patterns, poor logging, and offboarding blind spots.
Start check →
Use these checks when MCP servers, tool registries, credentials, scopes, secrets, and revocation paths need accountable approval.
Structured approval decision for MCP server onboarding. Reviews maintainer trust, authorization model, tool scope, data boundary, logging, credential handling, and production readiness.
Start gate →
Evaluate credential issuance, scope control, secret storage, rotation discipline, revocation readiness, and ownership accountability for MCP connections.
Start check →
Use these tools when OpenClaw, NemoClaw, skills, local deployments, incident handling, and containment controls are becoming operational risks.
Comprehensive security posture check for OpenClaw deployments. Scores deployment exposure, identity hygiene, skill and MCP governance, logging, kill-switch readiness, and oversight.
Start assessment →
Score policy stance, shadow discovery, inventory completeness, credential exposure, sandbox availability, logging, containment readiness, and executive visibility.
Start check →
Structured approval decision for OpenClaw skill installations. Reviews provenance, sandbox testing, permission scope, rollback path, logging, and production fit.
Start gate →
Evaluate disable path, credential revocation, isolation capability, evidence preservation, forensics, rollback, escalation, and board reporting readiness.
Start check →
Use these checks when agents can take actions, call tools, use identity grants, access data, or operate with limited human intervention.
Deployment gate for agentic AI systems. Reviews autonomy level, tool access, human override, evaluation evidence, rollback readiness, monitoring, and operational owner assignment.
Start gate →
Assess manual disable paths, escalation rules, owner authority, credential revocation, containment testing, evidence retention, and rogue-agent response readiness.
Start check →
Review agent identity boundaries, delegated OAuth grants, privilege scope, approval discipline, revocation ability, monitoring, and owner accountability.
Start check →
Use these tools when AI risk is moving through prompts, RAG stores, vectors, third-party models, AIBOM components, vendor claims, or red-team findings.
Evaluate prompt injection exposure, tool misuse, excessive agency, user-content handling, instruction hierarchy, and control readiness.
Start check →
Assess RAG and vector-store exposure across source trust, embedding scope, sensitive data leakage, retrieval controls, logging, and disclosure practices.
Start check →
Check model, dataset, tool, vendor, component, and dependency visibility needed for AI bill-of-materials evidence and supplier governance.
Start check →
Assess red-team scenario design, scope coverage, test-environment maturity, vendor evaluation discipline, remediation tracking, and deployment-pressure governance.
Start check →
Quick vendor risk triage across transparency, subprocessor disclosure, data retention, security evidence, incident commitments, sensitive data exposure, and lock-in risk.
Start screen →
Start with the AI Governance Readiness Assessment if you do not yet know the strongest gap. Use the shadow AI and vendor checks when unmanaged tools or suppliers are the concern. Use the MCP, OpenClaw, and agentic AI tools when autonomous agents, tool access, credentials, or kill-switch readiness are the immediate risk.
The tools are browser-based assessment flows with no login requirement, intended for local triage and planning. Do not enter secrets, regulated personal data, credentials, confidential customer data, or information that your organization has not approved for assessment use.
Use the result as a routing decision. Low-maturity or early discovery results should go to a related guide or free download. Teams that need editable starter controls should review ACT-1. Teams with cross-framework, vendor, board, agentic AI, or multi-jurisdiction exposure should review ACT-2 or the implementation sprint.
No. The tools provide operational triage and implementation planning support only. They do not provide legal advice, tax advice, audit assurance, certification assurance, conformity-assessment advice, buyer-approval assurance, or security assurance.
Move to ACT-1 when the team needs editable starter governance artifacts rather than another diagnostic result. Move to ACT-2 when the team needs cross-framework mapping, management evidence, vendor diligence, board reporting, agentic AI governance, or a reusable implementation evidence system.
Consultants and vCISOs can use the tools as structured discovery aids, but they should validate outputs against client context, contracts, applicable law, and professional standards. The tools should not be represented as audit evidence, legal opinion, certification advice, or proof of compliance.
Use the free tools for diagnosis. Use downloads for first evidence records. Use ACT-1 or ACT-2 when the work needs editable implementation artifacts, cross-framework mapping, and management-ready records.
View Free Downloads
Compare ACT Tiers
Source and review note: This page was last reviewed on 6 May 2026 against the current Move78 public site baseline and relevant official or authoritative sources where laws, standards, frameworks, cybersecurity controls, product scope, pricing, support policy, or implementation guidance are discussed. It provides operational implementation guidance and product information only; it is not legal advice, tax advice, audit assurance, certification assurance, conformity-assessment advice, buyer-approval assurance, or security assurance. Validate legal, regulatory, contractual, tax, audit, and security decisions with qualified professionals.