Browser-based governance checks for shadow AI, MCP servers, OpenClaw agents, agentic AI deployments, and regulatory readiness. Every tool runs in your browser. No login. No data collection. No answers leave your device.
18 tools · All free · All browser-only

50-question self-assessment scoring AI governance maturity across 12 control domains. Covers ISO 42001, NIST AI RMF, and the Colorado AI Act. Generates an on-page RAG dashboard with per-domain scores.
Start assessment →

Quick triage for consequential AI use cases. Checks human review, appeal readiness, disclosure practices, data sensitivity, and likely Colorado AI Act relevance.
Start check →

Diagnose unmanaged AI tool usage, weak policy coverage, confidential data exposure, ownership gaps, and visibility blind spots across the enterprise.
Start check →

Identify unmanaged MCP servers, registry gaps, local or containerized deployments, weak auth patterns, poor logging, and offboarding blind spots.
Start check →

Structured approval decision for MCP server onboarding. Reviews maintainer trust, authorization model, tool scope, data boundary, logging, credential handling, and production readiness.
Start gate →

Evaluate credential issuance, scope control, secret storage, rotation discipline, revocation readiness, and ownership accountability for MCP connections.
Start check →

Comprehensive security posture check for OpenClaw deployments. Scores deployment exposure, identity hygiene, skill and MCP governance, logging, kill-switch readiness, and oversight.
Start assessment →

Score policy stance, shadow discovery, inventory completeness, credential exposure, sandbox availability, logging, containment readiness, and executive visibility.
Start check →

Structured approval decision for OpenClaw skill installations. Reviews provenance, sandbox testing, permission scope, rollback path, logging, and production fit.
Start gate →

Evaluate disable path, credential revocation, isolation capability, evidence preservation, forensics, rollback, escalation, and board reporting readiness.
Start check →

Score human override controls, tool access boundaries, logging depth, kill-switch availability, delegation patterns, and production readiness for AI agent deployments.
Start gate →

Review shutdown controls, credential revocation capability, network isolation, evidence retention, escalation procedures, and board-ready containment reporting.
Start check →

Review shadow agent visibility, OAuth grant ownership, scope control, revocation readiness, and attribution discipline for AI agent identity patterns.
Start check →

Review autonomy boundaries, tool permission controls, approval workflows, kill-switch readiness, logging depth, and high-impact oversight for prompt injection and excessive agency risk.
Start check →

Assess source trust, retrieval access controls, data leakage exposure, takedown readiness, traceability, and disclosure risk in RAG and vector database pipelines.
Start check →

Review AI inventory completeness, provenance tracking, vendor diligence, update control, traceability, and AI bill of materials readiness across the supply chain.
Start check →

Assess red teaming scenario design, scope coverage, test environment maturity, vendor evaluation discipline, remediation tracking, and deployment pressure governance.
Start check →

Quick vendor risk triage across transparency, subprocessor disclosure, data retention, security evidence, incident commitments, sensitive data exposure, and lock-in risk.
Start screen →

These free tools diagnose governance exposure across individual risk areas. The AI Controls Toolkit (ACT) provides the unified cross-framework implementation system that turns those findings into documented compliance.

Start Free Assessment · Compare Toolkit Tiers