Agentic AI Incident Log + Shutdown Playbook
Log agent incidents, classify autonomy failures, document kill-switch decisions, and retain rollback evidence.
A short production-readiness screen for AI agents. The results indicate whether the current design belongs in production, a tightly controlled pilot, or nowhere near a live environment.
Answer every question based on the design you actually plan to deploy, not the control model you hope to add later.
If any of the critical blockers described below appear, the issue is structural, not cosmetic.
AI Controls Professional is the implementation layer behind this screen. It provides the agentic governance module, policy pack, vendor due diligence assets, impact assessment support, and the documentation needed when this free gate says your current design is not ready.
Most teams evaluate AI agents the wrong way. They focus on model quality, response quality, or a successful demo. That is not the decision you need to make before production. The real question is whether the agent is governable once it can call tools, write into systems, touch sensitive data, or chain actions over time. This gate is built around that operational question, not a marketing definition of agent readiness.
The scoring model intentionally concentrates on the control points that break first in live environments: action scope, tool access, data sensitivity, human override, kill switch ownership, usable logging, delegation discipline, structured testing, named ownership, and material business impact. Those are the places where a prototype becomes an operational risk. The tool treats them as first-order governance issues because production failures usually stem from weak control architecture, not from a lack of optimism.
It classifies your current design into one of four operational states and exposes the missing controls that matter most right now.
It does not generate a policy, matrix, risk register, or approval workflow. That remains paid ACT territory by design.
The numeric score runs from 0 to 120. Lower is better. But the numeric score is not the whole story. The gate also applies critical override logic when specific structural blockers are present. That means an agent can still land in a worse result state even if the raw score looks moderate. This is deliberate. Broad write access, no mandatory human approval, no tested kill switch, no usable audit trail, or material legal or customer impact are not issues you average away.
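As a rough illustration of how a numeric score and critical override logic can combine into one of four result states, the sketch below shows the general shape of this kind of triage. The state names, score bands, and blocker fields are assumptions for illustration only, not the gate's actual implementation.

```typescript
// Illustrative triage logic only: state names, score bands, and blocker
// fields are assumptions, not the gate's actual scoring code.

type GateState =
  | "production-ready"
  | "controlled-pilot"
  | "redesign-first"
  | "do-not-deploy";

interface GateAnswers {
  rawScore: number;                  // 0-120, lower is better
  broadWriteAccess: boolean;         // agent can write into systems beyond a narrow scope
  mandatoryHumanApproval: boolean;   // high-impact actions require human sign-off
  killSwitchTested: boolean;         // shutdown path exists and has been exercised
  usableAuditTrail: boolean;         // prompt, tool, action, and outcome logs are kept
  materialLegalOrCustomerImpact: boolean;
}

function classify(a: GateAnswers): GateState {
  // Critical override: any single structural blocker forces a worse state,
  // no matter how moderate the averaged numeric score looks.
  const structuralBlocker =
    a.broadWriteAccess ||
    !a.mandatoryHumanApproval ||
    !a.killSwitchTested ||
    !a.usableAuditTrail ||
    a.materialLegalOrCustomerImpact;

  if (structuralBlocker) {
    return a.materialLegalOrCustomerImpact ? "do-not-deploy" : "redesign-first";
  }

  // Placeholder score bands for the non-blocked path.
  if (a.rawScore <= 20) return "production-ready";
  if (a.rawScore <= 60) return "controlled-pilot";
  return "redesign-first";
}
```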
That approach aligns with the practical logic behind risk-based governance. ISO/IEC 42001 expects organizations to integrate AI-specific risk assessment, impact assessment, internal audit, monitoring, and documented information into the management system. NIST AI RMF organizes the work across GOVERN, MAP, MEASURE, and MANAGE, and its companion Playbook stresses prioritization based on impact and likelihood. The 2025 NIST Cyber AI Profile draft also reinforces inventory, data provenance, threat prioritization, and incident-aware operations. This tool borrows that operating logic at a lighter diagnostic layer rather than pretending a quick screen can replace formal governance work.
The result is intentionally blunt. If the agent can take irreversible actions, has broad write access, touches sensitive data, or runs without human approval and usable logs, the right answer is not "we will monitor it closely." The right answer is to stop, redesign, and document the control model before rollout.
This gate is meant to sit on top of your existing guide architecture. Use the pages below to explain the control rationale behind the score and connect to the full implementation toolkit.
The broad operating model for bounded autonomy, ownership, escalation, and board-level oversight.
The threat surface that explains why weak agent design becomes a production security problem quickly.
The cleanest frame for bounded autonomy and defense-in-depth containment.
The accountability and lifecycle logic for governing agents beyond experimentation.
If the deployment gate flags autonomy, tool-use, override, or escalation risk, use the incident log and shutdown playbook to define evidence and response structure before rollout.
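To make that evidence and response structure concrete, here is one possible shape for a single incident record, sketched as a logging layer might store it. Every field name and failure class below is an illustrative assumption, not the playbook's actual schema.

```typescript
// One possible shape for an agent incident record. All field names and
// failure classes are illustrative assumptions, not the playbook's schema.

type FailureClass =
  | "scope-violation"        // acted outside its approved action scope
  | "tool-misuse"            // called a tool in an unintended or unsafe way
  | "data-exposure"          // touched or leaked sensitive data
  | "unauthorized-action"    // skipped a required human approval
  | "runaway-chain";         // chained actions beyond the intended depth

interface AgentIncident {
  id: string;
  detectedAt: string;                // ISO 8601 timestamp
  agentName: string;
  failureClass: FailureClass;
  triggeringAction: string;          // what the agent did or attempted
  killSwitchInvoked: boolean;
  killSwitchDecision?: {
    decidedBy: string;               // named owner who made the call
    decidedAt: string;
    rationale: string;
  };
  rollbackEvidence: string[];        // links to logs, snapshots, or tickets
  status: "open" | "contained" | "rolled-back" | "closed";
}

// Example entry, purely hypothetical.
const example: AgentIncident = {
  id: "INC-0042",
  detectedAt: "2026-05-06T14:03:00Z",
  agentName: "invoice-triage-agent",
  failureClass: "unauthorized-action",
  triggeringAction: "Issued a refund above the approval threshold",
  killSwitchInvoked: true,
  killSwitchDecision: {
    decidedBy: "ops-on-call",
    decidedAt: "2026-05-06T14:10:00Z",
    rationale: "Write access could not be scoped down quickly enough",
  },
  rollbackEvidence: ["audit-log://run/8841", "ticket://FIN-2290"],
  status: "rolled-back",
};
```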
The gate screens the minimum governance conditions for deploying an AI agent beyond a sandbox. It checks action scope, tool access, data sensitivity, human override, kill switch, logging, delegation, testing, ownership, and business impact.
No. A lower score means obvious governance blockers are less severe. It does not provide safety assurance, compliance assurance, or production-fitness approval. It is a triage layer, not a formal approval process.
Because agentic systems can take actions with real operational consequences. If high-impact actions do not require mandatory human approval, the governance model is structurally weak even if other controls look mature.
Without prompt, tool, action, and outcome logs, you cannot investigate incidents, prove what happened, or demonstrate operational control to internal stakeholders, customers, or auditors.
No. Scoring runs entirely in the browser. The page does not save assessment answers, create an account, or send results to a backend.
Source and review note: This page was last reviewed on 6 May 2026 against the current Move78 public site baseline and relevant official or authoritative sources where laws, standards, frameworks, cybersecurity controls, product scope, pricing, support policy, or implementation guidance are discussed. It provides operational implementation guidance and product information only; it is not legal advice, tax advice, audit assurance, certification assurance, conformity-assessment advice, buyer-approval assurance, or security assurance. Validate legal, regulatory, contractual, tax, audit, and security decisions with qualified professionals.