A short screening tool for unmanaged AI tool usage across the enterprise. It shows whether procurement records, policy, visibility, and ownership are strong enough to keep hidden AI use from turning into a governance problem.
3–4 minutes · 9 scored questions · No login
Checks inventory quality, unmanaged public-tool access, acceptable-use coverage, confidential-data exposure, usage visibility, ownership, and incident history.
Escalates exposure when structural blind spots exist even if the numeric score looks moderate.
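This score-plus-override behavior can be sketched as follows. Everything here is an illustrative assumption — the band cut-offs, the `criticalTriggers` field, and the exposure labels are hypothetical and are not the tool's actual scoring model; only the 0–108 range and the "override escalates a moderate score" idea come from this page.

```typescript
// Hypothetical sketch of score-plus-override exposure rating.
// Thresholds and names are illustrative assumptions, not the real model.
type Exposure = "Low exposure" | "Moderate exposure" | "High exposure";

interface ScreenResult {
  score: number;            // sum over the 9 scored questions, 0-108
  criticalTriggers: number; // count of structural blind spots hit
}

function classify(r: ScreenResult): Exposure {
  // Numeric banding first (hypothetical cut-offs).
  let band: Exposure =
    r.score <= 36 ? "Low exposure" :
    r.score <= 72 ? "Moderate exposure" :
    "High exposure";
  // Override: any critical trigger escalates to high exposure,
  // even when the numeric score alone looks moderate.
  if (r.criticalTriggers > 0) {
    band = "High exposure";
  }
  return band;
}
```

Under these assumed cut-offs, a mid-range score with one structural blind spot would still report high exposure, which is the behavior the line above describes.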
Stops before any inventory workbook, policy draft, or remediation tracker so it complements the paid AI Compliance Toolkit instead of replacing it.
Informational only. Not legal advice. This tool does not determine compliance with any law, regulation, or standard.
Result state A: Low exposure
- Final score: 0 out of 108
- Critical triggers: 0 override conditions hit
- Exposure status: Low exposure (operational classification)
Top 5 likely blind spots
The tool surfaces the highest-value gaps first so the next step is obvious.
Escalation warnings
Shadow AI becomes expensive when nobody can see it.
AI Compliance Toolkit (ACT) Tier 1 gives you the cross-framework inventory, gap analysis, and risk register foundation this quick screen does not create. ACT Tier 2 adds the full acceptable-use policy and implementation documents once the governance baseline is in place.
A high score does not mean the organization is “bad at AI.” It usually means AI adoption is moving faster than governance. Public tools are already in use, procurement records do not reflect reality, policy coverage is weak, and nobody can say with confidence what data has already passed through unmanaged systems.
Where the exposure usually comes from
Shadow AI usually starts with convenience, not malice. Teams paste drafts, meeting notes, code, contracts, or customer data into public tools because the approved workflow is slower or unclear. Once that behavior spreads, inventories, policies, and procurement records become partially fictional.
What does this tool screen?
It screens how exposed your organization is to unsanctioned or weakly governed AI usage. The focus is operational visibility: inventory accuracy, approval workflow, SSO controls, policy coverage, data-handling restrictions, third-party onboarding discipline, and incident reporting readiness.
Does a lower score mean there is no real problem?
No. A lower score means the most obvious governance gaps are less severe. It does not prove your inventory is complete or that staff behavior matches policy. Shadow AI risk is often underreported until procurement, security, or compliance reviews catch it late.
Why does inventory accuracy matter so much?
Because most governance reporting collapses if the underlying AI inventory is incomplete. If teams are using unsanctioned chatbots, copilots, agents, or browser extensions, your risk register, vendor list, and approval records are already partially fictional.
Why is incident history treated as a major signal?
Because past AI misuse, data leakage, or policy bypass usually indicates a structural control weakness rather than a one-off mistake. If incidents happened before and the operating model did not change, the exposure is still there.
Does this tool store anything I enter?
No. The scoring runs in the browser only. Answers are not transmitted, synchronized, or stored by Move78. Once the page is refreshed or the browser closes, the run is gone.
Informational only. Not legal advice. This tool does not determine compliance with any law, regulation, or standard and does not replace internal security, privacy, legal, or compliance review.