Shadow AI Discovery Workbook
Find unmanaged AI use across teams, browser tools, vendors, plugins, agents, and informal workflows.
A short screening questionnaire for unmanaged AI usage across the enterprise. It shows whether procurement records, policy, visibility, and ownership are strong enough to keep hidden AI use from turning into a governance problem.
The tool surfaces the highest-value gaps first so the next step is obvious.
AI Controls Toolkit (ACT) Tier 1 provides the cross-framework inventory, gap analysis, and risk-register foundation that this quick screen does not create. ACT Tier 2 adds the full acceptable use policy and implementation documents once the governance baseline is in place.
PDF generation runs locally in your browser. Your answers are not sent to Move78 to create the report.
A high score does not mean the organization is "bad at AI." It usually means AI adoption is moving faster than governance. Public tools are already in use, procurement records do not reflect reality, policy coverage is weak, and nobody can say with confidence what data has already passed through unmanaged systems.
Shadow AI usually starts with convenience, not malice. Teams paste drafts, meeting notes, code, contracts, or customer data into public tools because the approved workflow is slower or unclear. Once that behavior spreads, inventories, policies, and procurement records become partially fictional.
Run the broader 50-question screen across governance, agentic AI, documentation, and evidence readiness.
Compare AI Controls Toolkit (ACT) Tier 1 Starter and Tier 2 Professional before buying.
Read practical implementation guides on ISO 42001, NIST AI RMF, AIBOM, and agentic AI governance.
See the operational view on shadow AI, acceptable use, vendor diligence, and governance blind spots.
It screens how exposed your organization is to unsanctioned or weakly governed AI usage. The focus is operational visibility: inventory accuracy, approval workflow, SSO controls, policy coverage, data-handling restrictions, third-party onboarding discipline, and incident reporting readiness.
No. A lower score means the most obvious governance gaps are less severe. It does not prove your inventory is complete or that staff behavior matches policy. Shadow AI risk is often underreported until procurement, security, or compliance reviews catch it late.
Because most governance reporting collapses if the underlying AI inventory is incomplete. If teams are using unsanctioned chatbots, copilots, agents, or browser extensions, your risk register, vendor list, and approval records are already partially fictional.
Because past AI misuse, data leakage, or policy bypass usually indicates a structural control weakness rather than a one-off mistake. If incidents happened before and the operating model did not change, the exposure is still there.
No. The scoring runs in the browser only. Answers are not transmitted, synchronized, or stored by Move78. Once the page is refreshed or the browser closes, the run is gone.
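To illustrate what "browser-only scoring" means in practice, here is a hypothetical sketch of a client-side scorer: answers stay in memory and nothing is posted to a server. The function name, answer scale, weights, and score bands are illustrative assumptions, not Move78's actual implementation.

```javascript
// Hypothetical sketch of a browser-local scorer. Answers never leave the
// page: no fetch(), no form POST, no storage. All names and thresholds
// here are illustrative assumptions, not Move78's actual code.
function scoreShadowAiScreen(answers) {
  // Assumed scale: each answer is 0 (control in place) to 3 (no control / unknown).
  const total = answers.reduce((sum, a) => sum + a, 0);
  const max = answers.length * 3;
  const pct = Math.round((total / max) * 100);

  // Higher percentage = more unmanaged-AI exposure surfaced.
  let band;
  if (pct < 25) band = "low";
  else if (pct < 60) band = "moderate";
  else band = "high";

  return { total, pct, band };
}

// Example run: five questions with mixed answers.
const result = scoreShadowAiScreen([3, 2, 0, 3, 1]);
console.log(result); // { total: 9, pct: 60, band: "high" }
```

Because the result object is built and displayed entirely in the page, refreshing or closing the tab discards the run, which matches the behavior described above.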
If the assessment shows unmanaged use, the next useful artifact is not another policy paragraph. Start with a discovery workbook that helps teams find tools, owners, data exposure, and remediation priorities.
Source and review note: This page was last reviewed on 6 May 2026 against the current Move78 public site baseline and relevant official or authoritative sources where laws, standards, frameworks, cybersecurity controls, product scope, pricing, support policy, or implementation guidance are discussed. It provides operational implementation guidance and product information only; it is not legal advice, tax advice, audit assurance, certification assurance, conformity-assessment advice, buyer-approval assurance, or security assurance. Validate legal, regulatory, contractual, tax, audit, and security decisions with qualified professionals.