Assess in under 5 minutes whether the organization can explain what models, datasets, tools, connectors, and suppliers sit inside its AI stack.
This screen is for teams using third-party models, APIs, datasets, open-source packages, agents, or MCP servers who need a governance answer before procurement or deployment sprawl outruns control.

This assessment quickly classifies the current posture, surfaces the biggest governance gaps, and recommends the appropriate implementation path.
This assessment evaluates whether the organization can inventory, trace, review, and re-vet the third-party models, tools, data sources, and connectors inside its AI stack.
A weak result does not mean the stack is unusable. It means dependency and review debt is accumulating faster than the governance model can defend.
What is missing is vendor diligence, MCP governance, evidence discipline, and lifecycle ownership. That capability sits in AI Controls Professional.
This section highlights the key governance gaps the assessment identified and recommends appropriate next steps.
When the assessment reveals structural control gaps requiring policy, procedure, evidence, lifecycle discipline, or implementation ownership, AI Controls Professional provides the full implementation evidence pack.
Get the implementation documents, procedures, evidence assets, and governance pack this free screen intentionally does not generate.
Use the supplier-governance guide to strengthen review criteria before approval.
Read the related guide page: MCP Server Approval Gate.
Read the OWASP Top 10 Agentic AI guide to understand the underlying control themes and risk categories.
It checks whether your organization can explain what models, datasets, tools, connectors, dependencies, and external suppliers sit inside the AI stack and who governs them.
Use it if you rely on third-party models, APIs, packages, embeddings, datasets, MCP servers, connectors, or other external components that materially affect AI behavior and risk.
An AIBOM is an AI bill of materials: a structured view of the components, dependencies, suppliers, and supporting artifacts that make up an AI system and affect its risk and governance posture.
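As an illustration only, a single AIBOM entry might be structured like the sketch below. All field names, values, and the helper function are hypothetical examples, not a standard AIBOM schema:

```python
# Illustrative sketch only: every field name and value here is hypothetical,
# chosen to show the kind of information an AIBOM entry could track.
aibom_entry = {
    "component": "sentence-embedding-model",
    "type": "model",                      # e.g. model | dataset | tool | connector
    "supplier": "example-vendor",         # hypothetical supplier name
    "version": "2.1.0",
    "source": "third-party API",
    "owner": "ml-platform-team",          # who governs and re-vets the component
    "last_review": "2026-04-01",          # ISO date of the most recent vetting
    "depends_on": ["tokenizer-pkg"],      # downstream dependencies
}

def components_overdue_for_review(aibom, cutoff):
    """Return component names whose last review predates the cutoff date.

    ISO 8601 date strings compare correctly as plain strings.
    """
    return [c["component"] for c in aibom if c["last_review"] < cutoff]

# A component last reviewed before the cutoff is flagged for re-vetting.
print(components_overdue_for_review([aibom_entry], "2026-05-01"))
# → ['sentence-embedding-model']
```

The point of the sketch is the governance angle: an inventory is only defensible if each entry records an owner and a review date, so stale components can be surfaced mechanically.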
No. It does not create a bill of materials. The results indicate whether your current governance posture is mature enough to build, maintain, and defend one.
Because they extend what the system can reach, influence, and depend on. Even if the model is unchanged, a weak connector or MCP dependency can materially change risk, exposure, and ownership.
No. This tool runs entirely in your browser. Your selections are not stored, synced, exported, or transmitted by the page itself.
Source and review note: This page was last reviewed on 6 May 2026 against the current Move78 public site baseline and relevant official or authoritative sources where laws, standards, frameworks, cybersecurity controls, product scope, pricing, support policy, or implementation guidance are discussed. It provides operational implementation guidance and product information only; it is not legal advice, tax advice, audit assurance, certification assurance, conformity-assessment advice, buyer-approval assurance, or security assurance. Validate legal, regulatory, contractual, tax, audit, and security decisions with qualified professionals.