
Singapore IMDA Agentic AI Governance Framework: Implementation Guide (2026)

The world's first government-issued governance framework specifically built for autonomous AI agents. Four dimensions. One implementation roadmap.

By Abhishek G Sharma · Published March 21, 2026 · 11 min read
[Figure: IMDA's four-dimension governance model for agentic AI systems]

Why Singapore Built a Purpose-Built Agentic Framework

Singapore's Infocomm Media Development Authority (IMDA) released the Model AI Governance Framework for Agentic AI (MGF) on January 22, 2026, at the World Economic Forum in Davos. It isn't the first AI governance framework Singapore has produced — earlier versions covered traditional AI (2019), sector-specific implementation guidance (2020), and generative AI (2024). But this one addresses a fundamentally different problem.

Existing frameworks like ISO 42001 and NIST AI RMF were designed for systems that receive a prompt and return a response. Agentic AI systems don't work that way. They plan multi-step actions, invoke external tools, delegate to sub-agents, and take real-world actions with consequences that can't always be undone. The governance assumptions underpinning traditional frameworks — a human reviews every output, the system operates within a single session, actions are reversible — break down when agents operate autonomously.

IMDA's framework is voluntary and non-binding. It carries no statutory penalties in Singapore or anywhere else. That said, it's rapidly becoming a global reference baseline. Multinational organizations are citing it in board-level risk documentation, and regulatory bodies across Asia-Pacific, the EU, and North America are referencing its four-dimension model in their own agentic AI guidance.

Singapore's approach builds on a decade of practical AI governance work. The 2019 Model Framework was one of the first national-level AI governance documents globally. The 2020 update added sector-specific implementation guidance. The 2024 generative AI supplement addressed foundation models and prompt-based systems. The 2026 MGF for Agentic AI represents the logical next step: governance for systems that act, not just respond.

The MGF doesn't replace ISO 42001 or NIST AI RMF — it supplements them with agentic-specific governance layers that existing standards don't cover.
See how all four frameworks compare in the complete agentic governance guide.
Agentic AI: AI systems that autonomously plan, execute multi-step tasks, invoke tools, and take actions with real-world consequences.

MGF: Model AI Governance Framework — IMDA's voluntary governance guidance for AI systems in Singapore.

Meaningful Human Accountability: IMDA's term for requiring an identifiable human responsible for each agent's outcomes — broader than real-time oversight.

Operational Bounds: Explicitly defined limits on what an agent is authorized to do, covering tool access, data scope, financial thresholds, and decision types.

Dimension 1: Assess and Bound Risks Upfront

Before any agent goes into production, the organization must scope what it's allowed to do and identify what could go wrong. This isn't a generic risk assessment — it's agent-specific, covering the agent's autonomy level, tool access, data reach, and potential for cascading actions.

Most organizations deploying agents in 2026 skip this step entirely. They treat agent deployment like deploying a chatbot — configure it, test it briefly, push it to production. IMDA's framework makes the case that agents require pre-deployment risk scoping at a level of specificity that traditional AI governance doesn't demand. An agent that can invoke APIs, access databases, and delegate to sub-agents introduces failure modes that a static prediction model simply doesn't have.
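To make agent-specific risk scoping concrete, here is a minimal sketch of a per-agent scoping record and a toy risk-tier heuristic. The class, field names, and scoring weights are illustrative assumptions, not artifacts prescribed by the MGF.

```python
from dataclasses import dataclass, field

@dataclass
class RiskScope:
    """Hypothetical per-agent risk-scoping record (illustrative, not an IMDA schema)."""
    agent_id: str
    autonomy_level: str                 # "manual" | "semi-autonomous" | "fully-autonomous"
    tools: list[str] = field(default_factory=list)        # external tools the agent may invoke
    data_scopes: list[str] = field(default_factory=list)  # datasets and APIs it can reach
    can_delegate: bool = False          # may spawn or call sub-agents
    irreversible_actions: bool = False  # e.g. payments, deletions, external comms

def derive_risk_tier(scope: RiskScope) -> str:
    """Toy heuristic: tier rises with autonomy, reach, and irreversibility."""
    score = {"manual": 0, "semi-autonomous": 1, "fully-autonomous": 2}[scope.autonomy_level]
    score += min(len(scope.tools), 3)        # broader tool access raises exposure
    score += 2 if scope.irreversible_actions else 0
    score += 1 if scope.can_delegate else 0  # cascading actions via sub-agents
    return "high" if score >= 5 else "medium" if score >= 3 else "low"
```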

Key Requirements

- Catalogue every agent in a central inventory register, including autonomy level and risk tier.
- Map failure modes per agent, including tool misuse, data exposure, and cascading actions across sub-agents.
- Define operational bounds per agent: tool access, data scope, financial thresholds, and decision types.
- Justify that the controls applied are proportionate to each agent's risk.

Evidence required: Agent inventory register, per-agent risk assessment, operational bounds specification, proportionality justification.

Dimension 2: Make Humans Meaningfully Accountable

"Meaningful human accountability" is the phrase IMDA uses deliberately. It's not the same as human-in-the-loop oversight (which implies real-time approval of every action). It's broader: someone specific must be accountable for every agent's behavior, decisions, and outcomes — and that accountability can't be delegated to the agent itself.

This distinction matters. Traditional HITL governance assumes a human reviews each decision before the system acts. That model doesn't scale to agents that make dozens of decisions per minute across multiple tool invocations. IMDA's approach is more realistic: you don't need a human approving every action, but you absolutely need a human who will answer for the consequences. The accountability structure exists whether or not the human was actively monitoring at the moment something went wrong.

In my experience reviewing ISO 42001 implementations, organizations frequently confuse "someone is watching" with "someone is accountable." Watching dashboards is monitoring. Accountability means you've documented who made the deployment decision, who set the operational bounds, who approved the tool access list, and who gets called at 2 AM when the agent does something unexpected.
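One way to make that accountability operational is to encode escalation thresholds so the agent knows when a decision must route to the accountable human. A minimal sketch, assuming hypothetical threshold values, field names, and decision types (nothing here is prescribed by IMDA):

```python
# Illustrative escalation policy; all values and names are assumptions.
ESCALATION_POLICY = {
    "max_transaction_usd": 100.0,   # amounts above this go to a human
    "min_confidence": 0.85,         # low-confidence decisions get reviewed
    "accountable_owner": "jane.doe@example.com",  # a named individual, not a team alias
}

def requires_escalation(action: dict, policy: dict = ESCALATION_POLICY) -> bool:
    """Return True when an agent action must be routed to the accountable human."""
    if action.get("amount_usd", 0.0) > policy["max_transaction_usd"]:
        return True
    if action.get("confidence", 1.0) < policy["min_confidence"]:
        return True
    return action.get("decision_type") in {"refund_override", "account_closure"}
```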

Key Requirements

- Assign named accountability for each agent in a RACI matrix: individuals, not departments.
- Set measurable escalation thresholds that route decisions to a human.
- Enable human override, including a technical kill switch that halts the agent.
- Establish a redress path for parties affected by agent decisions.

Evidence required: RACI matrix per agent, escalation threshold specification, override procedure documentation, redress contact and process.

Dimension 3: Implement Technical Controls and Processes

Governance documentation without technical enforcement is paperwork, not protection. Dimension 3 covers the full agent lifecycle — from design through post-deployment monitoring — with specific controls at each stage. This is where IMDA's framework most closely aligns with ISO 42001 Annex A.6, but it adds agentic-specific requirements that the standard doesn't address.

The critical insight in Dimension 3: agentic systems require controls at four distinct lifecycle stages, and organizations typically only implement post-deployment monitoring — skipping the design, development, and pre-deployment gates that prevent problems rather than detect them after the fact.
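A lightweight way to enforce those lifecycle gates is to make each stage's required evidence machine-checkable, so an agent cannot advance until its artifacts exist. The sketch below assumes hypothetical stage and artifact names; the gating pattern, not the schema, is the point.

```python
# Sketch of lifecycle gating: each stage must produce its evidence artifacts
# before the agent advances. Stage and artifact names are illustrative.
LIFECYCLE_GATES = {
    "design":         ["design_doc", "tool_whitelist"],
    "development":    ["code_review_record"],
    "pre_deployment": ["red_team_report", "go_no_go_record"],
    "production":     ["monitoring_dashboard", "alert_rules"],
}

def gate_check(stage: str, artifacts: set[str]) -> list[str]:
    """Return the evidence still missing before this stage can be signed off."""
    return [a for a in LIFECYCLE_GATES[stage] if a not in artifacts]

# Example: an agent with only a design doc cannot pass the design gate yet.
missing = gate_check("design", {"design_doc"})
assert missing == ["tool_whitelist"]
```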

Lifecycle Controls

- Design: agent design documentation and least-privilege tool whitelists.
- Development: code review of agent logic and tool integrations.
- Pre-deployment: agentic red-teaming and a documented go/no-go decision.
- Post-deployment: anomaly monitoring, alerting, incident logging, and periodic re-assessment.

Evidence required: Design documentation, code review records, red-team reports, monitoring dashboards, incident logs, re-assessment records.

Dimension 4: Enable End-User Responsibility

End users — whether customers, employees, or business partners interacting with agents — must know they're dealing with an autonomous system, understand what it can and can't do, and have a clear way to escalate to a human when needed.

Dimension 4 is where the IMDA framework intersects with the EU AI Act's transparency requirements under Article 50. While the EU mandates disclosure for certain AI systems, IMDA takes a broader approach: all agent interactions should include disclosure, regardless of risk classification. The reasoning is straightforward — users can't make informed decisions about reliance and trust if they don't know an agent is making or executing decisions on their behalf.

Organizations often treat end-user disclosure as a legal checkbox. IMDA frames it differently: disclosure isn't just about compliance, it's about enabling users to exercise appropriate judgment about when to trust agent outputs and when to override or escalate.
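As a minimal illustration, always-on disclosure can be wired into the response path itself rather than buried in terms of service. The wording, capability summary, and function below are assumptions, not IMDA-specified text:

```python
# Minimal sketch of always-on disclosure: every agent reply carries a notice
# and a human escalation path. All wording here is a hypothetical example.
DISCLOSURE = (
    "You are interacting with an autonomous AI agent. "
    "It can look up orders and issue refunds up to a set limit. "
    "Reply 'human' at any time to reach a support agent."
)

def wrap_response(agent_reply: str) -> str:
    """Prepend the disclosure so users can calibrate reliance on the output."""
    return f"{DISCLOSURE}\n\n{agent_reply}"
```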

Key Requirements

- Disclose agent status in every interaction, regardless of risk classification.
- Explain what the agent can and can't do so users can calibrate reliance.
- Train internal users on appropriate reliance, overrides, and escalation.
- Provide a clear escalation path to a human and a feedback channel with triage.

Evidence required: User disclosure materials, training completion records, escalation contact list, feedback form and triage logs.

Implementation Checklist

The table below maps each dimension to its key requirements, the governance artifacts you'll need to produce, and who in the organization is typically responsible.

| Dimension | Key Requirement | Governance Artifact | Responsible |
|---|---|---|---|
| 1. Assess & Bound Risks | Catalogue all agents | Agent inventory register | AI Governance Lead |
| | Map failure modes per agent | Agent risk assessment | Risk Manager |
| | Define operational bounds | Bounds specification per agent | Product Owner + Risk |
| | Justify proportionality | Proportionality assessment | AI Governance Lead |
| 2. Human Accountability | Assign named accountability | RACI matrix per agent | CISO / DPO |
| | Set escalation thresholds | Escalation threshold spec | Product Owner |
| | Enable human override | Override procedure + technical kill switch | Engineering Lead |
| | Establish redress path | Redress procedure for affected parties | Legal / Compliance |
| 3. Technical Controls | Design-stage controls | Agent design doc + tool whitelist | Engineering Lead |
| | Pre-deployment testing | Red-team report + go/no-go record | Security Team |
| | Post-deployment monitoring | Monitoring dashboard + alert rules | DevOps / SRE |
| | Periodic re-assessment | Quarterly risk review records | AI Governance Lead |
| 4. End-User Responsibility | Disclose agent status | User disclosure materials | Product Owner |
| | Train internal users | Training program + completion log | HR / L&D |
| | Provide feedback channel | Feedback form + triage procedure | Customer Support |

Mapping IMDA to ISO 42001 and NIST AI RMF

The IMDA framework doesn't exist in isolation. Organizations already implementing ISO 42001 or following NIST AI RMF can map IMDA's four dimensions onto their existing control structures. The table below shows where each dimension aligns — and where IMDA extends beyond what the standards cover.

| IMDA Dimension | Key Requirements | ISO 42001 | NIST AI RMF | Implementation Actions |
|---|---|---|---|---|
| 1. Assess & Bound Risks | Agent inventory, risk identification, operational bounds, proportionality | Clause 6.1, A.4.3, A.5 | MAP 1.1, MAP 2.2, MAP 3.1 | Create agent register; document bounds per agent; justify risk-control proportionality |
| 2. Human Accountability | Named accountability, escalation thresholds, override capability, redress | Clause 5.3, A.9.2 | GV.1.3, GV.2.1 | Build RACI matrix; define measurable escalation triggers; test override mechanisms |
| 3. Technical Controls | Design-stage controls, pre-deployment testing, monitoring, re-assessment | A.6 (full lifecycle) | MEASURE 2.2, MEASURE 3.3, MANAGE 2.1 | Implement tool whitelists; run agentic red-teams; deploy anomaly monitoring |
| 4. End-User Responsibility | Disclosure, training, escalation access, feedback | A.8 (communication), A.9.1 | GV.4.2, MAP 5.1 | Draft disclosure notices; build training module; create feedback triage process |
ISO 42001 provides partial coverage of all four dimensions but wasn't designed for agents. IMDA extends it with agentic-specific requirements — especially around operational bounds, meaningful accountability, and lifecycle red-teaming.
See the full ISO 42001 implementation guide for base-layer controls.

IMDA vs UC Berkeley vs NIST AI RMF

Three frameworks. Different origins, different scopes, overlapping territory. The table below compares them across seven dimensions to help you decide which to implement — or, more realistically, how to layer them together.

| Dimension | IMDA MGF | UC Berkeley CLTC | NIST AI RMF |
|---|---|---|---|
| Primary Focus | Operational governance | Technical risk management | Risk management framework |
| Agentic-Specific | Yes (purpose-built) | Yes (extends NIST) | No (general AI) |
| Certification | No certification path | No certification path | No (voluntary profile) |
| Key Concept | Meaningful human accountability | Bounded autonomy + defense-in-depth | Govern/Map/Measure/Manage |
| Multi-Agent | Limited coverage | Addresses delegation chains | Not addressed |
| Human Oversight | Accountability-based (not just real-time) | Autonomy spectrum (L0–L5) | Human-in-the-loop |
| Implementation Detail | Moderate (artifact-level) | Low (academic, conceptual) | High (subcategory-level) |

The practical recommendation for SMEs: implement NIST AI RMF as the base layer, use IMDA's four dimensions for agentic governance structure, and apply UC Berkeley's bounded autonomy and defense-in-depth as technical design principles. No single framework is sufficient on its own — but layering all three gives you coverage across governance, risk management, and technical controls that no individual framework provides. Organizations already certified to ISO 42001 have a head start, since about 60% of IMDA's requirements map to existing Annex A controls.

[Figure: Five-step IMDA framework implementation for SMEs]

Practical Implementation Steps for SMEs

IMDA's framework reads well as policy. Turning it into operational practice requires a concrete five-step process. Most SMEs deploying between 3 and 10 agents can work through this in 4 to 6 weeks if they assign a dedicated governance owner.

Step-by-Step

Step 1: Inventory all agents and classify autonomy levels. You can't govern what you can't see. Catalogue every AI agent in the organization — including agents embedded in third-party SaaS tools that your teams may have activated without IT approval. Classify each by autonomy level (manual, semi-autonomous, fully autonomous) and by risk tier (low, medium, high). The inventory becomes the foundation for every subsequent governance decision.
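A register can start as something as simple as the sketch below. The field names and entries are hypothetical, and in practice the register might live in a spreadsheet, CMDB, or GRC tool rather than code.

```python
# Illustrative agent inventory register; all entries are made-up examples.
AGENT_REGISTER = [
    {"agent_id": "support-triage-01", "owner": "jane.doe",
     "source": "in-house", "autonomy": "semi-autonomous", "risk_tier": "medium"},
    {"agent_id": "crm-autopilot", "owner": "sam.lee",
     "source": "third-party SaaS",  # shadow agents often enter via SaaS tools
     "autonomy": "fully-autonomous", "risk_tier": "high"},
]

def unowned_agents(register: list[dict]) -> list[str]:
    """Flag register entries with no named owner (a Dimension 2 gap)."""
    return [a["agent_id"] for a in register if not a.get("owner")]
```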

Step 2: Define operational bounds per agent. For each agent, document what it's authorized to do, what tools and data it can access, what financial limits apply, and what decision types require human approval. Bounds should be specific enough to test and audit — "the agent handles customer queries" isn't a bound; "the agent can access the FAQ database (read-only), respond to Tier 1 support tickets, and escalate any refund request over $100 to a human" is.
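The difference between those two bounds is that the second is executable. Here is the article's example expressed as a testable check; the schema and function names are assumptions, not an IMDA-prescribed format.

```python
# The example bound from Step 2, made machine-checkable (a sketch).
BOUNDS = {
    "agent_id": "support-tier1",
    "tools": {"faq_db": "read-only"},
    "allowed_actions": {"answer_tier1_ticket", "escalate"},
    "refund_escalation_threshold_usd": 100.0,
}

def within_bounds(action: str, amount_usd: float = 0.0) -> bool:
    """A bound is only useful if it can be checked; this one can be."""
    if action == "issue_refund":
        return amount_usd <= BOUNDS["refund_escalation_threshold_usd"]
    return action in BOUNDS["allowed_actions"]

assert within_bounds("issue_refund", 50.0)        # under threshold: allowed
assert not within_bounds("issue_refund", 250.0)   # over threshold: escalate
```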

Step 3: Assign accountable humans for every agent. Build a RACI matrix that names specific individuals (not departments) responsible for each agent's behavior. "The system is accountable" is not acceptable under this framework — someone with a name and a job title must own it. For each agent, identify who approved deployment, who monitors ongoing behavior, who gets called for escalations, and who answers to regulators.
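A per-agent RACI entry might look like the following sketch. Every name and role is a hypothetical placeholder; the validation illustrates the "individuals, not departments" rule.

```python
# Hypothetical per-agent RACI entry; names and roles are placeholders.
RACI = {
    "agent_id": "support-tier1",
    "accountable": "Jane Doe (Head of Support Ops)",  # answers for outcomes
    "responsible": "Sam Lee (ML Engineer)",           # operates and monitors
    "consulted": "Priya N (Risk Manager)",
    "informed": "CISO office",
    "deployment_approved_by": "Jane Doe",
}

def validate_raci(entry: dict) -> None:
    """Reject entries that delegate accountability to a team or 'the system'."""
    banned = {"the system", "ai team", "tbd"}
    assert entry["accountable"].lower() not in banned, "Name a person, not a proxy."
```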

Step 4: Implement technical controls. Translate governance policies into enforceable technical controls — tool whitelists with least-privilege access, API rate limits, comprehensive logging of all agent actions (including intermediate reasoning steps), anomaly detection thresholds, and circuit breakers that halt execution when bounds are breached. Policy without enforcement is hope, not governance.
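A minimal sketch of two of those controls, a least-privilege tool whitelist and a circuit breaker with action logging, follows. Tool names and thresholds are assumptions chosen for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.guard")

TOOL_WHITELIST = {"faq_db.read", "tickets.reply", "tickets.escalate"}  # least privilege

class CircuitBreaker:
    """Halts the agent after repeated bounds violations (illustrative sketch)."""

    def __init__(self, max_violations: int = 3):
        self.violations = 0
        self.max_violations = max_violations
        self.open = False  # open breaker means the agent is halted

    def check_tool_call(self, tool: str) -> bool:
        if self.open:
            return False
        log.info("agent tool call: %s", tool)  # log every action for audit
        if tool not in TOOL_WHITELIST:
            self.violations += 1
            log.warning("out-of-bounds tool call: %s", tool)
            if self.violations >= self.max_violations:
                self.open = True  # halt execution; page the accountable human
                log.error("circuit breaker tripped; agent halted")
            return False
        return True
```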

Step 5: Document everything and train staff. Governance that exists only in someone's head isn't governance. Document agent specifications, risk assessments, accountability assignments, monitoring configurations, and escalation procedures. Train staff who interact with agents on appropriate reliance, override procedures, and escalation contacts. Keep documentation in a central repository that auditors can access without chasing people.

Common Mistakes to Avoid

Organizations implementing IMDA's framework consistently make the same errors. Seven of the most common:

1. Treating agent deployment like chatbot deployment: configure, test briefly, push to production, with no pre-deployment risk scoping.
2. Confusing monitoring with accountability: someone watches dashboards, but no named individual owns the agent's outcomes.
3. Implementing only post-deployment monitoring while skipping the design, development, and pre-deployment gates.
4. Writing operational bounds too vague to test or audit ("the agent handles customer queries").
5. Assigning accountability to departments or to "the system" instead of named individuals.
6. Publishing governance policy without technical enforcement: no tool whitelists, rate limits, or circuit breakers.
7. Treating end-user disclosure as a legal checkbox rather than a way to enable calibrated reliance.

The AI Controls Toolkit (ACT) Tier 2 includes pre-built IMDA-aligned policy templates, RACI matrices, and implementation checklists for all four dimensions.
Compare toolkit tiers to find the right level for your organization.

Frequently Asked Questions

When did Singapore IMDA release the Agentic AI Governance Framework?

Singapore's Infocomm Media Development Authority (IMDA) released the Model AI Governance Framework for Agentic AI on January 22, 2026, at the World Economic Forum in Davos. It is voluntary, non-binding guidance that builds on Singapore's earlier AI governance frameworks from 2019, 2020, and 2024. It is the first government-issued framework specifically designed for autonomous AI agents.

What are the four dimensions of the IMDA MGF for Agentic AI?

The four dimensions are: (1) Assessing and bounding the risks upfront, (2) Making humans meaningfully accountable, (3) Implementing technical controls and processes, and (4) Enabling end-user responsibility. Each dimension addresses a distinct governance layer, from pre-deployment risk scoping through post-deployment user transparency.

Is the IMDA framework mandatory or voluntary?

The IMDA Model AI Governance Framework for Agentic AI is voluntary, non-binding guidance. It carries no statutory penalties. However, it is rapidly becoming a global reference baseline — regulators and industry groups in the EU, US, and Asia-Pacific are citing it as a benchmark for responsible agentic AI deployment.

How does the IMDA framework differ from UC Berkeley's agentic AI profile?

The IMDA MGF focuses on operational governance — who is accountable, what processes exist, how users are informed. UC Berkeley's CLTC profile focuses on technical risk management — autonomy spectrum classification, bounded autonomy design, and defense-in-depth containment. They complement each other: IMDA tells you what governance to build, Berkeley tells you what technical controls to implement.

Does the IMDA framework apply to companies outside Singapore?

Yes, practically speaking. Although IMDA's framework carries no legal force outside Singapore, it is increasingly adopted as a voluntary governance baseline by multinational organizations. Companies that demonstrate alignment with IMDA's four dimensions position themselves favorably for regulatory engagement across multiple jurisdictions.

Assess Your AI Governance Readiness

The free assessment covers governance ownership, AI inventory, risk workflow, policy baseline, and evidence readiness across all frameworks. Estimated completion: ~15 minutes. No login.