Singapore’s announcement of the Model AI Governance Framework for Agentic AI marks a pivotal step in establishing accountable oversight of autonomous systems. By explicitly addressing risks such as unauthorised actions, data misuse and systemic disruptions, the framework gives organisations best-in-class principles to apply to enterprise identity governance and AI oversight.
Securing autonomous AI begins with identity-first, outcome-driven controls. The framework underscores this approach: assigning each AI agent a verifiable identity, enforcing task-specific, time-bound permissions and ensuring human accountability at every stage. These measures reflect the standards necessary for safely deploying AI at scale, where visibility, control and auditability are non-negotiable.
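To make that concrete, the sketch below shows one way task-specific, time-bound permissions tied to a human approver might be modelled. The names (ScopedToken, grant_scoped_token) and structure are illustrative assumptions, not part of Singapore’s framework or any particular product.

```python
import secrets
import time
from dataclasses import dataclass, field

# Illustrative sketch only: names and fields are assumptions, not taken
# from Singapore's framework or any specific PAM product.

@dataclass(frozen=True)
class ScopedToken:
    agent_id: str          # verifiable identity of the AI agent
    task: str              # task-specific scope, e.g. "quarterly-report"
    actions: frozenset     # explicit allow-list of operations
    expires_at: float      # time-bound: token is useless after this
    approved_by: str       # human accountable for issuing the grant
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def grant_scoped_token(agent_id: str, task: str, actions: set[str],
                       ttl_seconds: int, approved_by: str) -> ScopedToken:
    """Issue a short-lived, task-scoped credential tied to a human approver."""
    return ScopedToken(
        agent_id=agent_id,
        task=task,
        actions=frozenset(actions),
        expires_at=time.time() + ttl_seconds,
        approved_by=approved_by,
    )

def authorize(token: ScopedToken, action: str) -> bool:
    """Deny by default: the action must be in scope and the token unexpired."""
    return action in token.actions and time.time() < token.expires_at

# Example: a reporting agent gets read-only access for 15 minutes.
tok = grant_scoped_token("agent-finance-01", "quarterly-report",
                         {"read:ledger"}, ttl_seconds=900,
                         approved_by="jane.tan@example.com")
assert authorize(tok, "read:ledger")
assert not authorize(tok, "write:ledger")   # out of scope, denied
```

The key property is that the credential is useless outside its declared task and after its expiry, so a compromised or misbehaving agent cannot quietly widen its own access.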
Modern Privileged Access Management (PAM) platforms built on zero trust principles are well suited to autonomous systems because they eliminate implicit trust and continuously validate identity, context and intent at every step.
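One way to read "continuously validate identity, context and intent" is as a policy decision made before every individual action, rather than once per session. The following sketch assumes a deny-by-default policy table; it illustrates the pattern, not the API of any real PAM platform.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    agent_id: str         # who: verified identity of the requesting agent
    resource: str         # what: the target system or data
    operation: str        # how: the specific operation requested
    context: dict         # where/when: environment signals (network, host)
    declared_intent: str  # why: the task the agent claims to be performing

# Hypothetical policy table: every (agent, resource, operation) triple must
# be tied to a declared task; anything unlisted is denied by default.
POLICY = {
    ("agent-finance-01", "ledger-db", "read"): {"quarterly-report"},
}

def decide(req: ActionRequest) -> bool:
    """Zero trust check: no implicit trust carried over from earlier steps."""
    allowed_intents = POLICY.get((req.agent_id, req.resource, req.operation))
    if allowed_intents is None:
        return False                      # identity/operation not enrolled
    if req.declared_intent not in allowed_intents:
        return False                      # intent does not match the grant
    if req.context.get("network") != "corp":
        return False                      # context signal fails validation
    return True

req = ActionRequest("agent-finance-01", "ledger-db", "read",
                    {"network": "corp"}, "quarterly-report")
print(decide(req))  # True; change any field and the decision flips to False
```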
Continuous monitoring and outcome-based constraints enable organisations to detect deviations, prevent privilege escalation and maintain trust in autonomous operations. Aligning technical controls with human oversight ensures AI agents operate securely without adding friction to legitimate workflows, enabling innovation rather than slowing it.
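As a rough illustration of outcome-based monitoring, the sketch below compares each observed action against a behavioural baseline and hard-blocks operations that would escalate privileges. The baseline format, operation names and dispositions are invented for the example.

```python
from collections import Counter

# Hypothetical baseline learned from an observation window: how often
# each agent performs each operation under normal conditions.
BASELINE = {
    "agent-finance-01": Counter({"read:ledger": 480, "export:report": 12}),
}

ESCALATION_OPS = {"grant:role", "modify:policy", "create:credential"}

def review(agent_id: str, operation: str) -> str:
    """Return a disposition for one observed action."""
    if operation in ESCALATION_OPS:
        return "block"        # privilege escalation attempts always stop here
    seen = BASELINE.get(agent_id, Counter())
    if seen[operation] == 0:
        return "alert"        # never-seen-before behaviour goes to a human
    return "allow"

print(review("agent-finance-01", "read:ledger"))    # allow
print(review("agent-finance-01", "delete:ledger"))  # alert (deviation)
print(review("agent-finance-01", "grant:role"))     # block (escalation)
```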
Singapore’s principles, including granular identity, bounded access, traceability and auditable decision-making, are more than compliance requirements. They set the benchmark for responsibly managing autonomous systems, protecting sensitive data and maintaining operational resilience, and offer a model that other countries in the APAC region can emulate.
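Traceability and auditable decision-making imply that every agent decision leaves a tamper-evident trail. A common pattern for this, assumed here rather than prescribed by the framework, is a hash-chained log in which each entry commits to its predecessor, so any later edit is detectable.

```python
import hashlib
import json
import time

def append_entry(log: list, agent_id: str, decision: str, detail: str) -> None:
    """Append a tamper-evident entry: each record hashes its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"ts": time.time(), "agent_id": agent_id,
            "decision": decision, "detail": detail, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify(log: list) -> bool:
    """Recompute the chain; any edited entry breaks every later hash."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "agent-finance-01", "allow", "read:ledger for quarterly-report")
append_entry(log, "agent-finance-01", "block", "grant:role attempt")
print(verify(log))           # True
log[0]["detail"] = "edited"  # tampering is detectable
print(verify(log))           # False
```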
Lifecycle-based technical controls spanning development, testing, deployment and continuous monitoring reinforce the need for visibility and enforcement in environments where AI agents operate at machine speed. Embedding security from the outset ensures organisations can harness AI’s capabilities while maintaining trust, control and compliance.
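A minimal sketch of that lifecycle gating: each stage names the controls that must be evidenced before an agent advances. The stage names come from the paragraph above; the control checklist and gate logic are invented for illustration.

```python
# Illustrative only: the control names are assumptions, not the framework's
# wording. The gate denies advancement until every control is evidenced.
LIFECYCLE_CONTROLS = {
    "development": {"threat-model-reviewed", "scoped-test-credentials"},
    "testing":     {"red-team-eval-passed", "sandboxed-tool-access"},
    "deployment":  {"identity-enrolled", "permissions-time-bound"},
    "monitoring":  {"audit-log-streaming", "deviation-alerts-enabled"},
}

def gate(stage: str, evidence: set) -> bool:
    """An agent advances only when every control for the stage is evidenced."""
    missing = LIFECYCLE_CONTROLS[stage] - evidence
    if missing:
        print(f"{stage}: blocked, missing {sorted(missing)}")
        return False
    print(f"{stage}: controls satisfied")
    return True

gate("deployment", {"identity-enrolled"})                             # blocked
gate("deployment", {"identity-enrolled", "permissions-time-bound"})   # passes
```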

