Defining Agentic AI: A New Framework for Governance
- OECD releases comprehensive framework to standardize definitions of AI agents and agentic AI
- Report distinguishes individual AI agents from coordinated, multi-agent agentic AI systems
- Governance requires precision as agentic systems increasingly operate with minimal human oversight
As AI systems evolve from simple chatbots into autonomous agents capable of independent action, the terminology used to describe them is becoming increasingly muddled. To address this, the OECD.AI expert group has introduced a foundational report designed to harmonize language across both technical and policy communities. The goal is simple: if we cannot define what 'agentic AI' does, we cannot govern it effectively.
The report draws a critical distinction between an AI agent—a system with limited autonomy that uses tools to accomplish specific goals—and agentic AI. Agentic AI refers to more complex ecosystems in which multiple agents collaborate to decompose tasks and operate in unpredictable environments with minimal human intervention. Essentially, the focus shifts from a single tool to a socio-technical paradigm in which systems interact with humans and with other machines.
This conceptual clarity serves as a precursor to future regulation. By anchoring the discussion in established definitions of AI systems, the OECD aims to provide a baseline for policymakers. As these systems move from experimentation to real-world integration, understanding the risks associated with multi-agent coordination becomes essential for creating standards that ensure safety, privacy, and accountability.