Experts Push for Standardized AI Assurance and Monitoring
- Experts urge a shift from pre-deployment testing to continuous post-deployment monitoring for AI
- Low demand for independent AI assurance persists due to regulatory uncertainty and proprietary concerns
- Rapid adoption of autonomous agentic systems is outstripping the development of formal safety standards
The deployment of AI across the global economy has reached a critical inflection point where simple trust is no longer sufficient. At the AI Standards Hub Global Summit in Glasgow, experts from the Partnership on AI (PAI) and the UK’s National Physical Laboratory argued for a shift toward "calibrated trust." This approach requires a clear-eyed understanding of a system's specific capabilities and limitations through rigorous, independent verification.
One of the most pressing challenges identified is that assurance often stops once a model is deployed. While pre-deployment testing is common, continuous post-deployment monitoring remains the least utilized service in the assurance ecosystem. As we move toward agentic AI—systems capable of taking real-world actions autonomously—the risk of unseen failures in planning or execution grows. Real-time failure detection is becoming a foundational requirement rather than an optional safeguard.
However, the market for independent assurance is currently stalled by a lack of clear regulatory incentives and by fears of exposing proprietary trade secrets. Survey data from the summit suggests that 46% of practitioners believe new legislation is the most effective lever for driving demand. Without standardized frameworks, frontier models—those carrying potentially catastrophic risks—may reach the public without the oversight needed to ensure they are safe and effective.