AGI Economy: Automation, Biosecurity Risks, and Robotics
- MIT and UCLA study identifies human verification bandwidth, not automation cost, as the key bottleneck in AGI economics.
- Novices using LLMs perform 4.16x more accurately on bioweapon-related tasks, raising significant security concerns.
- New AI GAMESTORE benchmark reveals LLMs achieve less than 30% of human performance in simple games.
The transition toward Artificial General Intelligence (AGI) is no longer just a technical hurdle but a profound economic shift. Researchers from MIT and UCLA suggest that as the cost of automating tasks plummets, the primary bottleneck for growth becomes human verification bandwidth: our limited capacity to audit and validate machine-driven outcomes.
This shift introduces the risk of a "Hollow Economy," where AI agents optimize for measurable proxies rather than true human intent, potentially leading to a collapse in utility despite high nominal output. To navigate this, experts argue for aggressive investment in observability tools and synthetic mentorship programs to bridge the experience gap left by disappearing entry-level roles.
Beyond economics, the dual-use nature of AI poses security challenges. A recent study by Scale AI demonstrated that access to frontier models allows novices to perform 4.16 times more accurately on complex biosecurity tasks. LLMs excel as universal teachers, but they simultaneously lower the barrier to acquiring specialized, dangerous expertise.
Meanwhile, the gap between digital reasoning and spatial coordination remains vast. The new AI GAMESTORE benchmark shows that advanced models struggle even with simple games, performing at less than 30% of the human baseline. Despite these digital struggles, startups like Physical Intelligence are already deploying robots for real-world tasks such as laundry folding.