Simon Willison’s 2026 Predictions: The End of Manual Coding
- Prediction that LLM-generated code will become undeniably high-quality in 2026, driven by models trained with reinforcement learning.
- The urgent need to solve sandboxing, using technologies like WebAssembly, to mitigate security risks from autonomous coding agents.
- A warning about a potential major security breach caused by the 'Normalization of Deviance' in AI agent deployment.
- Analysis of the Jevons Paradox in software engineering: will AI devalue developer roles, or massively increase software demand?
- A long-term forecast that manual typing of code will become obsolete within six years, shifting the developer's role to architectural oversight.
Simon Willison outlines a transformative vision for the tech industry over the next six years, centered on the rapid evolution of coding agents. Following the release of major models such as Claude Opus 4.5 and GPT-5.2 in late 2025, the quality of AI-generated code has reached a threshold where manual intervention by expert programmers is becoming minimal. He attributes this shift largely to models trained via reinforcement learning against verifiable code benchmarks, which enables sophisticated reasoning and significantly fewer errors.

This rapid adoption brings significant risks, however. Willison warns of a possible 'Challenger disaster' in AI security, where the 'Normalization of Deviance' leads users to run unvetted code in insecure environments. To counter this, he predicts a surge in robust sandboxing solutions built on containers and WebAssembly.

He further posits that the Jevons Paradox will soon resolve for software engineering: either efficiency gains will crash the market for developers, or the plummeting cost of code will trigger an explosion in demand for custom software. Ultimately, the engineer's role will shift from syntax-level typing to high-level architectural oversight, and manual coding will become a relic of the past, much like punch cards.
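To make the sandboxing prediction concrete, here is a minimal sketch of the isolation principle using only the Python standard library. This is an illustration, not a real sandbox: production solutions layer on containers or WebAssembly runtimes, as Willison predicts. The `run_untrusted` helper and its parameters are hypothetical names chosen for this example.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 2.0) -> str:
    """Run agent-generated Python in a separate process with a hard timeout.

    Illustrative only -- a subprocess is weak isolation compared to the
    container- or Wasm-based sandboxes the article anticipates.
    The -I flag runs the child interpreter in isolated mode (no user
    site-packages, no PYTHON* environment-variable influence).
    """
    result = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout_s,  # kill runaway generated code
        env={},             # hide host environment variables from the child
    )
    return result.stdout

print(run_untrusted("print(2 + 2)"))  # prints "4"
```

The key design point is that the untrusted code never executes in the host process: it gets a fresh interpreter, a stripped environment, and a deadline. Container and Wasm sandboxes extend the same idea with filesystem, network, and memory isolation.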