Coding Agents Challenge the Boring Technology Principle
- Coding agents use large context windows to master new tools via real-time documentation.
- Human technical choices override model biases toward popular software stacks like Stripe or GitHub.
- Standardized 'Skills' from major platforms help agents integrate with modern, fast-moving ecosystems.
Simon Willison (co-creator of Django and a prominent tech blogger) explores a significant shift in how AI-assisted programming affects technical decision-making. Historically, developers were encouraged to "choose boring technology"—mature, well-documented tools with an extensive online presence. This advice held largely because early models performed significantly better on widely used languages like Python or JavaScript, which dominated their training datasets.
However, the advent of sophisticated coding agents and expanded context windows (the amount of information a model can process at once) is changing this dynamic. Modern agents can ingest real-time documentation or private codebase patterns on the fly. By simply prompting an agent to read a tool's "help" output or local examples, developers can effectively "teach" the model to work with brand-new or niche technologies via in-context learning—a model's ability to adapt to new information supplied within the prompt.
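The pattern described above—capturing a tool's help text and placing it in the prompt so the model can learn an unfamiliar interface—can be sketched in a few lines. This is an illustrative example, not any agent's actual internals; `build_prompt` and the prompt layout are assumptions, and the interpreter's own `--help` output stands in for a niche tool:

```python
import subprocess
import sys

def build_prompt(tool_argv: list[str], task: str) -> str:
    """Capture a tool's --help output and prepend it to a task,
    so a model can learn the tool's interface in-context."""
    help_text = subprocess.run(
        [*tool_argv, "--help"], capture_output=True, text=True
    ).stdout
    return f"Tool documentation:\n{help_text}\n\nTask: {task}"

# Use the Python interpreter's own CLI as a stand-in for a niche tool.
prompt = build_prompt([sys.executable], "Write a one-liner that prints the version.")
```

The resulting `prompt` string would then be sent to the model, which now has the tool's real documentation in its context window rather than relying on whatever appeared in its training data.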
Recent reports on tools like Claude Code (Anthropic's command-line coding agent) suggest that while AI may exhibit a "near monopoly" bias toward specific stacks like Stripe or shadcn/ui, human intervention remains the deciding factor. The rapid adoption of standardized "Skills"—pre-packaged capabilities released by platforms like Supabase and Vercel—further helps agents bridge the gap between their training data and cutting-edge software development.
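Concretely, a Skill is typically a small directory of instructions and metadata that an agent loads on demand. A minimal sketch of such a package, with a hypothetical name and description (the exact fields and layout vary by platform):

```
my-skill/
└── SKILL.md
```

```markdown
---
name: deploy-helper
description: Use when the user asks to deploy or configure this platform's services.
---

Step-by-step instructions the agent follows when this skill is triggered,
including links to current docs and example commands.
```

Because the instructions live in files shipped by the platform vendor rather than in the model's weights, they can be updated as the ecosystem changes—exactly the property that makes fast-moving, non-"boring" tools viable for agents.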