Pentagon Bans Anthropic Over Refusal to Remove Guardrails
- Pentagon bans Anthropic after the company refuses to remove guardrails on lethal autonomous operations and surveillance.
- Secretary Pete Hegseth designates Anthropic a national-security supply-chain risk, forcing a six-month government phase-out.
- CEO Dario Amodei defends the safety policies, at the cost of a $200 million contract for customized Claude military models.
In a move signaling a tectonic shift in relations between Silicon Valley and the state, the Department of Defense has blacklisted Anthropic. Secretary Pete Hegseth labeled the firm a "Supply-Chain Risk," citing its refusal to strip safety filters preventing AI use in lethal autonomous operations or mass surveillance. The friction stems from a philosophical divide: the government demands unrestricted access for defense, while the developer maintains its safety boundaries are non-negotiable ethical commitments.
The escalation terminates a $200 million contract to integrate customized Claude models into classified military systems. While competitors such as xAI and Palantir have aligned with the Pentagon, Anthropic has held firm under CEO Dario Amodei. President Trump's order mandates a six-month phase-out across federal agencies, leaving a vacuum in government infrastructure that had become deeply enmeshed with Claude's capabilities.
The fallout highlights the tension between AI alignment, the effort to ensure models follow human values, and the operational demands of national defense. Anthropic's stance marks a rare instance of a major tech firm sacrificing a federal partnership to preserve its red lines. The decision forces the Armed Services Committee to address the disruption while setting a stark precedent for future public-private AI collaborations.