Anthropic Designated a Supply Chain Risk Over Military AI Policy
- US Department of War designates Anthropic as a supply chain risk following failed policy negotiations.
- Anthropic refuses to allow Claude to be used for mass domestic surveillance or fully autonomous lethal weaponry.
- Legal challenge planned as Anthropic argues the Secretary lacks authority over commercial AI contracts.
The friction between the burgeoning AI sector and national security has reached a historic impasse. In an unprecedented move, Secretary of War Pete Hegseth announced plans to designate Anthropic as a supply chain risk. The dispute stems from Anthropic's refusal to waive two specific safety and ethical constraints on the use of its Claude models: a ban on mass domestic surveillance and a prohibition on fully autonomous weapons.
Anthropic leadership argues that current frontier models lack the necessary reliability for autonomous lethal operations, citing risks to both warfighters and civilians. Furthermore, the company maintains that domestic surveillance using AI would constitute a fundamental violation of American civil rights. Despite these disagreements, Anthropic highlighted its history of supporting US classified networks since 2024, emphasizing that it supports all other lawful national security applications.
From a legal standpoint, Anthropic plans to challenge the designation in court, arguing that the Secretary's authority is limited to military-specific contracts and cannot be used to intimidate commercial partners or individual users. While the standoff sets a tense precedent for any American company negotiating with the government, Anthropic assured its commercial and API customers that their access to Claude remains unaffected by the dispute.