Court Halts Pentagon's 'Supply Chain Risk' Label for Anthropic
- Court blocks Pentagon from labeling Anthropic a 'supply chain risk' during active litigation.
- Judge cites First Amendment retaliation following Anthropic's refusal to support autonomous lethal weapons.
- The designation, usually reserved for foreign adversaries, threatened Anthropic's ability to work with federal contractors.
A federal judge has issued a temporary injunction against the U.S. Department of Defense, preventing it from designating the AI startup Anthropic a 'supply chain risk.' The rare legal move follows a breakdown in negotiations between the two parties over a $200 million contract. Anthropic alleged that the Pentagon sought to punish the company after it refused to provide unrestricted access to its AI models for applications involving mass surveillance and autonomous lethal weapons.
District Judge Rita Lin described the government's actions as 'classic illegal First Amendment retaliation,' noting that the 'supply chain risk' label—typically reserved for foreign adversaries—was used arbitrarily against an American firm. The ruling emphasizes that the government cannot brand a domestic company as a potential saboteur simply for expressing disagreement with contracting terms or public policy.
The case highlights growing friction between AI developers that prioritize safety frameworks and defense agencies seeking advanced capabilities. While the injunction is temporary, it sets a significant precedent for how national security labels may be applied to domestic technology providers. The government has been granted a seven-day window to appeal before the injunction takes full effect.