The U.S. Department of Justice (DOJ) intensified its legal battle against Anthropic on Tuesday, filing a federal court response in San Francisco that characterizes the AI startup as a potential threat to national security. Government attorneys argued that the company’s ethical constraints could lead to the deliberate subversion of military operations, justifying a Pentagon-imposed label that effectively bars Anthropic from lucrative defense contracts.
The Allegation of “Corporate Sabotage”
In a pointed legal brief, the DOJ defended Defense Secretary Pete Hegseth’s determination that Anthropic’s staff might “sabotage, maliciously introduce unwanted function, or otherwise subvert” national security systems. The government’s core concern centers on Anthropic’s “red lines”—internal ethical guidelines regarding how its technology is deployed. Federal officials fear the company could unilaterally disable or alter its Claude AI models during active warfighting operations if military use conflicts with corporate policy.
“AI systems are acutely vulnerable to manipulation,” the filing states, asserting that allowing Anthropic continued access to Department of Defense (DoD) infrastructure introduces “unacceptable risk” into the military supply chain. The government maintains that the First Amendment does not grant a private entity the right to impose its own contract terms on federal agencies while accessing sensitive warfighting systems.
Billions in Revenue at Stake Amid Legal Deadlock
Anthropic is currently challenging the Trump administration’s authority to apply these restrictive labels, arguing the measures constitute illegal retaliation. The financial implications are massive; the AI firm stands to lose billions of dollars in projected revenue this year if it remains excluded from defense procurement. Anthropic has petitioned the court to allow it to resume normal business operations until the litigation concludes.
Judge Rita Lin, presiding over the San Francisco case, has scheduled a critical hearing for next Tuesday to decide on Anthropic’s request for a reprieve. While the DOJ dismisses the company’s financial concerns as “legally insufficient” to prove irreparable injury, Anthropic maintains that the administration has overstepped its legal bounds.
The Ethical Divide: Surveillance and Autonomy
The friction stems from Anthropic’s refusal to allow its Claude AI models to facilitate broad surveillance of American citizens or power fully autonomous weaponry. The company maintains that its technology is not yet reliable enough for such high-stakes applications. However, the Pentagon views this stance as a sign of a “rogue” contractor that cannot be relied upon in the heat of combat.
The Race to Replace Claude AI
The Department of Defense is already moving to end its reliance on Anthropic. Currently, Claude is the only AI model cleared for use on the department’s classified systems—often integrated through Palantir’s data analysis software. Because the Pentagon cannot “simply flip a switch” during ongoing combat operations, it is aggressively working to deploy alternative systems from competitors, including:
- Google: Accelerating deployment of Gemini-based tools.
- OpenAI: Integrating GPT-4o for strategic analysis.
- xAI: Exploring Grok’s utility in defense logistics.
Despite the government’s hardline stance, Anthropic has garnered support from a diverse coalition. Microsoft, AI researchers, federal labor unions, and former military leaders have all filed briefs in support of the company. To date, no third-party groups have filed in support of the DOJ’s position. Anthropic has until Friday to submit its counter-response before the parties meet in court.
