Anthropic Denies AI ‘Kill Switch’ Amid Pentagon Ban

Anthropic is aggressively challenging a Department of Defense ban, and the AI firm’s leadership has formally denied that the company possesses a “kill switch” or any other mechanism to sabotage military operations powered by its Claude models. The legal battle follows Defense Secretary Pete Hegseth’s recent designation of the startup as a supply-chain risk—a move that effectively bars the Pentagon and its contractors from utilizing Anthropic’s technology in the coming months.

Debunking the Remote Sabotage Allegations

Thiyagu Ramasamy, Anthropic’s head of public sector, explicitly rejected claims that the company could interfere with national security tools. According to Ramasamy, Anthropic has never possessed the capability to disable Claude, alter its core functionality, or jeopardize active military operations. He emphasized that the company lacks the necessary access to modify model behavior once deployed within government systems.

“Anthropic does not maintain any back door or remote ‘kill switch,’” Ramasamy stated in a formal filing. He clarified that personnel cannot log into Department of War (DoW) systems to disable models during live operations, noting that the underlying technology simply does not support such external interference.

A High-Stakes Legal Battle in San Francisco

The conflict has escalated into the judicial system, with Anthropic filing two lawsuits challenging the constitutionality of the Pentagon’s ban. The company is currently seeking an emergency order to halt the restriction, as customers have already begun terminating existing contracts. A federal district court in San Francisco has scheduled a pivotal hearing for March 24, which could result in a temporary reversal of the government’s decision.

The Department of Defense remains steadfast in its position. Government attorneys argued in recent filings that the military is not obligated to accept the risk that “critical military systems will be jeopardized at pivotal moments” for national defense. This stance stems from fears that Anthropic could theoretically push harmful updates or terminate access if the company disagreed with specific military applications.

The Breakdown of Contractual Safeguards

Internal reports indicate that the Pentagon has utilized Claude for high-level tasks, including data analysis, memo drafting, and the generation of strategic battle plans. To alleviate government concerns, Anthropic executives proposed specific contractual guarantees. Sarah Heck, Anthropic’s head of policy, revealed that the company offered a license agreement on March 4 that would explicitly waive any right to veto lawful military operational decisions.

Furthermore, Anthropic expressed a willingness to accept language addressing concerns about “deadly strikes” conducted without human oversight. Despite these concessions, negotiations between the AI lab and defense officials ultimately collapsed, leading to the current stalemate and the “supply-chain risk” label.

Cloud Infrastructure and Data Privacy

Addressing the technical logistics of updates, Ramasamy noted that any modifications to the software require the dual approval of the government and the cloud service provider. While not named directly in the filings, this refers to Amazon Web Services (AWS), which hosts the infrastructure. Ramasamy also confirmed that Anthropic remains blind to the specific prompts or sensitive data entered into Claude by military personnel.

For now, the Department of Defense is implementing alternative measures to secure its AI pipeline. Court documents show the Pentagon is working closely with third-party cloud providers to ensure that Anthropic leadership cannot execute unilateral changes to the Claude systems currently integrated into the military’s digital framework.