Anthropic Rejects Pentagon Ultimatum to Remove AI Ethical Safeguards

2026-02-28 companies

San Francisco, Saturday, 28 February 2026.
As of Saturday, February 28, 2026, the standoff between Anthropic and the Department of Defense has reached a critical impasse. CEO Dario Amodei has formally refused Defense Secretary Pete Hegseth’s ultimatum to strip ethical safeguards from the Claude AI model, specifically those governing mass surveillance and autonomous weaponry. Despite threats of contract termination and a “supply chain risk” designation, Anthropic remains the sole major AI holdout; competitors such as xAI have aligned with the military’s demands. Most notably, the Pentagon has threatened to invoke the Defense Production Act to compel compliance, a move legal experts and Amodei describe as paradoxical: it labels the technology a national security risk while simultaneously deeming it essential to military operations.

The Deadline Passes: Ethics vs. Access

The ultimatum delivered by Defense Secretary Pete Hegseth expired at 17:01 ET on Friday, February 27, 2026, with Anthropic refusing to yield to Pentagon demands [1][4]. At the core of the dispute is the military’s requirement for unrestricted use of the Claude AI model, which would effectively mandate removing safeguards against mass surveillance and fully autonomous weapons [1]. Anthropic CEO Dario Amodei formally rejected the order, stating on February 25 that the company “cannot in good conscience accede” to contract language that fails to prevent the misuse of its technology against American citizens [3]. The refusal triggered an aggressive response from the Department of Defense, which had warned it would designate the company a “supply chain risk” if it did not comply by the Friday deadline [1][2].

The Pentagon’s enforcement strategy is complex and arguably self-contradictory. Officials have threatened to invoke the Defense Production Act (DPA) to compel Anthropic to prioritize government orders and share its technology [2][4]. Legal experts and industry leaders, however, have highlighted the incoherence of labeling Anthropic a security risk while simultaneously declaring its technology so essential that the government must seize control of it [2]. Tech lawyer and former DOJ official Katie Sweeten characterized the approach as “the heaviest-handed way you can regulate a business,” noting the contradiction in using the DPA to take over a product deemed a national security threat [2]. Defense Undersecretary Emil Michael, meanwhile, personally escalated the rhetoric, accusing Amodei of having a “God-complex” and of attempting to control the U.S. military [1].

Diverging Paths in the AI Industry

Anthropic’s defiance isolates it from its primary competitors in the race for government defense contracts. While Anthropic defends its “responsible AI” policies, other major firms, including Google, OpenAI, and Elon Musk’s xAI, have moved to align with the Pentagon’s new “any lawful use” standard [4]. A Pentagon official confirmed that xAI has already agreed to allow its Grok model to be used in classified settings, with Google and OpenAI reportedly close to finalizing similar agreements [2]. The industry shift follows a memorandum issued by Hegseth on January 9, 2026, which declared that “social ideology” has no place in what he termed the “Department of War” [4]. The stakes are significant: the dispute jeopardizes Anthropic’s share of a $200 million contract signed in July 2025 [2][4].

Geopolitical Context and Political Fallout

The urgency of the Pentagon’s demands appears linked to operational realities; reports indicate Anthropic’s technology was utilized in the January 2026 raid to capture former Venezuelan President Nicolás Maduro, an event that heightened tensions regarding the operational use of AI [2][4]. The fallout from Hegseth’s ultimatum has drawn bipartisan criticism. Democratic Senators Elizabeth Warren and Andy Kim criticized the threat to use the DPA, while Republican Senator Thom Tillis called the Pentagon’s handling of the matter unprofessional [2][3]. As the United Nations prepares to convene a conference on March 2, 2026, to discuss lethal autonomous weapons systems, this standoff may define the future relationship between Silicon Valley’s ethical commitments and national security imperatives [4].
