Pentagon Designates Anthropic a Security Risk Amid Clash Over AI Ethics
Washington D.C., Saturday, 28 February 2026.
In an action typically reserved for foreign adversaries, the Department of War yesterday designated US-based Anthropic a supply chain risk following a stalemate over autonomous weapons and surveillance.
Unprecedented Restrictions on Defense Contractors
On Friday, February 27, 2026, Secretary of War Pete Hegseth formalized this status, declaring Anthropic a “supply chain risk to national security.” [2] The directive imposes strict limitations on the defense industrial base, mandating that, effective immediately, no partner, supplier, or contractor engaging with the United States military may conduct commercial activity with the AI firm. [2] The escalation follows a breakdown in negotiations in which the Pentagon demanded the use of Anthropic’s Claude model for “all lawful purposes,” a stipulation the company rejected over safety concerns regarding mass domestic surveillance and autonomous weaponry. [1][2]
Executive Action and Deadlines
Coinciding with the Secretary’s announcement, President Trump issued an order yesterday requiring all federal agencies to immediately cease using Anthropic’s AI systems. [2] While the Department of War and select agencies have been granted a transition window of up to six months to migrate off the platform, the deadline for Anthropic to capitulate to the Pentagon’s terms passed at 17:01 on February 27. [2] The administration’s aggressive stance signals a zero-tolerance policy for software providers that do not align fully with the Department’s operational requirements.
The Core of the Ethical Impasse
The schism stems from a fundamental disagreement over the implementation of safeguards in defense technology. Anthropic has explicitly refused to deploy its AI for mass domestic surveillance or fully autonomous weapons systems, arguing that such applications threaten fundamental rights and demand a reliability that current technology cannot guarantee. [1][2] Secretary Hegseth characterized these ethical guardrails as “sanctimonious,” asserting that the American military would not be “held hostage by the ideological whims of Big Tech.” [2] Conversely, Anthropic CEO Dario Amodei has maintained that certain AI applications undermine democratic values and fall outside the bounds of safe technological capabilities. [2]
Legal Battles and Industry Shifts
Anthropic, which has supported American warfighters since June 2024 and was the first frontier AI company to deploy models on classified networks, condemned the designation as legally unsound. [1] The company noted that supply chain risk designations have historically been reserved for foreign adversaries, not American companies, and has vowed to challenge the ruling in court. [1][2] While Anthropic interprets the relevant statute, 10 USC 3252, as applying strictly to Department of War contract work, Secretary Hegseth’s comments imply a broader commercial ban on any entity doing business with the military. [1]
Realignment of the AI Supply Chain
As the Pentagon moves to excise Anthropic from its infrastructure, it is simultaneously solidifying ties with competitors willing to meet its terms. Just as the restrictions on Anthropic were announced yesterday, OpenAI CEO Sam Altman confirmed that his company had reached an agreement to deploy its models within the Department of War’s classified networks. [2] This rapid realignment underscores the government’s intent to enforce strict control over its software supply chain, prioritizing vendors that accept the military’s definitions of lawful use over those adhering to independent ethical restrictions. [1][2]