Anthropic Blacklisted from Federal Use Following Refusal to Lift AI Safety Restrictions

2026-02-28 companies

San Francisco, Saturday, 28 February 2026.
President Trump ordered a federal ban on Anthropic after the firm refused to lift safeguards against mass surveillance, prompting the Pentagon to designate the company a “supply chain risk.”

Escalation to Executive Ban

The standoff between the White House and the artificial intelligence firm Anthropic culminated on Friday, February 27, 2026, with President Donald Trump issuing a directive for all federal agencies to immediately cease using the company’s technology [6]. This executive order arrived alongside a move by Defense Secretary Pete Hegseth to formally designate Anthropic as a “supply chain risk” to national security, effectively blacklisting the firm from future military contracts [6]. The administration’s aggressive response followed Anthropic CEO Dario Amodei’s refusal to accept Pentagon demands that would have removed restrictions on the use of its AI models for mass domestic surveillance and autonomous weapons [1][4]. The Department of Defense has initiated a six-month phaseout period to transition away from Anthropic’s services to other providers [6].

The Core Dispute: Ethical Safeguards vs. Military Necessity

At the heart of this conflict is a fundamental disagreement over the governance of frontier AI systems. On Thursday, February 26, Amodei stated that the company could not "in good conscience accede" to the Pentagon's requirements, citing new contract language that failed to rule out the use of its Claude model for mass surveillance of Americans or fully autonomous weaponry [1][3]. While Pentagon spokesman Sean Parnell asserted that the military has "no interest" in illegal surveillance or human-free autonomous weapons, the Department insisted on contract terms permitting "all lawful purposes," which Anthropic viewed as a loophole that could bypass its safety protocols [3][5]. Amodei argued that while the company supports foreign intelligence analysis, certain applications undermine democratic values or exceed the reliability of current technology [2][4].

Ultimatums and Contradictory Measures

The breakdown in negotiations was preceded by a week of intensifying pressure. Defense Secretary Hegseth had issued an ultimatum requiring Anthropic to comply by 17:01 ET on Friday, February 27, or face contract termination [4][7]. Tensions flared publicly when Emil Michael, the Under Secretary of Defense for Research and Engineering, took to social media on Thursday to accuse Amodei of having a "God-complex" and endangering national safety [5]. Complicating the legal landscape, the Pentagon threatened to invoke the Defense Production Act (DPA) to compel Anthropic to provide its services, a move that legal experts and Amodei noted sat uneasily alongside the simultaneous designation of the firm as a security risk [3][7]. As Amodei pointed out, one measure deemed the technology essential to national defense, while the other categorized the vendor as a threat comparable to foreign adversaries [3][4].

Financial Leverage and Industry Realignments

From a financial perspective, Anthropic appears well positioned to absorb the loss of government revenue. The disputed contract is valued at approximately $200 million [4][8], while the company's valuation reached $380 billion earlier this month [8], making the military contract a mere 0.053% of the company's implied market value. This capital cushion likely emboldened Anthropic to hold its ethical line despite the reputational pressure. Meanwhile, competitors are moving to fill the void: Elon Musk's xAI has already secured approval for classified use, and agreements with Google and OpenAI are reportedly close [7]. While OpenAI CEO Sam Altman expressed support for Anthropic's "red lines" on surveillance and autonomous weapons, he confirmed on Friday that his company is actively negotiating with the Pentagon to deploy its models within legal bounds [6][8].

Sources
