OpenAI Negotiates Major Defense Pact Following Anthropic's Government Ban
San Francisco, Saturday, 28 February 2026.
OpenAI is securing a defense agreement as the Pentagon reportedly accepts safety "red lines" from Sam Altman that it previously rejected as "woke" when proposed by banned rival Anthropic.
A Strategic Pivot in Defense Alliances
In a rapid realignment of national security partnerships, OpenAI is finalizing a major agreement with the U.S. Department of War, effectively stepping into the vacuum left by its rival. This development comes immediately after the Pentagon designated Anthropic a supply chain risk, severing ties over ethical disputes regarding autonomous weaponry [5]. While negotiations are still ongoing and no final contract had been signed as of February 25, OpenAI CEO Sam Altman confirmed to employees that the government is willing to accept the company's specific safety protocols [1]. That willingness marks a distinct shift in the Pentagon's posture: similar stipulations proposed by Anthropic were previously rejected, leading to the cancellation of that company's federal contracts [1].
The ‘Red Lines’ Double Standard
The core of this new arrangement rests on what the industry terms "red lines" — technical and ethical boundaries governing how AI may be applied. OpenAI has stipulated that its models must not be used for autonomous lethal weapons, domestic mass surveillance, or critical decision-making without human oversight [1]. According to Altman, the Department of War has agreed to reflect these principles in both law and policy within the new agreement [2]. This acceptance highlights a stark inconsistency in the administration's approach: defense officials had previously disparaged identical limitations proposed by Anthropic as philosophical and "woke" [2]. Under the emerging terms, OpenAI will retain control over technical safeguards and restrict model deployment strictly to cloud environments, explicitly avoiding "edge systems" that could operate independently in the field [1][2].
Executive Directive and Legal Fallout
The consolidation of OpenAI's position coincides with an aggressive executive crackdown on its competitor. On Friday, February 27, President Donald Trump issued a directive ordering every federal agency to immediately cease all use of Anthropic's technology, declaring, "We don't need it, we don't want it, and will not do business with them again" [1][4]. While most agencies must halt usage immediately, the Pentagon has been granted a six-month phase-out period to transition away from Anthropic's models embedded in military platforms [4]. In response to the blacklisting and the administration's refusal to honor its usage limitations, Anthropic announced on Friday that it intends to sue the Pentagon [2]. The firm faces threats of major civil and criminal penalties if it fails to assist during the mandated phase-out period [4].
Internal Dissent and Market Implications
Despite the lucrative prospects of the defense deal, the situation has generated significant friction within OpenAI. In a show of industry solidarity, 70 OpenAI staff members signed an open letter titled "We Will Not Be Divided" in support of their counterparts at Anthropic [3]. Altman addressed these concerns in a February 26 memo, stating that his goal is to "de-escalate" tensions and arguing that engaging with the Pentagon allows OpenAI to enforce safety norms from inside the room [2][3]. Meanwhile, the market landscape is shifting rapidly: as OpenAI secures its foothold, analysts suggest Elon Musk's Grok chatbot may also benefit from Anthropic's exclusion [4]. OpenAI previously secured a $200 million contract with the Department of Defense in 2025, but this new agreement would significantly deepen its integration into national security infrastructure [3].