
In a significant development, OpenAI has inked a deal with the United States Department of Defense to deploy its artificial intelligence models on classified networks, with all legal applications permitted under the agreement. The partnership was finalized shortly after President Trump directed federal agencies to discontinue the use of technology from Anthropic, a competing AI firm.
While Anthropic refused to allow its technology to be used for mass surveillance and autonomous weapons, OpenAI's CEO, Sam Altman, agreed to permit all lawful uses, while ensuring that technical safeguards were written into the contract. According to Altman, the Department of Defense showed a "deep respect for safety" and expressed a desire to partner for the best possible outcomes.
The timing of OpenAI's agreement is notable, as it comes on the heels of failed negotiations between Anthropic and the Pentagon. Anthropic was negotiating a $200 million contract but refused to relax its restrictions on the use of its AI for surveillance and weaponry, which the Pentagon deemed unacceptable. The deadline for an agreement passed without resolution, prompting Defense Secretary Pete Hegseth to label Anthropic a "supply chain risk to national security."
President Trump criticized Anthropic on Truth Social, declaring, "WE will decide the fate of our country — NOT some out-of-control, Radical Left AI company." In contrast, OpenAI adopted a different strategy, negotiating terms that allowed for all legal uses while incorporating safeguards. Altman stated that OpenAI's models would operate exclusively on cloud networks, avoiding deployment in edge environments like autonomous weapon systems.
OpenAI has committed to embedding its engineers alongside government personnel on classified projects to ensure system security. Altman also urged the Pentagon to offer similar terms to other AI companies, suggesting these conditions could be adopted industry-wide.
OpenAI's approach appears to have secured a deal without engaging in the political contention that undermined Anthropic's efforts. Altman emphasized the importance of doing the "right thing" rather than taking an easy path that might appear strong but lacks sincerity.
The distinctions between OpenAI and Anthropic's stances could have significant implications. Anthropic insisted on "no fully autonomous weapons without human oversight," requiring active human involvement before deploying weapons. In contrast, Altman speaks of "human responsibility for the use of force," a more flexible concept that could apply after deployment.
Anthropic also argued that current AI models are not reliable enough for use in fully autonomous weapons, warning that deploying them could endanger both military personnel and civilians. Its stance on domestic mass surveillance likewise raises questions about how far AI models should go in analyzing data that has already been collected.
In response to the ban, Anthropic plans to challenge its "supply chain risk" designation in court, maintaining its opposition to mass surveillance and autonomous weapons. The company stated, "No amount of intimidation or punishment from the Department of War will change our position."
This situation highlights the diverging paths of two major AI companies in relation to government contracts and national security considerations.