
OpenAI's recent collaboration with the Department of Defense has sparked significant discussion, with CEO Sam Altman acknowledging that the deal was "definitely rushed" and conceding that it could look problematic. It comes after Anthropic's negotiations with the Pentagon fell apart, prompting President Donald Trump to instruct federal agencies to phase out Anthropic's technology within six months. Secretary of Defense Pete Hegseth labeled Anthropic a supply-chain risk, setting the stage for OpenAI's swift announcement of its own agreement to deploy models in classified settings.
Both Anthropic and OpenAI have drawn firm boundaries against the use of their technologies in autonomous weaponry and broad domestic surveillance. Still, questions have arisen over OpenAI's transparency about its safety measures and how it succeeded where Anthropic did not. In response, OpenAI published a detailed blog post outlining its position. The post stated that OpenAI's models would not be used for mass domestic surveillance, autonomous weapon systems, or high-stakes automated decisions such as social credit systems. Unlike other AI companies that may have weakened their safety protocols, OpenAI claimed to maintain robust safeguards through a comprehensive, layered approach.
The blog emphasized, "We retain full discretion over our safety stack, deploy via cloud, involve cleared OpenAI personnel, and uphold strong contractual protections," alongside existing legal protections in the U.S.
Following the blog's publication, Techdirt's Mike Masnick critiqued the agreement, arguing that its deference to Executive Order 12333 could still enable domestic surveillance: that order, he noted, allows the NSA to intercept communications outside the U.S. even when they include information on U.S. persons.
Katrina Mulligan, OpenAI's head of national security partnerships, addressed these concerns on LinkedIn, stating that the debate often overlooks the importance of deployment architecture over contract language. She explained that limiting deployment to a cloud API ensures the models cannot be directly integrated into weapons or operational hardware.
On the social media platform X, Altman elaborated on the rushed nature of the deal and acknowledged the backlash OpenAI faced, noting that Anthropic's Claude had surpassed OpenAI's ChatGPT in Apple's App Store rankings. Altman justified the agreement as an attempt to de-escalate tensions between the Department of Defense and the tech industry, expressing hope that the deal would ultimately prove beneficial. "If we are right," Altman said, "this could lead to a de-escalation and show our commitment to helping the industry, despite the initial criticism."