Anthropic Stands Its Ground Against Pentagon on AI Usage Policies as Deadline Nears

Updated: February 28, 2026

Written by Esther Mendoza

Head of Content, Investing & Taxes

Edited by Mike Langley

Managing Editor

Anthropic's CEO, Dario Amodei, has reiterated the company's unwavering stance against the Pentagon's use of its AI technology for mass domestic surveillance and fully autonomous weapons systems. As the deadline approaches, Amodei emphasizes that current AI systems, while integrated into defense and intelligence operations, are not reliable enough to replace human decision-making entirely.

The Pentagon, however, insists that existing laws and guidelines are adequate and has declined to provide further written assurances. In a public escalation, Pentagon Chief Technology Officer Emil Michael criticized Amodei, accusing him of dishonesty and arrogance.

Amodei highlights Anthropic's pioneering role in deploying AI models within government and national security sectors. Despite widespread use of their AI model Claude in intelligence and mission planning, Amodei maintains that removing humans from critical military decision loops is premature due to AI's current limitations. Anthropic has offered to collaborate with the Pentagon on enhancing AI reliability, but these overtures have been declined.

On the issue of domestic surveillance, Amodei warns of AI's potential to compile detailed profiles from disparate data points on a large scale. He also points out the contradiction in the Pentagon's stance: labeling Anthropic a security risk while simultaneously invoking the Defense Production Act to deem it essential to national security.

Despite the looming deadline, Anthropic remains resolute. The company says it is prepared to ensure a seamless transition should the Pentagon decide to stop using its systems. Anthropic also asserts that it has sacrificed substantial revenue by severing ties with Chinese firms connected to the Communist Party, and it advocates strict chip export controls, which also strategically disadvantage Chinese competitors.

In response to the impasse, Pentagon technology chief Emil Michael says the military has offered significant concessions, including acknowledging existing laws against domestic surveillance and offering Anthropic a seat on an AI ethics board. Anthropic considers these concessions inadequate.

According to Michael, the Pentagon's refusal to provide explicit guarantees against using Anthropic's AI for mass surveillance or autonomous weapons is justified because existing laws and policies already address those concerns. He stresses the importance of preparing for potential AI advances by rival nations such as China.

As the deadline approaches, legal expert Alan Z. Rozenshtein explains the potential implications of the Defense Production Act. The law allows the government to compel companies to fulfill national defense needs, but the scope of this authority depends on the specific demands made by the Pentagon.

If the Pentagon demands the use of Claude without its current usage restrictions, it might have a strong case. However, if it requires Anthropic to retrain Claude or remove safety features, the legal ground becomes more tenuous and could raise First Amendment issues.

Rozenshtein echoes Amodei's point about the contradiction in the Pentagon's approach, noting that treating Anthropic as both a security risk and an essential defense asset is inconsistent. If Anthropic resists compliance, it risks facing legal consequences, though it's likely the company would challenge any such order in court.