Anthropic's AI Dilemma: A Self-Made Crisis?

Updated: March 2, 2026

Written by Mike Langley, Managing Editor

Edited by Esther Mendoza, Head of Content, Investing & Taxes
On Friday afternoon, as I began an interview, a news alert appeared on my screen: the Trump administration announced it was severing ties with Anthropic, a San Francisco-based AI company founded by Dario Amodei in 2021. Defense Secretary Pete Hegseth quickly cited a national security law to blacklist Anthropic from Pentagon contracts, following Amodei's refusal to allow Anthropic's AI to be used for mass surveillance of U.S. citizens or for autonomous armed drones capable of selecting and eliminating targets without human intervention. The move could cost Anthropic a $200 million contract and exclude it from future defense work. President Trump posted on Truth Social, instructing federal agencies to "immediately cease all use of Anthropic technology." In response, Anthropic has declared its intention to challenge the Pentagon's decision in court.

For nearly a decade, Max Tegmark, an MIT physicist and founder of the Future of Life Institute, has cautioned that AI technology is advancing faster than the world's capacity to regulate it. In 2023, he helped organize an open letter, signed by more than 33,000 people, including Elon Musk, calling for a pause in advanced AI development. Tegmark views Anthropic's current predicament as largely self-inflicted, the product of an industry-wide resistance to regulation. Companies like Anthropic, OpenAI, and Google DeepMind have long claimed to self-regulate, yet Anthropic recently abandoned a key safety pledge: a promise not to release increasingly powerful AI without confidence in its safety.

In the absence of clear regulations, these companies find themselves vulnerable, Tegmark argues. During a recent interview, he expressed his thoughts on Anthropic's situation, noting how the initial excitement about AI's potential to advance healthcare and strengthen America has shifted to governmental disputes over its use for surveillance and autonomous weaponry.

Anthropic has built its brand on being a safety-first AI company, despite collaborating with defense and intelligence sectors since at least 2024. Tegmark finds this contradictory, suggesting that while companies claim to prioritize safety, none have actively supported binding regulations akin to those in other industries. They have, in fact, backtracked on their promises: Google abandoned its "Don't be evil" motto, OpenAI removed "safety" from its mission statement, and Anthropic dropped its safety commitment.

These companies have consistently lobbied against AI regulation, advocating for self-governance instead. Tegmark points out the irony that sandwich shops face more regulation than AI systems do, because companies resisted turning voluntary safety commitments into enforceable laws. He argues this regulatory vacuum could lead to disastrous outcomes, comparable to historical episodes of corporate malpractice in other industries.

The AI industry's standard justification is the competitive race with China. Tegmark argues this reasoning is flawed: China itself is moving to restrict AI developments it perceives as harmful. He also challenges the assumption that AI superintelligence would be a strategic asset, suggesting instead that it poses a national security threat if it cannot be controlled.

The rapid pace of AI development has surprised many experts, with systems advancing faster than anticipated. Tegmark warns that this swift progression could soon impact job markets. As Anthropic faces its current challenges, the response from other AI giants remains uncertain. OpenAI's Sam Altman has publicly supported Anthropic's stance, while Google and xAI have yet to comment.

Despite the current turmoil, Tegmark sees potential for a positive outcome if AI companies are subjected to the same regulations as other industries, requiring proof of safety before deploying powerful technologies. Such measures could usher in a new era of AI innovation free from existential concerns, although this is not the current trajectory.