Anthropic Challenges US Department Of War's Supply Chain Risk Designation

In a week of unprecedented friction between Anthropic, the maker of Claude AI, and the Pentagon, the federal government has formally flagged the AI company as a national security threat, designating it a "supply-chain risk." The label puts Anthropic in the same category as foreign adversaries like Huawei, all because the company is at odds with how the government plans to use AI for surveillance and weapons.

It's no secret that Anthropic CEO Dario Amodei has refused to lift the guardrails that prevent his technology from being used for mass domestic surveillance or in autonomous weapons systems. In an interview with CBS News, Amodei argued that giving the military unrestricted access to these capabilities would be "contrary to American values." In response, the Trump administration and Defense Secretary Pete Hegseth branded the company "radical left" and issued a 5:00 p.m. ultimatum last Friday: grant access for any lawful use or face a total government ban.

Secretary of War Pete Hegseth finishes the installation of a War Department plaque in front of the Pentagon

The timing of the blacklisting is also somewhat ironic. Even as the Department of War (the administration's rebranded name for the DoD) was moving to sever ties, U.S. Central Command was reportedly using Claude-powered systems to coordinate a massive wave of strikes against Iranian targets. Through a partnership with the data-analytics firm Palantir, Claude had become a central component of the Maven targeting system. During the first 24 hours of "Operation Epic Fury," the AI reportedly suggested hundreds of targets and prioritized coordinates, collapsing what used to be weeks of strategic planning into real-time execution. Pentagon officials have admitted that while the ban is effective immediately, it may take six months to fully disentangle the AI from their classified networks. How convenient.

Adding fuel to the fire, the supply-chain risk designation comes on the heels of reports of a massive security breach involving Anthropic's own tools. A threat actor allegedly used the company's AI coding assistant, Claude Code, to bypass safeguards and execute one of the largest government data breaches in history, compromising nearly 195 million identities across Mexican federal agencies. While Anthropic maintains these are architectural vulnerabilities it is working to fix, the Pentagon has used the incident to bolster its claim that the company is a liability.

The vacuum left by Anthropic was filled almost instantly by OpenAI. Within hours of the ban, CEO Sam Altman, who last year predicted that smartphones were coming to an end in favor of AI wearables, announced a major deal to supply the Pentagon with AI for classified networks. Meanwhile, as Anthropic prepares to challenge the government's designation in court, its consumer popularity has surged; the Claude app recently hit the top of the charts as users rally behind the company's ethical stance.

Aaron Leong

Tech enthusiast, YouTuber, engineer, rock climber, family guy. 'Nuff said.