Vladislav Osipov

The Pentagon may cut ties with Anthropic. What's the risk?

The Pentagon is close to severing its partnership with AI startup Anthropic and may designate the company a supply chain risk, Axios reported. The Defense Department is unhappy with Anthropic's restrictions on the use of its technology for military purposes, the publication explains.

Details

The Pentagon has been in talks with the startup for several months about how exactly the military can use its Claude tool, Axios writes, citing a source. Anthropic prohibits its AI from being used for mass surveillance of citizens or for developing weapons that can operate without human involvement, the article says. The Pentagon considers this an overly restrictive approach and points to numerous gray areas that would make working within such a framework impractical. In talks with Anthropic and three other major AI labs (OpenAI, Google and xAI), Pentagon officials insist that the military should be able to use their tools "for all legitimate purposes," Axios writes.

"The Department of War's relationship with Anthropic is under review," Pentagon spokesman Sean Parnell told Axios. - Our country demands that our partners be prepared to help our warfighters win in any conflict. Ultimately, this is about our service members and the safety of the American people."

What's the risk

If the AI startup is designated a supply chain risk, any company wanting to do business with the military will have to cut ties with Anthropic, Axios reports, citing a senior Pentagon official. "It's going to be a hell of a painful unraveling, and we'll make sure they pay the price for forcing us to do this," Axios quoted him as saying. The outlet notes that such a designation is usually applied to foreign adversaries.

An Anthropic spokesperson told Axios that the company is in "productive good faith discussions" with the Pentagon and is committed to using AI for national security.

The DOD has already deeply integrated Anthropic's software

Last year, Anthropic entered into a two-year agreement with the Pentagon covering the Claude Gov and Claude for Enterprise prototype models. Negotiations with Anthropic could set the tone for discussions with OpenAI, Google and xAI, whose models are not yet being used to handle classified material, Axios notes. Right now, Anthropic's Claude is the only AI model available in classified military systems, and Pentagon officials praise the product's capabilities, the publication emphasizes.

According to Axios, Anthropic's software is already deeply integrated into military systems, and Claude was used in the January raid against Maduro.

This article was AI-translated and verified by a human editor
