Venera Saifutdinova

Oninvest reporter
Anthropic rolls back some of its tough security restrictions for the sake of competition / Photo: Tigarto / Shutterstock


Anthropic, developer of the Claude AI models, whose February updates triggered a stock market sell-off, said it is relaxing key provisions of its security policy. The startup was previously known as one of the most security-focused players in the AI market, the Wall Street Journal notes.

Anthropic attributed the decision to the need to remain competitive with other AI labs.

Details

In 2023, as part of its Responsible Scaling Policy, the company promised to delay the development of potentially dangerous models, for example those that could aid in the creation or proliferation of biological and chemical weapons. However, in a statement published on Tuesday, February 24, Anthropic said it is updating the policy: the company will no longer automatically suspend development of an AI model if it believes it lacks a significant technological advantage over competitors.

"The regulatory environment has shifted to prioritize AI competitiveness and economic growth, while security-focused discussions have yet to gain meaningful traction at the federal level," the company said.

In its blog post on the changes, the company also said it will publish regular security goals and risk reports for its models, which will be evaluated by an "independent third party." This involves hiring third-party AI security experts who will have access to the full reports and publicly evaluate the company's reasoning and conclusions, Anthropic said.

Why Anthropic is doing this

The revision of Anthropic's security rules looks unexpected, as the company has long tried to stand out from its competitors precisely through its tough stance on security while competing with other players for leadership in AI, Bloomberg writes. Anthropic CEO Dario Amodei previously worked at OpenAI and left in part because of concerns that the company prioritized commercialization and speed at the expense of security, the agency points out.

However, amid the company's dispute with the U.S. Department of Defense, pressure on Anthropic may have intensified, The Wall Street Journal notes. According to the WSJ, Anthropic had a defense contract with the Pentagon that allowed it to use Claude for analytical work and data processing, including in classified environments. At the same time, Anthropic itself has previously stated that the tools of its Claude AI platform cannot be used for internal surveillance or in autonomous weapons systems. The Pentagon had been negotiating with the startup for several months over how exactly the military could use Claude, Axios sources said. In the end, the defense department gave Anthropic until February 27 to relax the rules or risk losing the contract, the WSJ writes.

Until recently, Anthropic was considered the only AI model developer allowed to work in classified environments, the newspaper notes. However, on February 23, it became known that the U.S. Department of Defense also entered into an agreement with xAI, allowing the use of Grok in classified networks. In addition, OpenAI and Google also have agreements with the Pentagon.

That said, an Anthropic spokeswoman told The Wall Street Journal that the company's current security policy review was unrelated to the AI startup's negotiations with the military establishment.

Context

Anthropic faces stiff competition from companies such as OpenAI, Elon Musk's xAI, and Google, which regularly release cutting-edge AI tools, the Wall Street Journal noted. Anthropic and OpenAI, meanwhile, are considering an IPO as early as this year, looking to capitalize on high investor interest in AI. Anthropic's recent valuation was $380 billion, while OpenAI's total valuation, based on the funding the company has raised, reaches more than $850 billion.

Still, security concerns continue to plague employees at Anthropic and other AI companies. In recent weeks, several AI researchers have left Anthropic and other tech companies, warning that security concerns are taking a back seat to attracting billions of dollars in investment and preparing for possible IPOs, the WSJ reports. OpenAI, the developer of ChatGPT, and Google, among others, are facing similar situations, according to the publication.

Anthropic was founded in 2021 after Dario Amodei and other co-founders left OpenAI, believing the ChatGPT developer was not paying enough attention to security issues.

In 2022, Amodei decided not to release an early version of Claude, fearing it would trigger a dangerous technology race. A few weeks later, however, OpenAI introduced ChatGPT, and Anthropic had to catch up with its competitor. Amodei later claimed that he did not regret the decision.

This article was AI-translated and verified by a human editor
