Ivan Lapshin

Yuliya Kotova

Anthropic CEO Dario Amodei believes that upheaval from the expanding role of AI in the future is inevitable / Photo: Getty Images for HubSpot / Chance Yeh

The development of an advanced Mythos model capable of overcoming existing cyber defenses has only strengthened Anthropic CEO Dario Amodei's conviction that AI must be regulated. Amodei gave an interview to the Financial Times amid anxiety among experts and investors, and ahead of Anthropic's IPO. Oninvest recounts his statements.

On Mythos threats and the regulation of AI

"I think we should look at regulating AI in the same way that cars and airplanes are regulated," Amodei says. "Everyone realizes they have tremendous economic value, but they have to be built very carefully. If they're not built correctly, they can kill."

Anthropic claims that its advanced Mythos model has identified thousands of previously unknown vulnerabilities across all existing operating systems and web browsers, some of which have existed for 27 years. The company has decided not to release the model to the public just yet: Mythos is available only to a limited number of partners as part of Project Glasswing, whose goal is to identify and remediate potential threats in advance. Amodei suspects that open-source models and Chinese developers will be able to replicate Mythos's capabilities within six to twelve months.

Some analysts say the development of Mythos has given Anthropic state-level capabilities, the Financial Times notes. A dangerous situation would arise if any single company or country controlled the technology alone, which is why Anthropic is working closely with others, Amodei says. As the next step in strengthening cybersecurity, he sees the creation of a system of mandatory technology assessment. He suggests that a third-party organization, such as the industry-backed nonprofit Frontier Model Forum, could set standards comparable to automobile standards: "Does the car have brakes, does it have airbags, does it have seat belts?"

On shocks to the labor market

Amodei insists that AI companies need to acknowledge the economic shocks the technology will cause. In his view, the industry must deliver positive effects from AI strong enough to outweigh those shocks; until that happens, people will remain skeptical of the technology. His principle is that AI can "only spread at the speed of trust."

In January, in his Adolescence of Technology essay, Amodei warned that AI could eliminate about 50 percent of all entry-level office jobs within five years.

On the AI bubble

Despite talk that the AI bubble is about to burst, Amodei is convinced that the rise of the "Big Computing Coma," as he calls it, is far from over.

"A rainbow has no end. There is only a rainbow," he says. "We don't see any signs of slowing down."

On his message to the rich

We are living in a new "gilded age," Amodei says, referring to the eponymous period of rapid growth in the US economy after the Civil War, in the 1870s and 1880s. He says a small number of "incredibly lucky" billionaires (himself included) have been able to amass vast fortunes and therefore bear a greater social responsibility. The head of Anthropic is particularly critical of those tech billionaires who bristle at "unfair" criticism in the press and then "buy the judge" by acquiring their own media outlets. Amodei declined to name names, the FT notes.

"We have an obligation to give back to society unselfishly. And society doesn't have an obligation to praise us for that," he says. "The press could say I'm torturing little puppies and I'd still have an obligation to society."

His tone irritates many critics in Silicon Valley, who note that his principles happen to align with Anthropic's commercial interests, the FT writes. The fierce competition of shareholder capitalism will impose its own inexorable logic as well, the publication adds.

This article was AI-translated and verified by a human editor