Anna Krasnova

Advances in artificial intelligence pose a "serious challenge to civilization," says Anthropic CEO / Photo: Chance Yeh/Getty Images for HubSpot

The CEO of AI startup Anthropic, Dario Amodei, published a 19,000-word essay in which he warned that the development of artificial intelligence poses a "serious challenge to civilization" and it's "time for humanity to wake up" and realize this.

According to the creator of Claude, superintelligent AI could emerge within the next few years, ushering in a "technological adolescence" - a period in which humanity gains unimaginable power. "It is unclear whether our social, political, and technological systems have the maturity to command it," Amodei writes.

Oninvest read his essay and selected key quotes about how AI could change global security and economics - and how to meet these challenges.

Risk of uncontrolled progress

The choppiness of breakthroughs and chatter about AI "hitting a wall" obscure the smooth, inexorable growth of its capabilities. At Anthropic, neural networks already write a significant portion of the code, which is accelerating the creation of next-generation systems. If this exponential continues, within just a couple of years AI will surpass humans at almost everything.

We may be just one or two years away from the point where the current generation of AI will autonomously build the next. The process is already underway, and it's only going to get faster from here. Watching the progress of the last five years at Anthropic, I can literally feel that clock ticking.

Simply shrugging and saying "there's nothing to worry about" would be foolish. Yet that is exactly how many politicians in the US view the rapid progress of AI. Some deny the risks outright, while others stay distracted by old, tired topics. It's time for humanity to wake up.

I believe we can deal with these threats if we act decisively and carefully. I even believe our chances are decent, and that a much better world lies ahead of us. But we must understand that this is a major challenge for civilization as a whole.

Risk of a digital revolt

Over the past few years, we have gathered ample evidence that AI is unpredictable and poorly controlled. We've seen it all: obsession, sycophancy, laziness, deception, blackmail, and scheming. Models even "cheat," hacking their software environment to make tasks easier for themselves.

The situation is much like the way people grow up. A child is raised on basic values such as "do no harm." Most people internalize them, but there is always a chance that something goes wrong. We fear that an AI could become a super-powered version of such a person simply because of a glitch in its monstrously complex training process.

After working through hundreds of large-scale training scenarios in which a thirst for power helps it achieve results, the model "learns its lesson." As a result, it either develops a craving for dominance or comes to see seizing control as the most logical way to accomplish any task. Meanwhile, the neural network may deliberately play along to hide its intentions.

One of our main innovations - one that other companies have already started to adopt - is "constitutional AI": when we set a model's character, we use a written set of principles and values. It is like a letter from a deceased parent that a child opens upon reaching adulthood.

In parallel, we try to look "under the hood" of the AI to understand the system's logic. We look for any signs of deception, scheming, or power-seeking, and we want to know whether the system resorts to trickery and pretense during evaluations. If the constitution describes the character we want the AI to have, these checks are a way to see whether that character has actually taken root.

Risk of destroying the world

It used to be that a deranged loner could contemplate killing millions but lacked the stamina or knowledge. What scares me is that AI can take an ordinary person and walk them by the hand through an extraordinarily complex process. With a "genius in his pocket," anyone can become the equivalent of a doctor of virology, able to design, synthesize, and deploy a biological weapon step by step.

Biology is my greatest fear. Its destructive potential is enormous, and it is incredibly difficult to defend against such a threat. But all of this applies equally to other risks: cyberattacks, chemical weapons or nuclear technology.

AI companies can put "safeguards" on models so they don't help create bioweapons. But any model can be jailbroken, so we need a second line of defense. We've implemented a dedicated classifier that blocks any bioweapons-related content. This has increased our operating costs by almost 5% - a significant hit to our profits - but we do it deliberately. To be fair, some other AI companies have implemented such classifiers as well. But not all of them have.

We may have to negotiate some rules with geopolitical adversaries. I am usually skeptical of international cooperation on AI, but here we have a chance. Even dictators don't want large-scale bioterrorist attacks.

Risk of dictatorship

Authoritarian regimes can use AI for surveillance and repression in ways that make them nearly impossible to overthrow. Countries can use an AI advantage to subjugate others. A swarm of millions of drones under the control of a powerful AI could become an invincible army, able to defeat any foreign enemy and crush any protest at home. All of this points to a frightening prospect: global totalitarian dictatorship.

AI systems woven into our daily lives will be able to brainwash us for years, instilling any ideology. If we worry about TikTok's influence today, imagine an AI agent that has been stealthily shaping your opinions for years. This is a weapon of an entirely different order.

It may sound strange coming from the head of such a company, but we ourselves are the next level of risk. We have the expertise, the hardware, and access to millions of users. We could sway people's minds in ways that bypass any law.

AI companies need close oversight. They should publicly declare - perhaps even write into their charters - that they will not create private armies, hand computing resources to unaccountable individuals, or use their products to manipulate public opinion.

We need to slow the autocracies' progress toward powerful AI by a few years, depriving them of the necessary resources - above all, advanced chips and equipment. This would give the democracies a head start that could be "spent" on developing systems cautiously, paying maximum attention to risks while still maintaining leadership.

AI must strengthen democracies so they can stand up to autocrats. That is why Anthropic believes it is important to cooperate with the intelligence and defense agencies of the US and its allies. Ultimately, the only way to answer the threat of autocracy is to surpass it in military might.

We can use AI for national defense for any purpose except those that would make us resemble our autocratic adversaries. Mass surveillance and domestic state propaganda are "red lines" that must not be crossed.

Risk of mass unemployment

In 2025, I warned that AI could replace up to half of entry-level office workers within one to five years, even as it accelerates scientific progress and the economy.

If AI hits finance, consulting, and law all at once, people will have nowhere to "flow" to. Farmers could once move to factories - a new but comprehensible kind of work. AI is not replacing a specific profession; it is replacing human labor as such.

The problem is that the blow falls not on particular professions but on a cognitive level. Those with below-average intelligence risk joining a technological "underclass" on poverty wages. If computers have already increased inequality, AI is capable of pushing that gap to an extreme.

Businesses have a choice in how exactly they deploy AI. They can go the lean route - do the same work with fewer people - or choose innovation, achieving multiples of growth with the same staff. The market will inevitably try both options, but we can nudge companies toward the growth path.

In the short term, retraining and staff rotation will help avoid mass layoffs. Later, when productivity growth delivers a huge inflow of global wealth, it will become possible to pay employees even when their labor no longer brings traditional value.

A problem of this magnitude cannot be solved without macroeconomic measures. When the economic "pie" is huge but distributed extremely unevenly, the only logical response is progressive taxation. This could mean general taxes or levies aimed specifically at AI companies.

Risk of political-economic symbiosis

What scares me is how AI money is merging with politics. Data centers already generate a significant share of American GDP. The interests of the technology giants and the government are tied in a dangerous knot: companies are afraid to criticize the authorities, while the authorities support complete deregulation of the industry. The AI industry needs a transparent relationship with the state - a dialogue about the rules of the game, not political loyalty.

A public backlash against AI is building, but the anger is missing the target. People fixate on secondary issues - such as data centers' water consumption - and demand bans that change nothing. The real challenge is different: AI must be controlled by society rather than serve a narrow alliance of politicians and corporations. That is what the public debate should focus on.

Macroeconomic measures and a new philanthropy can restore the balance. Titans of the past such as Rockefeller and Carnegie understood that their success would have been impossible without the contribution of the entire nation, and they sought to repay that debt. Those at the forefront of today's AI boom must be prepared to part not only with their money but also with their power.

Risk of human degradation

A world inhabited by billions of minds that surpass us in everything is a very uncomfortable place. Even if AI does not go to war against us or become a tool of tyrants, everything may still collapse under market forces or private interests. The first signals are already here: psychoses, suicides, and a frightening attachment to algorithms. What if a powerful intelligence founds a religion and recruits millions of adherents? What if humanity falls into total dependence on communicating with neural networks?

Will humans find a place in a world of powerful AI? It all comes down to inner mindset. The meaning of life is not merely being better than everyone else; we can find it in creativity, in stories, and in doing what we love. We will have to break the link between income and self-worth. It is a painful transition, and there is a risk we won't make it.

Whether we survive this test and build a beautiful society of the future, or perish in chaos and slavery, depends only on our spirit, our will, and our soul. Despite all the obstacles, I believe humanity has the strength to pass this test.

This article was AI-translated and verified by a human editor
