
Anna Krasnova

The Optimism Trap: Howard Marks on where the market is misguided about the bubble

Investors shouldn't be asking whether or not the market is already overheated, Oaktree Capital co-founder Howard Marks writes in a new note titled Is It a Bubble? Instead of searching for an answer, he advises treating the overheating as a necessary historical milestone. According to Marks, a speculative bubble is the only way to fund the infrastructure of the future, even if investors have to pay for it with their own money.

Anatomy of mania: why novelty turns off caution

Marks writes that the very question "is there a market bubble?" contains a trap. Investors often perceive bubbles as a failure of the system, but Marks believes they are a natural market reaction to novelty. "Because there is no historical data to constrain investor imagination, the future of a new technology can seem limitless," Marks writes, "and this can justify valuations that go well beyond past norms - leading to asset prices that are not justified on the basis of predictable returns."

All bubbles develop according to the same script, whether it's the railroads of the 1860s or the Internet of the 1990s. A revolutionary technology captures the imagination, early investors reap outsized profits, and the rest, driven by envy and fear of missing out, rush into the market, ignoring the risks.

"Market memory is short. Prudence and natural risk aversion cannot compete with the dream of getting rich on a technology that 'everyone knows' will change the world"

Howard Marks

And in the case of AI, this script is repeating itself with frightening accuracy: if the enthusiasm doesn't produce a bubble now, it would defy every historical precedent.

What's happening in the AI market right now

Marks identifies several key trends that define the AI market today and show that it is overheated. The impact of AI has become all-encompassing: the sector accounts for 75% of the growth in the S&P 500 index and 90% of all corporate capital spending. Nvidia's 8,000-fold rise over 26 years has "fired the imagination" of the crowd, convincing it that this success can be replicated without limit.

But Marks is concerned not so much with the growth figures as with the quality of this money. He points to a troubling symptom: "circular deals" reminiscent of the dot-com era. Big Tech companies pour billions into startups like OpenAI, which immediately return that money to their investors by paying for their cloud capacity. OpenAI has already made $1.4 trillion in commitments, planning to cover them with future revenue from the same partners. Marks wonders whether Silicon Valley is trying to invent a financial perpetual motion machine, creating the illusion of revenue out of thin air. On top of that, technological progress is so rapid that equipment may become obsolete before it pays for itself. Will expensive chips last long enough to earn back the billions invested in them, Marks asks.

The situation is also unique in that the AI industry is not selling a finished product; it is "building an airplane in mid-air." Companies are spending trillions to create artificial general intelligence (AGI) without knowing exactly how to achieve it and, most importantly, how to monetize it.

Genie out of the bottle: why AI can't be calculated from old models

Any attempt to assess AI's prospects runs into what Marks calls "speculative" behavior in the literal sense: betting on a future no one can know. "The genie is out of the bottle, and it's not going to come back in," the investor states.

The speed of the technology's evolution breaks any prediction model. Marks cites programming, which has become the "canary in the coal mine": in just one year, AI has gone from an assistant to the level of a world-class developer. "Adoption today may have nothing to do with adoption tomorrow, because in a year or two, AI may be able to do 10 or 100 times as much," Oaktree consultants explain. In such an environment it is impossible to build financial models or to calculate the real need for infrastructure: investment becomes an equation made entirely of unknowns.

"The Minsky Moment": when debt becomes toxic

The uncertainty of the future would not itself be a disaster if the AI race were funded solely with companies' own cash or shareholders' money. The danger comes from the growing debt, Marks writes.

The market is entering a phase economists call a "Minsky moment": a turning point when quality investment projects are running out but the credit tap has not been shut off, so money starts flowing into risky, marginal deals. Marks sees warning signs reminiscent of the Enron collapse: the use of special-purpose vehicles to keep debt off the balance sheet, and strange schemes in which startups borrow money to buy equipment from other startups.

Explaining the danger of such schemes, Oaktree co-CEO Bob O'Leary says: "AI is a winner-take-all game. If you are a shareholder, you can invest in 10 companies. Nine will go bankrupt, but one will become the new Google and grow 1,000-fold. That growth will cover all the losses and generate outsized profits. But this logic doesn't work if you're a lender: you'll only get a fixed percentage from the winner. The rest will not pay back the debt."
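A minimal back-of-the-envelope sketch of that asymmetry, using O'Leary's hypothetical framing (10 companies, one 1,000x winner) plus an assumed 10% coupon that is not from the original note:

```python
# Illustrative arithmetic only; the 10-company, 1,000x and 10% coupon figures
# are hypothetical assumptions, not data from Marks' memo.

stake = 1.0            # amount invested (or lent) per company
n_companies = 10
winners = 1            # one "new Google"
equity_multiple = 1000 # the winner returns 1,000x to shareholders
coupon = 0.10          # fixed return a lender earns on the one survivor

# Equity investor: nine stakes go to zero, one returns 1,000x.
equity_payoff = winners * stake * equity_multiple
equity_return = equity_payoff / (n_companies * stake)
print(f"Equity portfolio multiple: {equity_return:.0f}x")  # ~100x overall

# Lender: nine borrowers default, one repays principal plus the coupon.
debt_payoff = winners * stake * (1 + coupon)
debt_return = debt_payoff / (n_companies * stake)
print(f"Debt recovery: {debt_return:.2f}x")  # ~0.11x, roughly an 89% loss
```

Spread across ten equity stakes, the single winner returns about 100x the whole portfolio; the lender, whose upside is capped at the coupon, recovers only about 11 cents on the dollar once nine borrowers default. That is why the shareholder's portfolio math does not transfer to debt.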

This is why Marks draws a hard line between risk and gambling. "It is acceptable to lend when the outcome of an enterprise is simply uncertain. But it is unacceptable when the outcome is a matter of pure guesswork," the investor summarizes. Financing a revolution with debt is an attempt to apply the tools of stability to chaos, and historically that always ends badly.

"The Nvidia of its time": the trap of inevitability and the reality of the numbers

Psychologically, the current AI boom most resembles the 1920s, Marks says. Back then the breakthroughs were radio and aviation: Lindbergh's flight across the Atlantic caused as much excitement as the launch of ChatGPT does today. Investors got the trend right - technology did change the world - but lost 90% of their capital because they overestimated the speed of adoption. Marks warns that AI is also likely to go through a painful correction of inflated expectations.

However, the optimists have a strong trump card: financial reality today is healthier than in past bubbles. Unlike the Internet companies of that era, the leaders of the AI race are already generating huge cash flows. Marks cites Anthropic (10x revenue growth in a year) and Cursor (a jump from $1 million to $100 million) as evidence that real, solvent demand stands behind the hype. Moreover, today's giants are cheaper than their predecessors: Microsoft is now valued at roughly half the multiple it commanded 26 years ago, and Cisco, the main "anti-hero" of the dot-com bubble, was more richly valued at its peak than today's Nvidia.

A useful explosion: why bubbles are essential

Marks believes that bubbles, because of their psychological and technological underpinnings, are inevitable but not necessarily bad. He proposes to divide bubbles into two types:

- Mean-reversion bubbles: financial collapses like the 2008 mortgage crisis that simply burn money.

- Inflection bubbles: bubbles based on technological revolutions that also burn money but accelerate progress and set the stage for the future.

Artificial intelligence is clearly the second case. Speculation provides the massive, "irrational" funding for infrastructure that could never have been built on the dry logic of profit alone. Many investors will lose money, Marks writes, but what looks in the short term like over-enthusiasm or simply a bad investment will turn out to be essential to establishing social and technological innovation.

"The key is not to be among those investors whose wealth will be burned in the furnace of this progress."

Howard Marks

"For those who believe": final diagnosis and strategy

The bottom line: Marks returns to the diagnosis once formulated by former Fed chairman Alan Greenspan, "irrational exuberance." The investor has no doubt that the market is overflowing with enthusiasm, but diagnosing its irrationality in the moment is extremely difficult, and given all the unknown variables it is impossible to predict the timing of a crash. "We can theorize about whether the current enthusiasm is excessive, but we will only know if it was, years from now," Marks writes.

The rally's defenders rest their case on the phrase "this time is different." Marks concedes that the famed investor John Templeton put the probability of such an outcome at about 20%, but warns that it is precisely blind faith in the exceptionality of the moment that most often leads to failure. Comparing investors' attitude toward AI (as well as toward gold and cryptocurrencies) to religious faith, he quotes economist Stuart Chase: "For those who believe, no proof is needed. For those who don't believe, no proof is possible."

Investors are left with an uncomfortable dilemma: there is no safe way to capitalize on the revolution. Leaving the market entirely means missing out on the greatest technological shift of our time; betting everything on it means taking on enormous risk. Marks advises a moderate stance with strict selectivity: "Intelligent investments in data centers, and therefore in AI - as in everything else - require sober, astute judgment and skillful execution."

P.S. On a personal note: what really worries Howard Marks

The market is celebrating the efficiency gains companies are reaping from AI adoption, while Marks warns of a fundamental macroeconomic impasse. He calls artificial intelligence a "labor-saving device" and points to a flaw in the optimists' logic: if fewer people are needed to produce the same GDP, who will consume the goods produced? Job cuts, the investor argues, threaten a downward spiral in demand that cannot be offset simply by producing faster.

The investor fears that automation will wash out not only manual labor but also the entry points of intellectual careers: junior lawyers, analysts, copywriters. This may create a long-term structural risk: where, in 10 years, will the experienced professionals come from who can solve complex tasks requiring judgment and an eye for hidden patterns?

"It's hard for me to imagine a world in which AI labors side by side with all the people with jobs today," Marks admits bitterly. The concentration of incredible technology in the hands of a narrow group of elites amid a massive loss of meaning and work for millions could set the stage for a social explosion that would be worse than any drop in the S&P 500 index.

However, Marks also leaves room for an optimistic view rooted in demographics: roughly 16 million baby boomers will retire by 2035, and AI may turn out to be not a job killer but a tool that compensates for their departure by closing the resulting talent gap.

This article was AI-translated and verified by a human editor
