Buffett's escalator: the Big Short investor on the bubble, the Nvidia problem and the AI apocalypse

Michael Burry, who predicted the 2008 crisis and is now betting against Nvidia's stock, has for the first time laid out his reasons for being skeptical of AI. In correspondence with Anthropic co-founder Jack Clark and podcaster Dwarkesh Patel, the investor outlined his view of the sector's valuation, spoke about the hyperscaler payback trap, and shared his thoughts on a possible AI apocalypse. Oninvest publishes excerpts from Burry's remarks.
What is the market getting wrong?
Burry: Historically, profits have gone to those who have a sustainable competitive advantage, either dictating prices to the market or leading on costs. It remains to be seen whether current investments in AI will lead to this outcome.
So far, the rollout of most AI solutions is following Warren Buffett's "escalator" scenario. In the late '60s, Buffett was forced to put an escalator in his department store simply because a competitor had one. As a result, costs went up for both, but margins went up for neither. Now companies are spending trillions on AI, yet gaining no advantage, because their competitors are adopting the same technology.
And the market is most wrong about the two main symbols of the AI boom: Nvidia and Palantir. They are simply two incredibly lucky companies that didn't create products specifically for AI but successfully adapted to the boom. Nvidia's advantage won't last forever, though: the future lies with small language models (SLMs) and specialized chips (ASICs). Right now, Nvidia is a costly stopgap that is holding the line until competitors with a different approach emerge.
As for Palantir, their CEO's attacks on me are not the behavior of a confident leader but a marketing move to retain control of the narrative. If you honestly subtract employee stock-based compensation (SBC), the company has virtually no profit.
In Buffett's escalator example, the consumer won in the end. This is always the case if the producer cannot extract monopoly rents.
Why will AI hit a ceiling?
Burry: Ultimately, someone has to buy artificial intelligence. The money that people and companies pay for goods and services is the real size of the entire economy, and it grows at only 2-4% a year. Only a company that can dictate prices can hope to outpace that bar, and the AI sector clearly isn't going to get that kind of boost.
Economics is arithmetic, not magic. The entire global software market doesn't even reach a trillion dollars. So I keep coming back to the ratio of infrastructure costs to application revenue: Nvidia sells $400 billion worth of chips to support less than $100 billion in end-application revenue.
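As a quick check, the ratio Burry cites can be written out directly; the two figures below are the ones from his remarks, rounded as he states them:

```python
# Burry's infrastructure-to-application-revenue ratio, using the figures
# he cites: ~$400B of annual chip sales vs. under $100B of end-application
# revenue (his upper bound).
chip_spend_bn = 400
app_revenue_bn = 100

ratio = chip_spend_bn / app_revenue_bn
print(f"Infra dollars per dollar of application revenue: {ratio:.1f}")
```

On these numbers, roughly four dollars of infrastructure are being bought for every dollar of application revenue they currently generate.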
So far the entire spending cycle is being sustained by pure faith and FOMO. No one can show numbers proving ROI. On the contrary, AI may start to shrink the market: if a $50 solution replaces a $500 license, that's great for efficiency, but it's a deflationary blow to industry revenues. In the end, the gains from the new technology will simply be competed away among all the players.
Why could AI giants repeat the fate of the dotcoms?
Burry: Before you claim "record" performance, you need to look closely at the full cost of stock-based compensation (SBC). At Nvidia, about half of profits effectively go to employees; in other words, capital flows from shareholders to staff. If every second person on the payroll is "worth" $25 million, where is the efficiency in that? If these expenses were properly accounted for, the real margin would be many times lower.
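The SBC adjustment Burry describes is simple subtraction; the sketch below uses made-up round numbers purely to illustrate the mechanics of his "about half of profits" claim, not Nvidia's actual financials:

```python
# Hypothetical illustration of the SBC argument: if roughly half of
# reported profit is offset by stock-based compensation, the adjusted
# margin is half the reported one. All figures are invented.
revenue = 100.0
reported_profit = 50.0   # reported net margin: 50%
sbc = 25.0               # "about half of profits go to employees"

adjusted_profit = reported_profit - sbc
reported_margin = reported_profit / revenue
adjusted_margin = adjusted_profit / revenue
print(f"reported margin: {reported_margin:.0%}, SBC-adjusted: {adjusted_margin:.0%}")
```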
The key indicator is ROIC (return on invested capital), because its trajectory is what predicts market trends. The software business has always been highly profitable, but these companies are now turning into capital-intensive hardware operators. That will inevitably drag down ROIC and, with it, share prices. This downward trend will continue until 2035.
ROIC dynamics is an indicator of how much real growth potential a company has left. I have seen many examples of aggressive acquisitions where businesses grow by buying up other firms with borrowed money. In such cases, ROIC becomes a ruthless moment of truth: if the return on that investment is less than the cost of servicing the debt, the company fails. That's exactly how WorldCom collapsed.
At some point, the return on AI infrastructure must exceed the value of the investment itself, otherwise no economic value is created. If a company grows only because it has taken out loans or spends its free cash flow on low-yielding projects, it ceases to be attractive to investors, and its market multiplier will inevitably fall.
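The ROIC-versus-debt test Burry applies to acquisitive companies can be sketched in a few lines; the capital, profit, and interest-rate figures below are hypothetical, chosen only to show the WorldCom-style failure mode he describes:

```python
# Sketch of Burry's ROIC-vs-cost-of-debt test: a debt-funded acquisition
# destroys value when the return on invested capital falls below the
# interest rate on the borrowed money. All figures are hypothetical.
invested_capital = 1_000.0   # borrowed to buy a business
nopat = 60.0                 # after-tax operating profit it generates
interest_rate = 0.08         # cost of servicing the debt

roic = nopat / invested_capital              # 6% return on the capital
interest_owed = invested_capital * interest_rate
creates_value = roic > interest_rate         # fails: 6% return vs 8% cost
print(f"ROIC {roic:.0%} vs debt cost {interest_rate:.0%}; "
      f"value created: {creates_value}")
```

Here the business earns 60 against 80 of interest owed, so each year of "growth" makes the company poorer, which is exactly the moment of truth Burry points to.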
Where does the money go?
Burry: In my opinion, this boom really is different from previous ones, first of all because the useful life of the capital investments is astonishingly short. The chip refresh cycle has become annual, and today's data centers simply won't be able to handle the hardware that will appear in a couple of years. One could even argue that most of this spending should be expensed immediately rather than capitalized.
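The useful-life assumption matters because straight-line depreciation spreads a fixed capex bill over however many years the asset is claimed to last. The sketch below uses a hypothetical capex figure to show how shortening the assumed life from six years to two triples the annual hit to earnings:

```python
# Straight-line depreciation under two useful-life assumptions, showing
# why the chip-lifespan question matters for hyperscaler earnings.
# The capex figure is hypothetical.
capex = 120.0  # spend on AI hardware

for life_years in (6, 2):
    annual_charge = capex / life_years
    print(f"{life_years}-year life: {annual_charge:.0f}/year depreciation")
```

This is the same lever as the "construction in progress" point that follows: assets not yet in service take a zero annual charge, flattering margins until they are recognized.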
Another important difference is that this boom is being financed by private credit almost more actively than by the public markets. Private credit is a murky area, but there is a clear maturity gap: many assets are packaged in securities as if they will last twenty years, even though hyperscalers are given the right to "exit" in as little as four or five years. This is a direct path to problems and "dead" assets on balance sheets. Sure, the richest companies in the world are spending, but huge spending is always huge spending. The amount of planned spending is already beginning to overwhelm the balance sheets and cash flows of even today's hyperscaler giants.
Also, "construction in progress" is an accounting trick that I think is already in full use. Equipment that has not yet been officially "put into operation" is not depreciated and does not reduce paper profit. And it can hang in that status forever. My guess is that a lot of illiquid assets will be hidden in this "work in progress" just to avoid messing up profit margins. It looks like we are already seeing that.
We are now in the middle of the investment cycle. The phase in which the market rewarded companies simply for adding capacity is over. We are entering a phase in which the real costs and the lack of returns will be on full display. In past cycles, markets peaked midway through; the remaining investments were made once the shadow of pessimism, or more accurately sober realism, had fallen over the assets.
What would make Burry change his prediction?
Burry: The main surprise that would make me rethink my thesis would be the emergence of autonomous AI agents displacing millions of employees from major corporations. That would shock me, but it wouldn't necessarily explain where the sustainable competitive advantage lies here. Another factor would be the growth of revenue in the application software segment to $500 billion or more due to the massive emergence of killer apps.
We'll see one of two scenarios: either Nvidia chips will last five or six years, in which case demand for them will simply collapse, or they'll last only two or three years, in which case hyperscalers' profits will plummet and the private credit market will be destroyed.
What does the U.S. need to do?
Burry: If I could address the country's leadership, I'd suggest we take a trillion dollars (since money is being spent so recklessly anyway), brush aside all the protests and bureaucracy, and cover the entire country with small nuclear reactors, building a new state-of-the-art energy grid.
This project needs to be implemented as soon as possible, with an unprecedented level of cybersecurity and plain physical safety. Perhaps we should create a special federal Nuclear Defense Force to guard each such facility. It's the only way to get the energy needed to win the race with China. And it is our country's only hope for the kind of economic growth that will eventually let us pay off the national debt, guarantee security, and ensure that energy shortages no longer hold back our innovation.
How does Burry himself use AI?
Burry: I do all my charts and tables through Claude. I find the raw data myself, but I no longer spend time on layout or design. I still don't trust the numbers and have to double-check them, but the "creative" part of that work is behind me. I also use Claude specifically for sourcing material, since a lot of important data today isn't limited to SEC filings or the official press.
A lot of people say the trades are AI-proof. I'm not so sure about that at all: I can see how many things I can now do around the house just by having Claude at my fingertips. If someone is quoted $800 for a plumber or electrician, they may well try to handle the problem themselves with Claude. I love that I can just take a picture of the breakdown and immediately know how to fix it.
Will there be an AI apocalypse?
Burry: The AI we see today doesn't seem to me to be a threat to humanity at all. I think chatbots can just dumb people down: doctors who rely on them too much start to forget their basic professional knowledge. That's bad, but it's not a disaster.
All those scary stories about powerful AI don't frighten me. I grew up during the Cold War, when the world could have blown up at any second. In high school, we were constantly put through emergency drills. I played soccer while helicopters sprayed pesticides over our heads. I saw "Terminator" 30 years ago, and the plot of "Red Dawn" seemed entirely plausible. I'm sure people will adapt.
This article was AI-translated and verified by a human editor
