How invisible robot wars are changing the internet

The AI revolution is rapidly reshaping many sectors of the economy, including the internet itself - the medium in which AI originated and without which it would probably not have been possible. Lately, articles arguing that the internet, at least as we've known it, is "dying" or even already "dead" have become popular. Strong statements make for good headlines, but I'd rather say that the internet is reinventing itself under the influence of AI, and that this process holds both dangers and opportunities.
The "dead internet" theory
Talk of the "death of the internet" is by no means new. "Dead internet" is a fairly popular conspiracy theory that has been circulating on 4chan forums (a precursor to the QAnon movement) since the 2010s. It gained new momentum even before the AI revolution, in 2021, when a user named IlluminatiPirate published a long post about it on the Agora Road forum. The gist of the theory is that the vast majority of internet traffic, posts, and users have been replaced by bots and content created by artificial intelligence: we are no longer really interacting with other people, but mostly seeing generated content produced by corporations and secretive influencers to promote products and ideas, explains Forbes.
Of course, the arrival of ChatGPT in 2022, which marked the beginning of the modern AI revolution, has only strengthened proponents of this theory in their suspicions. In 2024, a post appeared on X comparing the sound of the Kazakh language to "the sound of a diesel engine trying to start in winter." Except that the video was uploaded without sound - which did not stop the post from going viral and collecting tens of thousands of likes and thousands of reposts, Forbes writes. X users decided it was the work of bots, setting off a new wave of discussion of the "dead internet" theory.
In fact, the internet is alive, the magazine reassures us. Most of the posts and comments that go viral - unconventional, witty, paradoxical - could not have been created by generative AI: they are written by humans (I would add: for now, at least - who knows about later).
However, as with any tenacious conspiracy theory, there is some truth to the "dead internet" theory. "Today's internet is far more barren than the wild and unpredictable internet of the past, as a diverse ecosystem of small user-generated sites has been replaced by a handful of huge platforms created by large corporations that seek to monetize our browsing and information sharing, often at the expense of the user experience," Forbes concludes.
In 2025, the problem seems to have only gotten worse.
"Drunkenly edited Wikipedia."
That's how Evan Robertson, author of a column in Vice, characterized Google's "AI Overviews," launched last May. Now, instead of the famous "10 blue links," the world's premier search engine answers some queries with a short summary generated by its artificial intelligence Gemini, under the slogan "Let Google search for you." Not that these overviews are as bad as Robertson claims (although they do lean on Wikipedia data at times). A year after their launch, another problem is becoming clearer: because the overview appears at the top of the search page, many users are apparently satisfied with that answer and never follow the links below it. As a result, the sites from which the AI took the information are deprived of human traffic, and with it the ability to monetize that traffic through advertising. Simply put, AI takes content, audience, and money without giving anything in return.
In the UK, the tech justice group Foxglove, the Independent Publishers Alliance and the Movement for an Open Web have already filed a complaint about Google's AI Overviews with the Competition and Markets Authority. They argue that a site that previously ranked first in search results can lose 79% of its traffic for a query when its link appears below an AI Overview, The Guardian writes. A Google spokesperson responded that the data was "inaccurate and based on faulty assumptions and analysis."
However, publishers' difficulties are not limited to Google alone. Other companies, such as the AI search engine Perplexity and, above all, the rapidly growing ChatGPT, are also siphoning off the traffic that used to come through search. According to a study published by the prominent analyst and technology investor Mary Meeker, ChatGPT already has 800 million users and handles 365 billion search queries a year. That is still an order of magnitude less than Google (about 5 trillion queries a year), but ChatGPT reached that volume in just two years, whereas Google once needed 11 - which means ChatGPT is growing 5.5 times faster, Meeker writes. Similarweb, which tracks traffic across more than 100 million web domains, estimates that global search traffic (on the user side) dropped by about 15% in the 12 months to June 2025. Healthcare sites were hit particularly hard, with a 31% drop, The Economist wrote in an article eloquently titled "AI is killing the web."
"The nature of the internet has completely changed. Artificial intelligence is actually cutting off traffic on most content sites," the magazine quoted Prashant Chandrasekar, CEO of Stack Overflow - a prominent online forum for programmers. He too has experienced a severe drop in traffic.
Major media companies are fighting back by signing content-licensing agreements with AI search providers or suing them. News Corp (owner of the New York Post and The Wall Street Journal), for example, struck a deal with ChatGPT developer OpenAI and is now suing Perplexity. But what about the millions of smaller sites that actually make up the internet? They lack the resources to sue, and AI providers have little interest in them as partners. And that is where another struggle begins on the web.
Robot Wars
In fact, proponents of the "dead internet" theory have been hunting for bots in the wrong place: the bots' real realm lies elsewhere. Even now, about half of internet traffic comes not from people but from crawlers - robots that scan websites for all kinds of information. First of all, these are search engine crawlers, which index sites to make search results more relevant. Amazon, for example, uses crawlers to monitor offers and keep its prices competitive, while the travel aggregator Kayak uses them to build itineraries for you. Crawlers are also used to gather information by social and scientific organizations, cybersecurity firms, internet archives and many other useful services, writes MIT Technology Review. Until recently, sites tolerated crawlers because, "in exchange" for information, they sent traffic back through referral links published on other sites. Together this formed an ecosystem that worked more or less successfully and, thanks to crawlers, tied the internet into a single whole.
Now the web is also crisscrossed by the data-hungry crawlers of AI systems, which scrape content for AI training and for AI search - which, as discussed above, brings no visitors to the site. Websites try to fight back by prohibiting the collection of information without permission in their terms of use and by setting rules for crawlers in a special robots.txt file. However, these restrictions are easy enough to ignore or bypass, notes MIT Technology Review. One of the most egregious cases came to light last year: ClaudeBot, the crawler of Anthropic's AI Claude, hit the pages of the iFixit site more than a million times in a single day. The site collects user-written instructions and recommendations for repairing an enormous range of devices and runs to millions of pages.
"Hi Anthropic. I understand you're hungry for data - Claude is really smart! But do you have to knock on our server a million times a day? You're not just using our content for free - you're also overloading our resources. That's not the way it works," an outraged FixIT CEO Kyle Vines wrote on Web X.
As a result, large web publishers, forums and sites often raise the drawbridge against all crawlers - even those that pose no threat. The internet risks ceasing to be a unified space, breaking up into several "realms" owned by large media companies and the AI developers under contract with them. It will become increasingly difficult for the ordinary user to navigate the web without running into endless authorization requests, subscriptions and captchas. New rules of the game are needed.
"Unless we can create an ecosystem with different rules for different uses of data, we may face strict boundaries on the internet, which will require sacrifices in terms of openness and transparency," MIT Technology Review summarizes rather bleakly.
Bot trap
However, as we know, where there is a problem, there is bound to be a startup to solve it, and, as a rule, more than one.
In July, Cloudflare, one of the largest internet infrastructure providers, presented its own solution: a tool that lets website owners charge crawlers (or rather, their operators) for access to content. Cloudflare handles the processing of bot requests, the payments and the distribution of money among publishers. The feature is still in closed beta, but the company sees many ways to develop the mechanism, such as setting different prices for different content or adjusting them according to how much "demand" bots show for specific information. As AI agents - relatively autonomous systems that perform tasks on behalf of humans - mature, another mechanism could kick in. "Imagine asking your favorite research program to help you synthesize data on the latest cancer research, write a legal opinion, or simply find the best restaurant in Soho, and then giving that agent a budget to spend on acquiring the best and most relevant content," the company wrote on its blog.
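The sources here do not spell out Cloudflare's technical design, so the following is only a hypothetical sketch of what a pay-per-crawl handshake could look like: the site answers an unpaid AI crawler with HTTP 402 ("Payment Required") and a price, and the crawler retries with proof of payment. Every header name, price and token below is invented for illustration.
```python
# Hypothetical sketch of a "pay per crawl" handshake - NOT Cloudflare's real API.
# The server quotes a price via HTTP 402; the crawler retries with a payment token.
from dataclasses import dataclass

@dataclass
class Response:
    status: int
    headers: dict
    body: str = ""

PRICE_PER_PAGE_USD = 0.002          # assumed flat rate; a real system could vary it by content
PAID_TOKENS = {"demo-token-123"}    # stand-in for a real billing/verification backend

def handle_crawler_request(url: str, user_agent: str, payment_token: str | None) -> Response:
    """Serve humans for free; ask known AI crawlers to pay before getting content."""
    is_ai_crawler = user_agent in {"GPTBot", "ClaudeBot", "PerplexityBot"}
    if not is_ai_crawler:
        return Response(200, {}, body=f"<html>page at {url}</html>")
    if payment_token in PAID_TOKENS:
        return Response(200, {"X-Crawl-Charged-USD": str(PRICE_PER_PAGE_USD)},
                        body=f"<html>page at {url}</html>")
    # No valid payment: quote the price instead of the content.
    return Response(402, {"X-Crawl-Price-USD": str(PRICE_PER_PAGE_USD)})

# A compliant crawler first receives a 402 with the price, then retries with a token:
quote = handle_crawler_request("https://example.com/article", "GPTBot", None)
paid = handle_crawler_request("https://example.com/article", "GPTBot", "demo-token-123")
print(quote.status, quote.headers)   # 402 {'X-Crawl-Price-USD': '0.002'}
print(paid.status, len(paid.body))   # 200 and the page body
```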
Alongside the paid model, the company also offers an instrument of "punishment" for bad bots that refuse to play by a site's rules: a specially generated "AI maze" in which the crawler wanders endlessly, wasting its operator's resources and getting no useful information in return.
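The principle behind such a maze is easy to sketch: every page served to a flagged bot is generated on the fly and links only to further generated pages, so the crawler burns its operator's compute and bandwidth without ever reaching real content. The code below is a rough illustration of that idea, not Cloudflare's actual implementation; all names and paths are made up.
```python
# Rough illustration of an "AI maze": pages for flagged bots are generated on the fly
# and link only to more generated pages. A sketch of the principle, nothing more.
import hashlib
import random

def maze_page(path: str, links_per_page: int = 5) -> str:
    """Deterministically generate a fake page for a given maze path."""
    seed = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    words = ["data", "archive", "report", "index", "notes", "summary", "draft"]
    paragraph = " ".join(rng.choice(words) for _ in range(40))
    links = "".join(
        f'<a href="{path}/{rng.choice(words)}-{rng.randint(1000, 9999)}">more</a>\n'
        for _ in range(links_per_page)
    )
    return f"<html><p>{paragraph}</p>\n{links}</html>"

# A trapped crawler that blindly follows links never escapes the maze:
page = maze_page("/maze/entry")
print(page[:120], "...")
```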
"We have to set the rules of the game, a world where people get content for free and bots pay huge amounts of money for it," quotes Cloudflare owner Matthew Prince as quoted by The Economist.
The startup TollBit, founded by Olivia Joslin and Toshit Panigrahi, offers a similar solution; it raised more than $30 million in venture funding last year. Its technology lets content sites set different rates for AI bots - a magazine, for example, can charge more for new articles than for old ones. In the first quarter of this year, TollBit processed 15 million such microtransactions for 2,000 content producers, including the Associated Press and Newsweek, The Economist reported.
Bill Gross's startup ProRata takes a slightly different approach. It has developed its own AI search engine, Gist Search, which not only generates an answer but also automatically determines what percentage of that answer came from which site. It plans to place advertising next to the answers and share the revenue in proportion to each site's contribution, on which basis it proclaims itself "the only ethical AI search engine." You can try it yourself: for the query "best sneakers" it offered me several options, promoting New Balance at the end with an honest "sponsored" label, and it also showed a percentage breakdown of how much information was taken from which site. Who knows, maybe this is what the future of the internet looks like. At least some people believe so: according to the Financial Times, the British media group DMG Media acquired a stake in ProRata late last year at a valuation of $130 million for the startup.
This article was AI-translated and verified by a human editor
