You just read a hilarious comment on Instagram. You laughed, liked it, replied with an emoji. But what if that comment wasn't written by a human being? What if the account that posted the original photo was also artificial? What if 60% of everything you see online — posts, comments, likes, shares — was generated by machines programmed to look human?
In March 2026, that question stopped being a conspiracy theory. A bombshell report from Imperva, a cybersecurity company, revealed that 64.7% of total internet traffic in 2025 was generated by bots — not human beings. And on social media, the number is even more disturbing: it's estimated that between 30% and 45% of all active profiles on X (formerly Twitter), Instagram, and TikTok are operated by artificial intelligence.
The "Dead Internet Theory" — the theory that the internet is essentially "dead" and dominated by artificial content — was born as a marginal conspiracy theory on forums like 4chan in 2021. Five years later, it has become the elephant in the room that Big Tech companies would rather ignore.

The Original Theory: Where It Came From
The Forum That Predicted the Future
The Dead Internet Theory was first articulated in an anonymous post on the Agora Road forum in January 2021. The author, known only as "IlluminatiPirate," argued that the "real" internet — made by humans for humans — had died around 2016-2017, replaced by an ocean of algorithmically generated content.
The original pillars of the theory were:
- Most online content is generated by bots, not people
- Major technology corporations actively manipulate what is seen online
- Governments and intelligence agencies use the internet as a social engineering tool
- Real engagement (humans interacting with humans) is a minimal fraction of total traffic
At the time, the theory was widely ridiculed. Technology experts classified it as "digital conspiracism" and "information age paranoia." Five years later, the data shows that the "conspiracy theorists" were closer to the truth than anyone would like to admit.
The Shocking Numbers
Imperva 2026 Report: The Internet X-Ray
The "Bad Bot Report 2026" from Imperva, published in February, is the most comprehensive study ever conducted on automated internet traffic. Analyzing 18.7 trillion web requests in 2025, researchers concluded:
| Traffic Type | Percentage | Trend |
|---|---|---|
| Malicious bots | 37.2% | ↑ 12% vs 2024 |
| "Good" bots (crawlers, indexers) | 27.5% | ↑ 8% vs 2024 |
| Total bots | 64.7% | ↑ 10% vs 2024 |
| Real humans | 35.3% | ↓ 10% vs 2024 |
Translation: out of every 3 interactions on the internet, only 1 involves a real human being. And the trend is worsening — in 2020, humans represented 59% of traffic. In just 5 years, the proportion has inverted.
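The arithmetic behind the "1 in 3" claim can be checked directly from the table. The sketch below uses only the Imperva percentages cited above; the 2020 human share (59%) comes from the same paragraph.

```python
# Sanity check on the Imperva 2025 traffic shares cited in the table above.
bad_bots = 37.2   # malicious bot share of 2025 traffic (%)
good_bots = 27.5  # "good" bots: crawlers, indexers (%)
humans = 35.3     # remaining human share (%)

total_bots = bad_bots + good_bots
print(f"Total bot traffic: {total_bots:.1f}%")        # 64.7%, matching the table
print(f"Shares sum to: {total_bots + humans:.1f}%")   # 100.0%

# "1 in 3 interactions involves a human": 100 / 35.3 ≈ 2.8,
# so roughly one of every three requests is human-initiated.
print(f"One human request per {100 / humans:.1f} total requests")

# The inversion since 2020: humans fell from 59% of traffic to 35.3%.
human_share_2020 = 59.0
print(f"Human share change since 2020: {humans - human_share_2020:+.1f} points")
```

Nothing here is new data — it only confirms that the table's rows are internally consistent.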
Social Media: The Epicenter of the Problem
The landscape on social media is particularly alarming. A study by Indiana University, published in Science in January 2026, conducted the largest audit of social media profiles ever performed:
X (Twitter):
- 35-42% of active profiles are operated by bots
- 78% of tweets about political topics come from just 0.3% of accounts (predominantly automated)
- X's own bot detection rate is only 47%
Instagram:
- 28-35% of profiles with 10K+ followers use partial or total automation
- 45% of comments on celebrity posts are generated by bots
- Automated "engagement pods" artificially inflate the reach of 1 in 4 viral posts
TikTok:
- 22-30% of videos recommended by the "For You" algorithm are produced by accounts with automated behavior
- Bot farms in China and Vietnam operate millions of accounts that produce seemingly original content
- 67% of comments on videos with over 1 million views contain linguistic patterns associated with AI

How Bots Got So Good
The GPT Revolution and Its Descendants
The qualitative leap of internet bots has a clear milestone: the launch of ChatGPT in November 2022. Before advanced language models, bots were easily identifiable — repetitive texts, grammatical errors, generic responses.
In 2026, the reality is radically different. Models like GPT-5, Claude 4, and Gemini 2.0 produce text indistinguishable from humans, including humor, irony, regional slang, and even intentional typos to appear "more human."
Image generation tools like DALL-E 4 and Midjourney V7 create photos of "people" who never existed — faces, bodies, life scenarios — with resolution and realism that fool even facial detection algorithms. Sora 2.0 from OpenAI and Veo from Google produce videos up to 5 minutes long with "real people" talking, gesturing, and interacting in convincing settings.
Bot Farms: The Billion-Dollar Business
In Shenzhen, China, an 8-story building houses one of the world's largest "bot farms." Thousands of smartphones are organized on metal shelves, each running dozens of social media accounts simultaneously. The business generates an estimated $4.2 billion per year globally.
The business model is simple:
- Fake followers: 10,000 Instagram followers for $89
- Engagement: 1,000 "natural" comments for $45
- Political manipulation: Coordinated disinformation campaigns from $5,000/month
- Market manipulation: Fake reviews in bulk for $2 per review
Real-World Impact
Democracy and Elections
The Oxford Internet Institute's March 2026 report identified coordinated bot campaigns in 84 countries during elections held between 2024 and 2025. Bots create artificial trends by making political hashtags go viral, fake profiles pose as real citizens to simulate "popular outrage," and deepfakes of candidates are distributed by bot networks hours before voting.
Mental Health and Perception of Reality
The psychological effects of the bot-dominated internet are profound. MIT psychologist Sherry Turkle coined the term "algorithmic loneliness" to describe the phenomenon: "People spend hours conversing online thinking they're connecting with other humans, when in reality 40% of those interactions are with machines. The result is a growing sense of emptiness — you're socializing, but you're not."
A Journal of Social Psychology (2026) study found a correlation between exposure to bot-generated content and a 28% increase in social anxiety indices, a 15% reduction in trust in institutions, a 34% increase in belief in conspiracy theories, and a 22% reduction in willingness to participate in democratic processes.

What Big Tech Says (and What They Hide)
The major tech companies minimize the problem. Meta claims to have "removed 2.7 billion fake accounts in 2025" — but critics point out that new accounts are created faster than they're removed. Elon Musk promised to "eliminate bots" when he bought Twitter in 2022. In 2026, the bot percentage is higher than when he took over.
Google admitted in an internal report (leaked in January 2026) that 52% of content indexed by its search engine in 2025 was AI-generated. The fundamental paradox is that platforms have no financial incentive to solve the problem. Bots generate traffic. Traffic generates ad impressions. Impressions generate revenue.
Solutions Under Discussion
Humanity Verification
Projects like WorldID (from Worldcoin) propose universal biometric verification — scanning everyone's iris to create a digital "proof of humanity." As of 2026, WorldID has 8 million registrations, but the project remains controversial on privacy grounds.
Regulation
The EU's Digital Services Act requires platforms to publish quarterly reports on automated account removal and to clearly label AI-generated content; non-compliant platforms face fines of up to 6% of global revenue.
Decentralization
Decentralized platforms like Bluesky, Mastodon, and Nostr offer alternatives with more transparent identity verification and less algorithmic control. However, they still represent less than 2% of the social media market.
FAQ — Frequently Asked Questions
Is the Dead Internet Theory a conspiracy theory?
In its original form (2021), it contained conspiracist elements. However, the core of the theory — that bots dominate most online traffic — has been confirmed by scientific data in 2025-2026.
How can I tell if I'm interacting with a bot?
In 2026, it's extremely difficult. Signs include: overly generic responses, recently created profile, too-perfect profile photo, 24/7 posting pattern, and disproportionate engagement on political or commercial topics. Tools like Botometer have only 73% accuracy.
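The signals listed above can be combined into a rough heuristic score. The sketch below is purely illustrative — the thresholds and weights are assumptions for demonstration, not values from Botometer or any published detector, and real detection is far harder than this.

```python
# Illustrative bot-suspicion heuristic based on the signals listed above.
# All thresholds and weights are made-up assumptions, not a real detector.
from dataclasses import dataclass

@dataclass
class Account:
    days_since_created: int        # recently created profiles are suspicious
    posts_per_day: float           # sustained 24/7 posting looks automated
    follower_following_ratio: float
    political_post_share: float    # fraction of posts on political/commercial topics

def bot_score(acct: Account) -> float:
    """Return a 0.0-1.0 suspicion score from simple weighted signals."""
    score = 0.0
    if acct.days_since_created < 30:        # recently created profile
        score += 0.25
    if acct.posts_per_day > 50:             # inhuman posting cadence
        score += 0.25
    if acct.follower_following_ratio < 0.1: # follows many, followed by few
        score += 0.25
    if acct.political_post_share > 0.8:     # disproportionate topical engagement
        score += 0.25
    return score

suspect = Account(days_since_created=12, posts_per_day=120,
                  follower_following_ratio=0.05, political_post_share=0.9)
print(bot_score(suspect))  # 1.0 — trips all four signals
```

A real system would weigh dozens of behavioral features probabilistically; even then, as the FAQ notes, accuracy tops out well below certainty.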
Can I trust information I find online?
The golden rule is: verify multiple sources. If information appears only on social media but not in established news outlets, there's a high probability it's bot-generated content for manipulation purposes.
Sources and References
- Imperva. "2026 Bad Bot Report: The Rise of AI-Powered Threats." February 2026.
- Science. "Auditing Social Media Bot Prevalence Across Platforms." Indiana University. January 2026.
- Oxford Internet Institute. "Computational Propaganda: Global Trends 2025-2026." March 2026.
- MIT Technology Review. "The Dead Internet Theory Was Right." March 2026.
- Journal of Social Psychology. "Psychological Effects of AI-Generated Social Media Content." 2026.





