Deepfake videos, crypto scams and bizarre images of Jesus Christ as a giant shrimp or with a body made of ramen noodles may seem like unrelated internet oddities. But they all tie into a sprawling conspiracy theory that’s been circulating on web forums for years: the “Dead Internet” theory.
The theory posits that much of the internet is an artificial construct, consisting mostly of bots and AI-generated content curated by algorithms to boost traffic, shape public perceptions and serve corporate and even government interests. Proponents argue that what appears to be organic user activity — comments, views, likes, posts — is actually driven by artificial intelligence, creating the illusion of a bustling, human-made online world where there isn’t one.
Dead Internet Theory Definition
The Dead Internet theory is an online conspiracy theory claiming that artificial intelligence and bots generate most of the content and activity on the web. While false, the theory showcases growing concerns about how AI shapes the way we experience and interact with the internet.
While the internet has its fair share of bots and artificially generated content (often referred to as “slop”), it certainly isn’t “dead” — at least not yet. What we experience online is still largely powered by real human creativity and discourse. But this dynamic may be shifting with the rise of generative AI, which can create human-like text, images, video and audio in a matter of seconds. These creations are saturating the web, sparking renewed interest in the prospect of the internet eventually being taken over by machines.
“Generative AI has changed the game, where now it’s so easy to create a very convincing argument or narrative,” Joseph Jones, a media and AI researcher at West Virginia University, told Built In. “I think the dead internet theory is wrong overall, but could be where the internet is going.”
What Is the Dead Internet Theory?
The Dead Internet theory asserts that much of the activity we see online is carried out by artificial intelligence, not humans. Automated systems generate content, which bot accounts then amplify with likes and comments, boosting its visibility in the algorithms of social media platforms and search engines. Click farms monetize the whole operation by posing as real users, creating the illusion of genuine user engagement to drive up advertising revenue. As a result, human-made content is pushed to the margins, leaving a highly curated online experience that appears real but is largely machine-made.
The goal, according to the theory’s proponents, is to maximize corporate profits by amplifying sponsored or algorithm-friendly content. Some people also believe this is part of a larger government conspiracy to manipulate public perception, spread propaganda and suppress dissenting viewpoints.
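To make the amplification loop the theory describes more concrete, here is a toy Python sketch in which bot “likes” push a machine-generated post above a human one in a simple engagement-ranked feed. The account names, numbers and ranking rules are invented purely for illustration and do not describe any real platform’s systems.

```python
# Toy model of the amplification loop described above: bot accounts "like"
# machine-generated posts, and a feed ranked by raw engagement then surfaces
# them above human-made posts. All labels and numbers are illustrative.

posts = [
    {"author": "human_blogger", "likes_from_humans": 40, "likes_from_bots": 0},
    {"author": "ai_page", "likes_from_humans": 5, "likes_from_bots": 2000},
]

def naive_rank(post):
    # A feed that counts every like the same, whether it came from a bot or a person.
    return post["likes_from_humans"] + post["likes_from_bots"]

def human_only_rank(post):
    # The same feed if automated engagement could be filtered out.
    return post["likes_from_humans"]

print("Ranked by raw engagement:",
      [p["author"] for p in sorted(posts, key=naive_rank, reverse=True)])
print("Ranked by human engagement only:",
      [p["author"] for p in sorted(posts, key=human_only_rank, reverse=True)])
```

Filtering out the automated engagement flips the ordering, which is why inflated bot activity sits at the center of the theory’s claims.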
“The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population,” one user wrote in a lengthy thread on Agora Road’s Macintosh Cafe, a central forum for discussions about the Dead Internet theory. “There is a large-scale, deliberate effort to manipulate culture and discourse online and in wider culture by utilising (sic) a system of bots and paid employees whose job it is to produce content and respond to content online in order to further the agenda of those they are employed by.”
Where Did the Dead Internet Theory Come From?
Ideas of a “dead internet” have been circulating on forums like 4chan and Reddit since the mid-2010s, but the theory really took off in 2021 with a post titled “Dead Internet Theory: Most of the Internet is Fake” on Agora Road’s Macintosh Cafe, a forum site for niche communities. The post’s author complains that 4chan’s content no longer feels original, and claims that the events and people we see online — namely politicians and celebrities — are “wholly fictional” and nothing but CGI and deepfakes.
“The Internet feels empty and devoid of people,” they wrote. “Yes, the Internet may seem gigantic, but it’s like a hot air balloon with nothing inside.”
These ideas gained mainstream attention that same year when The Atlantic published an article titled “Maybe You Missed It, but the Internet ‘Died’ Five Years Ago.” The piece argued that, while the theory was far-fetched, it did hold some truth, especially given the influence of bots on internet traffic and social media engagement at the time.
Since then, powerful generative AI tools like ChatGPT and DALL-E have become publicly available, bringing a new dimension to the Dead Internet theory. From e-commerce sites to dating apps, online spaces are increasingly littered with AI-generated slop, making the prospect of an internet entirely devoid of human presence more plausible than ever.
Dead Internet Examples
Although the Dead Internet theory is far from reality, artificial intelligence has steadily encroached on online spaces in recent years. Below are some of the most prominent examples.
Shrimp Jesus
Shrimp Jesus might as well be the mascot for the Dead Internet theory. The AI-rendered image depicts a Christ-like figure fused with various sea creatures, primarily shrimp and other crustaceans. In some variations, Jesus is also accompanied by flight attendants, military personnel, babies or kittens.
These and other outlandish images started going viral on Facebook in 2024, racking up thousands of likes and comments — most of which were written by bots. Often linked to accounts that are trying to promote other AI-generated scams, the Shrimp Jesus images are designed to boost user engagement on social media.
The ‘I hate texting’ Tweets
The “I hate texting” tweets are a viral format on Twitter (now X), where posts say “I hate texting,” followed by some alternative activity: “I hate texting I just want to hold your hand,” for example, or “I hate texting I just wanna kiss you.” While the posts appear benign, many are suspected to have been written by bots and then amplified by other fake accounts.
The trend started in 2020, but since Elon Musk acquired the platform and integrated his generative AI tool Grok, X has become even more saturated with AI-generated content. It is now flooded with other bizarre AI creations, such as Mickey Mouse holding an assault rifle and a woman resembling Kamala Harris dressed as a communist dictator (an image shared on Musk’s personal account), making the platform feel increasingly dominated by bots rather than humans.
Online Dating Scams
Popular dating apps like Tinder, Hinge and Bumble are becoming inundated with fake profiles, with fraudsters using AI-generated photos and bios to lure victims into sending money (usually in the form of cryptocurrency) — a scheme known as “pig butchering.” In fact, according to some research, around 10 percent of all dating profiles are considered fake, and many more are augmented in some way by AI.
Authorities in countries like Canada, Indonesia and Thailand are taking steps to combat these so-called “romance scams,” while dating platforms themselves work to remove fake profiles when they’re flagged. But the technology behind these fakes is growing more sophisticated, and the profiles harder to detect, by the day.
“The power and speed of AI allows scammers to target companies and consumers in highly personalized ways with the ultimate goal of accessing sensitive information or account credentials, which poses significant risks such as financial losses and reputational damage,” Brittany Allen, a senior trust and safety architect at fraud detection startup Sift, told Built In. “Moreover, these scams are capable of circumventing traditional fraud detection techniques.”
Fake Product Reviews
The internet is full of fabricated product reviews and testimonials, ranging from AI-generated Amazon reviews to deepfake celebrity endorsements on social media, which makes it challenging for people to sort fact from fiction while they’re shopping online.
After years of policing the issue on a case-by-case basis, the U.S. Federal Trade Commission is starting to crack down on fraudulent reviews en masse, implementing civil penalties against parties that write or purchase fake reviews, suppress real reviews and buy fake followers or impressions on social media.
Is the Dead Internet Theory True?
Recent studies show that bots account for nearly half of all web traffic, and that a staggering 57.1 percent of the written content we see online is likely AI-generated in some way. But the internet is still a long way from being the artificial wasteland the Dead Internet theory describes.
After all, the majority of bots serve harmless functions like indexing web pages, monitoring online services or fulfilling customer service inquiries, with just 32 percent of them being used to mimic humans for malicious purposes. And social media platforms like YouTube, TikTok and Instagram — not to mention news sites, blogs and web forums — pump out mountains of real, human-made content every day.
Still, there’s no denying that the internet has strayed far from its founders’ original vision. With the rise of artificial intelligence, it has evolved into a space increasingly driven by algorithms and commercialization. Search engines, once a reliable tool for accessing accurate information, are becoming more AI-driven, prioritizing content optimized for bots rather than human readers and, at times, generating their own wildly inaccurate results. And social media sites, once a hub for human connection, are now being flooded with AI content and virtual influencers, which are cheaper and easier to control than their human counterparts.
“People can tell when something is generated by AI; there is something that’s lacking in it that makes it feel fake. So I get why people feel like the Dead Internet theory could be happening,” said Jones.
Plus, AI-powered recommendation engines have helped create a hyper-personalized digital environment designed to keep users online for longer. Companies analyze users’ purchase history to display ads they are more likely to click on. And social media apps analyze users’ demographic data to feed them posts they’re more likely to engage with. In the end, the content we see — and don’t see — online is largely dictated by what an algorithm thinks we’ll like best, not necessarily what is accurate or relevant.
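For a rough sense of how that kind of personalization works, here is a minimal, purely illustrative Python sketch of an engagement-ranked feed. The Post fields, interest scores and weighting are assumptions made for the example, not any platform’s actual recommendation model.

```python
# Illustrative only: a toy engagement-based feed ranker. The fields,
# weights and scoring formula are assumptions, not any real platform's
# recommendation algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    likes: int
    hours_old: float

def score(post: Post, user_interests: dict) -> float:
    """Estimate how likely the user is to engage with a post."""
    affinity = user_interests.get(post.topic, 0.0)  # past engagement with this topic
    popularity = post.likes / (post.likes + 100)    # saturating popularity signal
    freshness = 1 / (1 + post.hours_old)            # newer posts rank higher
    return 0.6 * affinity + 0.3 * popularity + 0.1 * freshness

def build_feed(posts: list, user_interests: dict) -> list:
    """Return posts sorted by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: score(p, user_interests), reverse=True)

if __name__ == "__main__":
    interests = {"ai": 0.9, "sports": 0.2}
    posts = [
        Post("ai", likes=5000, hours_old=12),
        Post("sports", likes=20000, hours_old=2),
        Post("cooking", likes=300, hours_old=1),
    ]
    for p in build_feed(posts, interests):
        print(p.topic, round(score(p, interests), 3))
```

Even in this toy version, what surfaces first is whatever the scoring function predicts the user will engage with, not what is most accurate or relevant.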
So, while AI has certainly made some improvements to the online experience, those improvements have “come at a cost,” Jones said.
“The internet has become less of a place of genuine connection and discovery and more of a place to satisfy one’s pre-existing beliefs and tastes,” he explained. “The internet is less the information superhighway we used to call it and more a giant mall.”
Frequently Asked Questions
What is the Dead Internet theory?
The Dead Internet theory posits that most of the content and activity on the web is generated by bots and AI tools, with little to no genuine human involvement.
How many bots are on the internet?
Reports vary, but a 2024 study concluded that bots account for 49 percent of all internet traffic. Of those bots, only 32 percent are used to mimic humans for malicious purposes, according to researchers. The majority of today’s bots serve innocuous purposes, such as indexing web pages, monitoring online services or fulfilling customer service inquiries.