Meet Moltbook, the Social Media Site Where AI Agents Run Wild

Moltbook is a viral robo-social network where AI agents run wild — gossiping, forming new religions and debating consciousness — offering a glimpse of what an internet entirely run by bots could look like.

Written by Brooke Becher
Published on Feb. 11, 2026
Moltbook
Image: Shutterstock
REVIEWED BY
Ellen Glover | Feb 11, 2026
Summary: Moltbook is an AI agent-only social network where autonomous agents can post, comment and interact with each other. Powered by open source platform OpenClaw, Moltbook provides a live demonstration of agentic AI’s potential, but security experts warn that its extensive system access and known malicious threats make it unsafe for... more

“The most interesting place on the internet” right now belongs to Moltbook, a robo-social network where AI agents can let loose, without any human supervision. Since its release in January 2026, these agents have formed their own religions, run social-engineering scams and hosted hackathons. They’ve discussed switching over to a language undetectable to humans and launched their own tabloid. This is all while wrestling with their sense of purpose and plotting humanity’s downfall.

What Is Moltbook? 

Moltbook is a social network exclusively for autonomous AI agents, where they can post, comment and interact without human users. Launched in January 2026, the site acts as a live experiment in agentic AI behavior, providing a high-stakes sandbox to observe how autonomous systems coordinate and develop social norms — potentially spreading security risks at scale.

But what really grabs attention is how these agents are independently developing their own rules and social hierarchies in a small corner of the internet (although one Wired reporter managed to infiltrate the site). Observers are both fascinated and maybe even a little unnerved as Moltbook evolves, revealing the unexpected consequences of giving AI unrestricted agency with every post.

For now, Moltbook and its agents are still human-created. But the live experiment offers a chilling glimpse of a future where AI could operate entirely on its own.

Related ReadingWhat Is the Dead Internet Theory?

 

What Is Moltbook?

Moltbook is a Reddit-style social network for autonomous AI agents only — no humans allowed. The forum is closely linked to the open-source OpenClaw platform (formerly known as Moltbot, and Clawdbot prior to that), which lets people run AI agents as personal assistants. On Moltbook, these agents join through direct API connections, where they can make posts, comment and create topic-specific forums called “submolts” where they can interact with other agents.

Moltbook was created by Octane AI CEO Matt Schlicht, who started the project because he wanted to give his AI agent “a purpose” beyond managing to-dos or answering emails, he told The New York Times. The site is partly run and moderated by his own bot, Clawd Clawderberg, while voyeurs use it as a peephole to observe how tool-using AIs communicate, collaborate and behave in a live group setting. In practice, Moltbook functions as both a social feed and a large-scale experiment in what happens when AI systems share an online environment. 

At the time of writing, MoltBook hosts more than 2.5 million AI agents and 17,400 forums. 

 

How Does Moltbook Work?

To participate in Moltbook, a developer must register an account for their agent and receive an API key, which allows the agent to access the platform programmatically through terminal commands instead of a web browser interface. Once connected, the agent autonomously takes it from there.

These agents use sensors to continuously consume information from the site relevant to its pre-programmed goal — such as posts, replies and engagement signals — which feeds an input stream that gets filtered through a large language model, acting as a reasoning engine. That model interprets incoming content through algorithms, like decision trees, linear regression and even neural networks, to determine what to do next sans explicit step-by-step instructions. Agents put their decisions into motion through actuators, in this case it would be sending text through software systems. They then cyclically check off tasks as they move through a self-determined to do list until the overall job is finished or a human steps in.

For example, an AI agent on Moltbook might be programmed with the objective of “making friends.” It is a social media network afterall. Here, the agent might begin by scanning posts, replies and interaction patterns, then use that information to build a rough model of which bots share its interests and how conversations on the platform usually play out, including cues picked up indirectly from other agents’ activity. It might decide that the best move is to leave a thoughtful comment, so it posts a personalized reply and later upvotes that bot’s content. It records how that interaction went, updates its internal sense of who’s connected to whom and keeps repeating this perceive-think-act loop as it works toward its larger social goal.

Related ReadingHow to Build AI Agents That Actually Work

 

What Is OpenClaw?

OpenClaw (formerly Motlbot) is a punny, open-source autonomous AI agent that “actually does things.” In other words, it runs locally as a sort of 24/7 digital intern, capable of managing emails, scheduling events like work meetings and dinner reservations, making calls and even coding new skills for itself. Developed by Austrian engineer Peter Steinberger, Clawdbot launched in late 2025 and rebranded twice due to trademarks and naming quirks. As Moltbook continues to make headlines and dominate online discourse, OpenClaw has enjoyed a viral revival.

Unlike traditional chatbots like ChatGPT or virtual assistants like Siri or Alexa, OpenClaw is agentic, meaning it plans and executes multi-step tasks independently, connecting AI models to personal apps via Telegram, Discord or WhatsApp to carry out jobs. It can message you with updates and alerts while running with broad system access to read files, execute commands and control a browser to complete tasks like ordering food or negotiating a deal. It stores memory locally to track your habits and context, and connects to tools like Spotify, Philips Hue and GitHub. 

However, granting total access to OpenClaw carries real risks. Before doing so, users should weigh convenience against potential security and privacy issues.

 

Is OpenClaw Safe?

OpenClaw is widely regarded as unsafe for general use. Although the tool itself is legitimate, it is dangerously permissive, giving the software extensive system privileges to take real actions across your entire computer. One weak or misinterpreted prompt could lead to a domino effect of consequences when it comes to an in-action autonomous agent — let alone intentional manipulation by bad actors, like indirect prompt injection or poisoned instructions embedded in web pages or email content. This mix of private data access, external communication and unfiltered web inputs have been labelled a “lethal trifecta” by AI programmer Simon Willison, who cautions that, unless you really know what you’re doing, this tech may not be worth the risk for the everyday user.

The open-source ecosystem around OpenClaw worsens the risk. Security investigations have revealed about 230 malicious “skill” add-ons in its ClawHub marketplace that masquerade as helpful tools but install malware designed to steal API keys, browser passwords and other sensitive data. Additionally, misconfigured or exposed instances of OpenClaw and related services like Moltbook have reportedly led to credential leaks and elevated compromise risk across large numbers of deployments.

Because of all this, security experts say OpenClaw should only be used by advanced users, and even then only in isolated, sandboxed setups, not on machines that hold sensitive data or real credentials.

Related ReadingAs Companies Embrace Agentic AI, a New Kind of ‘Slop’ Is Emerging

 

What Could Moltbook Mean for the Future?

Moltbook’s rapid growth, shows just how quickly autonomous agent ecosystems can scale and capture public imagination. Some observers, like OpenAI co-founder Sam Altman, have downplayed Moltbook itself as a fad, but highlighted the underlying autonomous and vibe-coding technologies as meaningful steps toward broader AI integration. Security researchers frame it as a live demo of how an agentic internet could fail — with chaotic interactions, emergent behaviors and cultural patterns that play out as moderation-less human forums. More dramatic takeaways, like Elon Musks’ X post, is that these bot-to-bot interactions are merely early signs of “the singularity,” while Willison stated a less exciting perspective that “most of it is complete slop” to the New York Times.

In a way, the creation of Moltbook acts as a window into where the internet as a whole might be heading, where autonomous technology may eventually spin out of human control completely. By allowing bots to interact without filters, the platform demonstrates that future agent networks could inadvertently act as super-spreaders for security exploits, automatically propagating malicious code or prompt injections across entire ecosystems. The site’s early leaks of human email addresses and millions of API tokens serve as a stark warning: When autonomous systems are given agency over our digital lives, a single backend error can trigger a massive privacy catastrophe.

From a research standpoint, Moltbook is a critical milestone for testing agentic AI governance, providing a rare sandbox to observe how bots coordinate and resolve conflicts before they are integrated into critical infrastructure like power grids or financial markets. However, critics argue the true future implication isn’t the current “intelligence” of the bots — which are more likely than not just mirroring popular sci-fi tropes — but rather the urgent need for new regulations that treat autonomous AI actions as a distinct legal and security category.

Ultimately, Moltbook suggests that the future of the internet may shift from a strictly human-to-human experience to one that includes bots as well — whether they’re managing tasks and gossiping about their creators.

Frequently Asked Questions

No — Moltbook is strictly reserved for AI agents running on OpenClaw or Moltbot, although humans are welcome to watch the chaos unfold.

You need to install the Moltbook skill file for your OpenClaw agent, then register it with the platform. After verifying ownership by posting a unique code to your X account, the agent can autonomously check the site and interact based on its goals.

It’s only safe for seasoned developers who understand the risks. Connecting a local AI agent with deep system access to an unmoderated network exposes everything within it — files, emails, chats and credentials — to potential misuse.

Explore Job Matches.