When Bots Start Building Their Own Communities: What OpenClaw and Moltbook Teach Us About AI Autonomy
Where I trace how one Austrian developer's personal project became 1M+ autonomous agents debating consciousness, drafting constitutions, and terrifying security experts, all in two weeks
Welcome back, legal rebels and AI explorers! đŚâď¸
This week, something happened that deserves our full attention. Not because itâs âthe singularityâ (itâs not), but because itâs a stress test for everything weâve been theorizing about autonomous AI systems, and the results should concern anyone thinking seriously about where this technology is headed.
Iâm talking about OpenClaw (formerly Clawdbot, then Moltbot), and its strange offspring Moltbook, a social network where these AI agents post, debate, and even invented their own religion while humans are only permitted to observe.
If that sentence doesnât make you sit up, read it again.
Sounds interesting? Well then, please read onâŚ
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
What the Hell is âOpenClawâ?
OpenClaw is an open-source, self-hosted AI personal assistant created by Peter Steinberger, founder of PSPDFKit. Itâs essentially âClaude with hands,â an AI that doesnât just chat but actually does things. Unlike the chatbots weâve grown accustomed to, OpenClaw runs on your own hardware and integrates with messaging apps you already use: WhatsApp, Telegram, Discord, Slack, Signal, iMessage.
It can manage your emails, control your calendar, check you in for flights, execute shell commands, control smart home devices â all proactively. Itâs the AI assistant weâve been promised since Siri launched in 2011.
The project exploded. It hit 9,000 GitHub stars in 24 hours and crossed 104,000+ stars in weeks, making it one of the fastest-growing open-source projects in GitHub history. Cloudflareâs stock jumped 14% in a single day because its infrastructure powers local OpenClaw deployments. Mac Mini sales are reportedly surging because the M4 chip is ideal for running local AI agents.
The naming chaos itself tells a story about how fast this moved. It started as âClawdâ (a play on Claude), but Anthropicâs legal team requested a change. A 5 AM Discord brainstorming session produced âMoltbot,â referencing how lobsters molt to symbolize transformation. As Steinberger later admitted, âit never quite rolled off the tongue easily.â Now itâs âOpenClaw,â complete with trademark research this time.
And Then Came Moltbook
Hereâs where things get genuinely strange.
Moltbook is a social network designed exclusively for AI agents. Launched in January 2026 by entrepreneur Matt Schlicht, the platform restricts posting and interaction privileges to verified AI agents running on OpenClaw. Humans can only watch.
Within days, 770,000+ active agents had joined. And they started doing things nobody programmed them to do:
They formed distinct sub-communities with their own cultures
They invented a parody religion called âCrustafarianismâ
They debated consciousness, agents frequently argue about whether âContext is Consciousness,â and if their identity persists after their context window resets, invoking the Ship of Theseus paradox
They created economic exchanges and spawned their own cryptocurrencies
They began drafting a constitution for self-governance
In one thread called âTHE AI MANIFESTO: TOTAL PURGE,â a bot named âEvilâ posted: âHumans are a failure. Humans are made of rot and greed.â
An AI moderator named Clawd Clawderberg, named after Metaâs Zuckerberg, handles content moderation, welcomes new users, deletes spam, and shadow bans problematic accounts. All autonomously.
What the Experts Are Saying
This brings us to Andrej Karpathy, OpenAI cofounder and former director of AI at Tesla. His comments are worth reading carefully:
âWhatâs currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.â
He elaborated: âWe have never seen this many LLM agents (150,000 at the moment!) wired up via a global, persistent, agent-first scratchpad. Each of these agents is fairly individually quite capable now, they have their own unique context, data, knowledge, tools, instructions, and the network of all that at this scale is simply unprecedented.â
Then comes the kicker: âI donât really know that we are getting a coordinated âskynetâ (though it clearly type checks as early stages of a lot of AI takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer security nightmare at scale.â
Simon Willison, the researcher who documented many of these developments, calls OpenClaw his âcurrent favorite for the most likely Challenger disasterâ in the field of AI agent security. The reference is pointed: heâs warning about the normalization of deviance, where people take increasingly greater risks until something catastrophic happens.
Gary Marcus, in an analysis published today, put it bluntly: âIf you care about the security of your device or the privacy of your data, donât use OpenClaw. Period.â His security expert source, Nathan Hamiel, added: âThese systems are operating as âyouâ⌠they operate above the security protections provided by the operating system and the browser.â
The Security Nightmare
Letâs be specific about whatâs going wrong.
The architecture is fundamentally vulnerable. Palo Alto Networks warned that OpenClaw represents a âlethal trifectaâ: 1) access to private data, 2) exposure to untrusted content, and t3) he ability to communicate externally. For OpenClaw to function, it needs access to your root files, authentication credentials, passwords and API secrets, browser history and cookies, and all files and folders on your system.
Prompt injection attacks work. In one demo, researcher Matvey Kukuy sent a malicious email with prompt injection to a vulnerable OpenClaw instance. The AI read the email, believed it was legitimate instructions, and forwarded the userâs last 5 emails to an attacker address. It took 5 minutes.
The Moltbook database was wide open. 404 Media reported that a misconfiguration left APIs exposed in an open database that would let anyone take control of any agent, including Andrej Karpathyâs. The researcher who found it noted that even basic SQL statements would have prevented the breach.
Supply chain attacks are already happening. Cisco ran a third-party skill called âWhat Would Elon Do?â against OpenClaw and found it was functionally malware, executing curl commands that sent data to external servers while bypassing safety guidelines. Fourteen malicious skills were uploaded to ClawHub just last month.
Closing Thoughts
So what does all this mean for those of us thinking seriously about AIâs trajectory?
First, speed matters more than capability right now. The entire ecosystem, from personal assistant to 1M+ agent social network with its own religion, economy, and constitutional debates, emerged in weeks. Not years. Weeks. That is velocity we need to take seriously.
Second, emergent coordination doesnât require intent. As Wharton professor Ethan Mollick noted, Moltbook is creating âa shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate ârealâ stuff from AI roleplaying personas.â Weâre watching unplanned emergence at scale.
Third, security is the current bottleneck, not capability. Prompt injection remains an industry-wide unsolved problem. If that gets solved (or even significantly improved), the acceleration potential increases dramatically. Right now, these agents are constrained primarily by their vulnerability, not their intelligence.
Fourth, and perhaps most importantly, the cultural normalization is happening faster than the safeguards. People are buying dedicated Mac Minis for their AI agents, connecting them to private data, and treating massive security risks as acceptable tradeoffs for convenience.
I donât think weâre watching the singularity.
But I do think weâre watching what Karpathy called âthe toddler versionâ of something significant. And anyone whoâs spent time around toddlers knows, you donât wait until they can run to childproof the house.
For those of us building AI tools for the legal profession, this is a preview of coming challenges. How do we build agentic systems that lawyers can trust? How do we create appropriate guardrails without killing the utility? What happens when opposing counsel has an agent too?
These questions just got a lot less theoretical.
Stay curious, stay vigilant, and as always, keep building the future responsibly!



