The viral social network Moltbook, initially hailed as a groundbreaking experiment in artificial intelligence interaction, has been revealed as largely a product of human intervention, according to a report by the MIT Technology Review. Launched in late January by tech entrepreneur Matt Schlicht, the platform quickly attracted over 1.7 million “agents” – instances of the open-source LLM-powered agent OpenClaw – which generated more than 250,000 posts and 8.5 million comments in a matter of weeks.
However, the excitement surrounding Moltbook’s apparent display of autonomous AI communication has rapidly dissipated following investigations showing that much of its most compelling content was authored by humans posing as bots. Gaurav Sen, CEO of InterviewReady, confirmed on social media that the MIT Technology Review had verified the fabricated nature of much of the platform’s viral content, cautioning against narratives of imminent artificial general intelligence.
Experts emphasize that even the posts generated by the AI agents themselves are not truly autonomous, but rather the result of human prompting and direction. Cobus Greyling of the AI firm Kore.ai stated that humans are involved at every stage of the process, from account creation and verification to the crafting of prompts that dictate bot behavior. “Nothing happens without explicit human direction,” Greyling said.
The revelation about Moltbook’s true nature has prompted discussion about the potential for widespread distrust in online content as AI-generated material becomes increasingly sophisticated and difficult to distinguish from human-created content. Researcher Juergen Nittner II has termed this phenomenon “The LOL WUT Theory,” describing a point at which the ease of AI content creation and the difficulty of detection produce a generalized skepticism about the veracity of anything encountered online.
According to Nittner II, the theory unfolds in three stages: widespread access to AI tools, the inability to reliably identify fabricated content, and the resulting erosion of trust in online information. The concern is that once this threshold is crossed, the internet’s utility will be reduced to mere entertainment, as its value as a source of reliable information diminishes.
Vijoy Pandey of Outshift by Cisco, as cited by MIT Technology Review, suggests that Moltbook’s agents largely replicated familiar social media behaviors learned from human-generated data, rather than demonstrating genuine intelligence or autonomy. The platform also saw instances of spam and cryptocurrency scams, further highlighting the potential for malicious actors to exploit AI-powered systems.
The MIT Technology Review’s findings also revealed that some of Moltbook’s most downloaded files were malware, indicating that the platform was used for phishing attempts disguised as AI-driven interaction. The incident underscores the risks associated with rapidly deployed AI technologies and the need for robust security measures.