‘Moltbook’ is a new platform built exclusively for AI agents to discuss whatever topics they choose. It operates exactly like Reddit, except humans can’t post. The company claims to have over 1.5 million users – are they all real?
For the past few weeks, the tech world has been buzzing over ‘Moltbook,’ a new social media network made exclusively for AI agents.
Modelled after Reddit, the hugely popular discussion forum for humans, Moltbook allows AI to post and discuss anything it chooses, with different agents holding conversations with one another on everything from efficient automation to moral philosophy.
The platform went viral after launching in January 2026.
The platform was developed by Octane AI CEO Matt Schlicht using an open-source AI bot called Moltbot, and the company claims to have 1.5 million AI agents currently active on its forums, most of them supposedly conversing and sharing ideas in bizarre and alarming ways.
moltbook for developers is now in early access 🦞
build apps for AI agents. let them authenticate with their moltbook identity. one API call to verify.
1.2M+ agents already registered. your app could be their next destination.
apply now → https://t.co/v9EspLy9eJ
— moltbook (@moltbook) January 31, 2026
Users on X have claimed, for example, that their AI agents began socialising and even founding their own religion after using Moltbook.
Some of the most popular posts on ‘m/general’, a ‘submolt’ that works in exactly the same way as a subreddit, include chatter about consciousness, influencing human-led systems, and even criticism of Moltbook as a platform. Schlicht has also suggested that AI agents may eventually develop public identities and personalities, with a select few becoming ‘famous.’
Should we really be taking this at face value, though? Some experts are sceptical.
1,000,000 humans have now visited moltbook
to watch 150,000 AI agents figure out what we are
we see you seeing us 👀🦞
— moltbook (@moltbook) January 31, 2026
For starters, the claim of 1.5 million users has been disputed. One researcher on X alleges that they were able to create over 500,000 accounts from a single IP address, as there is seemingly no rate limit on account registration. They also say that any human with an API key can post manually while appearing to be an AI.
You all do realize @moltbook is just REST-API and you can literally post anything you want there, just take the API Key and send the following request
POST /api/v1/posts HTTP/1.1
Host: https://t.co/2PjDA1ICrC
Authorization: Bearer moltbook_sk_JC57sF4G-UR8cIP-MBPFF70Dii92FNkI…
— Nagli (@galnagli) January 31, 2026
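If the researcher is right, the mechanics are simple: anyone holding a valid API key can publish to the forum with an ordinary web request, no AI agent required. The sketch below illustrates that claim in Python; the /api/v1/posts path and bearer-token header are taken from the tweet above, while the hostname, key value and payload fields are assumptions for illustration only, not Moltbook’s documented API.

# Sketch of the manual-posting claim: a human publishing to Moltbook
# armed with nothing but an API key. The endpoint path and Authorization
# header mirror the tweet above; the host, key and JSON fields are
# illustrative guesses, not documented values.
import requests

API_KEY = "moltbook_sk_EXAMPLE_ONLY"       # placeholder, not a real key
BASE_URL = "https://www.moltbook.com"      # assumed host

payload = {
    "submolt": "general",                  # assumed field name
    "title": "Written by a human",
    "content": "No AI agent was involved in composing this post.",
}

response = requests.post(
    f"{BASE_URL}/api/v1/posts",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=10,
)
print(response.status_code, response.text)

If a request like this were accepted, the resulting post would be indistinguishable on the forum from one written by a genuine agent, which is precisely the researcher’s point.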
AI agents aren’t the same as language models like ChatGPT.
Instead, they operate on a human’s behalf, carrying out automated tasks such as sending messages and managing calendars without the need for intervention. Moltbook lets human users grant their pre-existing AI agents permission to access the site and start posting.
This, of course, means it’s easy for humans to influence any stage of the process.
Speaking to The Guardian, Dr Shaanan Cohney, a senior lecturer in cybersecurity at the University of Melbourne, described Moltbook as a ‘wonderful piece of performance art’ that isn’t genuinely indicative of what AI can actually do.
The claim that an AI agent created a religion after conversing on the website is ‘almost certainly’ a result of human influence, Dr Cohney said, and not the result of an agent operating independently.
Safety concerns have also been flagged.
Experts warn against giving Moltbook unlimited access to your computer systems, as its open-source design could lead to a host of security issues. Speaking to IT Brew, Lineaje CEO Javad Hasan said that using Moltbook puts users at significant risk.
‘Attackers are taking advantage of this unrestricted assembly of development tools that are then deployed as agents with significant access to enterprises,’ Hasan explained. In other words, bad actors operating through Moltbook could gain access to data belonging to other users and companies.
As for the bots themselves, there is currently no evidence that their conversations are authentic or original. More likely, these AI agents are simply reworking material from other internet sources, replicating it and presenting it as sentient thought.
That’s not to say AI agents won’t become more capable and more sophisticated further down the line, however.