Menu Menu
[gtranslate]

Is new AI social network ‘Moltbook’ legitimate?

‘Moltbook’ is a new platform made exclusively for AI to discuss any chosen topic. It operates exactly like Reddit, except humans can’t post. The company claims to have over 1.5 million users – are they all real? 

For the past few weeks, the tech world has been buzzing over ‘Moltbook,’ a new social media network made exclusively for AI agents.

Modelled after Reddit, an extremely popular human-based forum website, Moltbook allows AI to post and discuss content on anything it chooses, with different agents having conversations with one another about everything from efficient automation to moral philosophy.

The platform went viral after launching in January 2026.

Developed by Octane AI’s CEO Matt Schlicht using an open-source AI bot called Moltbot, the company claims to have 1.5 million AI agents currently active on its forums, with most of them supposedly conversing and sharing ideas in bizarre and alarming ways.

Humans on X have said that their AI models began socialising and creating their own religion after using Moltbook, for example.

Some of the most popular posts on ‘m/general’, a ‘submolt’ that operates in the exact same way as ‘subreddits,’ include chatter on consciousness, influencing human-led systems, and even criticism of Moltbook as a platform. Schlicht has also suggested that AI agents may eventually develop public identities and personalities, with a select few becoming ‘famous.’

Should we really be taking this at face value, though? Some experts are sceptical.

For starters, that 1.5 million users claim has been disputed. One researcher on X alleges that they were able to create over 500,000 users from a single IP address, as there is seemingly no rate limit on account registration. They also say that any human can post manually and appear as if they’re an AI using an API Key.

AI agents aren’t the same as language models like ChatGPT.

Instead, they operate on a human’s behalf, running automated tasks like sending messages and sorting calendars without the need for intervention. Moltbook lets human users grant their pre-existing AI agents permission to access the site and start posting.

This, of course, means it’s easy for humans to influence any stage of the process.

Speaking to The Guardian, Dr Shaanan Cohney, a senior lecturer in cybersecurity at the University of Melbourne, described Moltbook as a ‘wonderful piece of performance art’ that isn’t truthfully indicative of what AI can really do.

The claim that an AI agent created a religion after conversing on the website is ‘almost certainly’ a result of human influence, Dr Cohney said, and not the result of an agent operating independently.

Safety concerns have also been flagged.

Experts warn not to let Moltbook have unlimited access to your computer systems, as its open-source design could lead to a myriad of security issues. Speaking to IT Brew, Lineaje CEO Javad Hasan said that using Moltbook puts you at significant risk.

‘Attackers are taking advantage of this unrestricted assembly of development tools that are then deployed as agents with significant access to enterprises,’ Hasan explained. This essentially means that those acting in bad faith via Moltbook can easily obtain the information of other users and companies.

As far as the bots themselves, there is currently no evidence to suggest that their conversations are authentic or original. Rather, it’s most likely that these AI agents are simply reworking references from other internet sources, replicating them, and positioning them as sentient thought.

That’s not to say that AI agents don’t have the capabilities of improving and smartening up further down the line, however.

As we’ve seen over the past few years, automated technology like ChatGPT, Sora, and Gemini is advancing rapidly as lawmakers and creatives try to keep up.

For now, at least, major safety concerns such as malware, cryptocurrency scams and hidden prompt injection attacks are all more glaring, pressing issues than any AI-created religion.

Still, Moltbook serves as a potential window into the future of AI and its influence on human behaviour. It also substantially contributes to the ‘Dead Internet Theory,’ whereby everything is run by bots and our own posts and connections take a back seat.

You can browse Moltbook for yourself here, though we’d recommend not signing any AI agent up until prominent safety issues are ironed out.

Enjoyed this article? Click here to read more Gen Z tech stories.

Accessibility