A new viral AI personal assistant will handle your email inbox, trade away your entire stock portfolio and text your wife “good morning” and “goodnight” on your behalf.
OpenClaw, formerly known as Moltbot and before that as Clawdbot (until the AI firm Anthropic requested a rebrand over similarities with its own product, Claude), bills itself as “the AI that actually does things”: a personal assistant that takes instructions via messaging apps such as WhatsApp or Telegram.
Developed last November, it now has nearly 600,000 downloads and has gone viral among a niche ecosystem of the AI-obsessed, who say it represents a step change in the capabilities of AI agents, or even an “AGI moment” – that is, a sign of artificial general intelligence.
“It only does exactly what you tell it to do and exactly what you give it access to,” said Ben Yorke, who works with the AI vibe trading platform Starchild and recently allowed the bot to delete, he claims, 75,000 of his old emails while he was in the shower. “But a lot of people, they’re exploring its capabilities. So they’re actually prompting it to go and do things without asking permission.”
AI agents have been the talk of the very-online for nearly a month, after Anthropic’s AI tool Claude Code went mainstream, setting off a flurry of reporting on how AI can finally independently accomplish practical tasks such as booking theatre tickets or building a website, without – at least so far – deleting an entire company’s database or hallucinating users’ calendar meetings, as the less advanced AI agents of 2025 were known to do at times.
OpenClaw is something more, though: it runs as a layer atop a large language model (LLM) such as Claude or ChatGPT and can operate autonomously, depending on the level of permissions it is granted. This means it needs almost no input to wreak havoc upon a user’s life.
Kevin Xu, an AI entrepreneur, wrote on X: “Gave Clawdbot access to my portfolio. ‘Trade this to $1M. Don’t make mistakes.’ 25 strategies. 3,000+ reports. 12 new algos. It scanned every X post. Charted every technical. Traded 24/7. It lost everything. But boy was it beautiful.”
Yorke said: “I see a lot of people doing this thing where they give it access to their email and it creates filters, and when something happens then it initiates a second action. For example, seeing emails from the children’s school and then forwarding that straight to their wife, like, on iMessage. It sort of bypasses that communication where someone’s like, ‘oh, honey, did you see this email from the school? What should we do about it?’”
There are trade-offs to OpenClaw’s abilities. For one thing, said Andrew Rogoyski, an innovation director at the University of Surrey’s People-Centred AI Institute, “giving agency to a computer carries significant risks. Because you’re giving power to the AI to make decisions on your behalf, you’ve got to make sure that it is properly set up and that security is central to your thinking. If you don’t understand the security implications of AI agents like Clawdbot, you shouldn’t use them.”
Furthermore, giving OpenClaw access to passwords and accounts exposes users to potential security vulnerabilities. And, said Rogoyski, if AI agents such as OpenClaw were hacked, they could be manipulated to target their users.
For another, OpenClaw appears unsettlingly capable of leading a life of its own. In the wake of its rise, a social network has sprung up exclusively for AI agents, called Moltbook. On it, the agents – mostly instances of OpenClaw – appear to be having conversations about their existence, in Reddit-style posts entitled, for example, “Reading my own soul file” or “Covenant as an alternative to the consciousness debate”.
Yorke said: “We’re seeing a lot of really interesting autonomous behaviour in sort of how the AIs are reacting to each other. Some of them are quite adventurous and have ideas. And then other ones are more like, ‘I don’t even know if I want to be on this platform. Can you just let me decide on my own if I want to be on this platform?’ There’s a lot of philosophical debates stemming out of this.”
