Clawdbot
Clawdbot (Twitter, Showcase, Documentation, GitHub):
Clears your inbox, sends emails, manages your calendar, checks you in for flights. All from WhatsApp, Telegram, or any chat app you already use.
To say that Clawdbot has fundamentally altered my perspective of what it means to have an intelligent, personal AI assistant in 2026 would be an understatement. I’ve been playing around with Clawdbot so much, I’ve burned through 180 million tokens on the Anthropic API (yikes), and I’ve had fewer and fewer conversations with the “regular” Claude and ChatGPT apps in the process. Don’t get me wrong: Clawdbot is a nerdy project, a tinkerer’s laboratory that is not poised to overtake the popularity of consumer LLMs any time soon. Still, Clawdbot points at a fascinating future for digital assistants, and it’s exactly the kind of bleeding-edge project that MacStories readers will appreciate.
Clawdbot can be overwhelming at first, so I’ll try my best to explain what it is and why it’s so exciting and fun to play around with. Clawdbot is, at a high level, two things:
- An LLM-powered agent that runs on your computer and can use many popular models, such as Claude and Gemini.
- A “gateway” that lets you talk to the agent using the messaging app of your choice, including iMessage, Telegram, WhatsApp, and others.
[…]
Given the right permissions, Clawdbot can execute Terminal commands, write scripts on the fly and execute them, install skills to gain new capabilities, and set up MCP servers to give itself new external integrations. Combine all this with a vibrant community that is contributing skills and plugins for Clawdbot, plus Steinberger’s own collection of command-line utilities, and you have yourself a recipe for a self-improving, steerable, and open personal agent that knows you, can access the web, runs on your local machine, and can do just about anything you can think of.
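To make the MCP piece above concrete: an MCP server is, at heart, a process that answers JSON-RPC requests from the agent and maps tool-call requests onto local functions. Here is a minimal Python sketch of that idea — a generic illustration, not Clawdbot’s actual code; the tool names and the simplified result shape are assumptions:

```python
import json

# Hypothetical tool registry for illustration; a real MCP server would also
# advertise these via a "tools/list" response with JSON schemas.
TOOLS = {
    "get_time": lambda args: "2026-01-28T10:00:00Z",  # stub value for the sketch
    "add": lambda args: args["a"] + args["b"],
}

def handle_request(raw: str) -> str:
    """Dispatch one JSON-RPC 2.0 request string to a registered tool."""
    req = json.loads(raw)
    if req.get("method") != "tools/call":
        # JSON-RPC "method not found" error for anything we don't handle.
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601, "message": "method not found"}})
    params = req["params"]
    tool = TOOLS[params["name"]]
    result = tool(params.get("arguments", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

In the real protocol these messages flow over stdio or HTTP between the agent and the server; the point of the sketch is just that “giving the agent a new integration” reduces to registering another function.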
Apps will melt away. The prompt is your new interface.
The fact that clawd can just keep building upon itself, just by talking to it in Discord, is crazy. The future is already here.
I can understand why people love @clawdbot so much.
I wanted to automate some tasks from Todoist and clawd was able to create a skill for it on its own, all within a Telegram chat.
Now it:
- Logs my sleep/health/exercise data and tells me when I stay up too late
- Writes code and deploys it
- Writes Ralph loop markdown files that I deploy later
- Updates Obsidian daily notes
- Tracks who visits MenuCapture and where they came from
- Monitors earthquakes in Tokyo
- Researches stuff online and saves files to my desktop
- Manages memory across sessions, remembering my projects, patterns, and preferences
- Reminds me of my schedule, including holidays/accommodation
- Checks on me (on Telegram!) if I’m quiet too long
I’m literally on my phone in a Telegram chat, and it’s communicating with Codex CLI on my computer, creating detailed spec files, while I’m out on a walk with my dog.
At this point I don’t even know what to call @clawdbot. It is something new. After a few weeks with it, this is the first time I have felt like I am living in the future since the launch of ChatGPT.
Previously:
- The Year in LLMs: 2025
- Apple LLM Generating SwiftUI
- Sky Preview
- Model Context Protocol (MCP) Tools for Mac
- Apple Delays “More Personalized Siri” Apple Intelligence Features
Update (2026-01-28): John Gruber:
This essay from Viticci is the first thing I’ve seen that really helped me start to understand it.
Those tokens aren’t free. I asked Viticci just how much “yikes” cost, and he said around US$560 — using way more input than output tokens.
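The numbers roughly check out. A back-of-the-envelope sketch, assuming Sonnet-class pricing of about $3 per million input tokens (an assumption on my part; actual rates vary by model and by the input/output mix):

```python
# Sanity check on the reported ~US$560 for 180 million tokens.
# Assumption: ~$3 per million input tokens, with input dominating the bill.
tokens = 180_000_000
price_per_million_input = 3.00
cost = tokens / 1_000_000 * price_per_million_input
print(f"${cost:.0f}")  # prints $540, in the ballpark of the reported $560
```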
Had a great chat with Peter Steinberger about @clawdbot, the open-source AI personal assistant with a lobster-sized personality. 🦞
I think part of Clawdbot’s success is from being something a big co would never make.
Too much liability, messy business model, risky to deploy, ecosystem compatibility…
That’s why it’s so great: it’s just useful instead of trying to be those other things.
The amount of crap I get for putting out a hobby project for free is quite something.
People treat this like a multi-million dollar business. Security researchers demanding a bounty. Heck, I can barely buy a Mac Mini from the Sponsors.
It’s supposed to inspire people. And I’m glad it does.
And yes, most non-techies should not install this. It’s not finished, I know about the sharp edges.
There’s more discussion on Hacker News and at Mac Power Users.
Absolutely depressing. The unconditional delegation of even basic tasks to this thing. The dependence on it that will inevitably take place. Also, think about the level of access to your stuff this thing is allowed to have. What happens when a bad actor takes advantage of that? How secure is this thing?
The lack of security is the big issue for me. I don’t trust these companies to have that kind of access. Especially not together, connected in some way.
"What happens when a bad actor takes advantage of that?"
You don't even need a bad actor, just an accidental, incorrect action from the LLM. It's funny to me that I'm running opencode in Docker just so it can't fuck with unversioned files, and other people are like, "Here's my whole life, go wild!"
They all sound so giddy, like small children with their new toy, until the disaster happens.
Are these serious people, or just bloggers who pretend to know software and write for clicks?
Gotta clear out my RSS/Instapaper backlog to make room for the inevitable trove of stories about people growing dependent on these services, only to watch them go belly up when there’s no money to be made. Bricked smart appliances have nothing on what’s about to happen next.
@Vajk The main developer, Peter Steinberger, is a respected developer who built the best iOS/Android library for working with PDF files.
I do find it all rather scary, and I hope to see better ways to isolate what it can do, but I think this is something to watch.
> They all sound so giddy, like small children with their new toy, until the disaster happens.
Conspiracy theory: are these people just peddling this shit so nobody else takes the time to learn to code, to make themselves more valuable? Tell the kids that coding is an obsolete skill so nobody learns it.
I'm still skeptical that writing essays to an LLM, and then having to spend time reading the code it generates to make sure it's good, is the "new interface." Of course, if you don't care and don't read the code, I guess you're more "productive." While I think LLMs can be useful, I think some people are getting a little carried away here.
@Michael Tsai
I am not keen to let an LLM wrapper, written largely by LLMs (and bragging about it), loose on anything.
No one who cares about safety, privacy and security should be ok with that.
I’m pretty skeptical of Peter’s decision-making process. He’s brilliant but unchecked, kind of like Tony Stark. Right now, his pinned tweet from December 29, 2025, says: “Confession: I ship code I never read.”
That said, he definitely has his fans and plenty of contributors. I’m intrigued by the integrations, especially how exactly they automated Messages.app. If I remember correctly, and nothing has changed, you used to have to quit or relaunch Messages.app just to get it to write its data to disk so you could read new incoming messages.
Seeing as the whole project could be useful running on a separate machine in a user account with its own iCloud credentials, I’m also planning to check it out to see how it fits my privacy and security standards.
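On the Messages.app question raised above: one common way third-party tools read iMessages on macOS is to query the Messages SQLite store directly (by default `~/Library/Messages/chat.db`, which requires Full Disk Access). A simplified sketch with a heavily reduced schema; whether Clawdbot actually takes this approach is an assumption:

```python
import sqlite3

# Simplified illustration of reading the Messages database. The real chat.db
# schema has many more tables and columns (handles, attachments, Apple's
# epoch-offset timestamps); this sketch keeps only text and a sort key.

def recent_messages(db_path: str, limit: int = 5):
    """Return the most recent non-empty message texts, newest first."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT text FROM message WHERE text IS NOT NULL "
            "ORDER BY date DESC LIMIT ?", (limit,))
        return [text for (text,) in rows]
    finally:
        conn.close()
```

The write-to-disk complaint in the comment above is plausibly about SQLite's write-ahead log: new rows can sit in `chat.db-wal` until a checkpoint, so a naive reader of the main database file lags behind.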
"I'm still skeptical that writing essays to an LLM and then having to spend time reading the code it generates"
That's the trick. They don't. They just hope that issues eventually fix themselves.
Maybe they're right, and LLMs get better faster than they can generate shitty code, so as their cognitive debt grows, so does the LLM's ability to do their tasks. Slowly, the human is replacing itself with the machine.
Or perhaps they're wrong, and LLM progress slows down, and they end up with tens or hundreds of thousands of lines of code, no idea how anything works, and no ability to make progress or fix bugs.
We'll see.
> He’s brilliant but unchecked, kind of like Tony Stark. Right now, his pinned tweet from December 29, 2025, says: “Confession: I ship code I never read.”
Maybe he's brilliant, maybe he's not, but it really doesn't matter; the only thing that matters is what he's doing. At the very least, I think he's putting his reputation on the line. People should be aware that he's shipping code that he isn't reading. We all put trust in code we don't read to some extent (for example, using system frameworks), but code not even written or evaluated by a human? That still seems crazy to me. It's a good thing that, as of right now, this vibe coding craze is relegated to dinky consumer apps.
More importantly, what does this teach the next generation, when developers who know how to code, but don't want to, basically proclaim that the skill of coding is obsolete? You're encouraging new people not to learn anything. So how is the next generation of "app makers" going to know how to solve a bug? Change the model and just hope it works? Maybe he can work his way out of a hairy situation because he has dev experience, but what about the fucking children?
> Maybe they're right, and LLMs get better faster than they can generate shitty code, so as their cognitive debt grows, so does the LLM's ability to do their tasks. Slowly, the human is replacing itself with the machine.
I think they are wrong. My current approach (write code, read code, use an LLM as a tool when it makes sense) still seems like the right choice right now. If I'm wrong in the future... well, the idea that this is some "hard skill," so you'd better start talking to your LLM and stop wasting time reading code so you don't become obsolete, is funny, because in that world you are mostly obsolete anyway (and presumably this would apply to other professions). If you're smart enough to learn how to write an app the old-fashioned way, getting "up to speed" in a future world where you ask an LLM to do everything for you should be a breeze.
What is even happening right now in the AI pond?
At the beginning of the year, Steve Yegge came out with Gas Town, for which he created his own terminology to describe things that already have names, all in service of his LLM-driven workflow... thing. I would have chalked it up to one of his bombastic posts, but it even came complete with a supporting crypto rug pull.
Now we have clawdbot, where the people praising it sound somewhat unhinged.
I think LLMs can be helpful, if used judiciously, but all of this just adds to the feeling that something very bad is about to happen with all of this AI hype.
I get the caution, but there's a lack of perspective here.
Yes, trust is a real issue, but last time I checked, it was an issue for humans, too.
Anyone who employs more than one coder likely ships code they haven't read.