I’ve been running Clawdbot for the last couple weeks and have genuinely found it useful but running it scares the crap out of me.
OpenClaw has 52+ modules and runs agents with near-unlimited permissions in a single Node process. NanoClaw is ~500 lines of core code, agents run in actual Apple containers with filesystem isolation. Each chat gets its own sandboxed context.
This is not a swiss army knife. It’s built to match my exact needs. Fork it and make it yours.
hebejebelus2 days ago
I think these days if I’m going to be actively promoting code I’ve created (with Claude, no shade for that), I’ll make sure to write the documentation, or at the very least the readme, by hand. The smell of LLM from the docs of any project puts me off even when I like the idea of the project itself, as in this case. It’s hard to describe why - maybe it feels like if you care enough to promote it, you should care to try and actually communicate, person to person, to the human being promoted at. Dunno, just my 2c and maybe just my own preference. I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me,the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji. We all know how easy it is to write code these days. Maybe use some of that extra time to communicate with the humans. I dunno.
Edit: I see you, making edits to the readme to make it sound more human-written since I commented ;) https://github.com/gavrielc/nanoclaw/commit/40d41542d2f335a0...
jimminyxop2 days ago
OP here. Appreciate your perspective but I don't really accept the framing, which feels like it's implying that I've been caught out for writing and coding with AI.
I don't make any attempt to hide it. Nearly every commit message says "Co-Authored-By: Claude Opus 4.5". You correctly pointed out that there were some AI smells in the writing, so I removed them, just like I correct typos, and the writing is now better.
I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me. I'm sharing it because I think it can be useful to other people. Not as production code but as a reference or starting point they can use to build (collaboratively with claude code) functional custom software for themselves.
I spent a weekend giving instructions to coding agents to build this. I put time and effort into the architecture, especially in relation to security. I chose to post while it's still rough because I need to close out my work on it for now - can't keep going down this rabbit hole the whole week :) I hope it will be useful to others.
BTW, I know the readme irked you but if you read it I promise it will make a lot more sense where this project is coming from ;)
furyofantares2 days ago
The problem with LLM-written is that I run into so many README.md's where it's clear the author barely read the thing they're expecting me to read and it's got errors that waste my time and energy.
I don't mind it if I have good reason to believe the author actually read the docs, but that's hard to know from someone I don't know on the internet. So I actually really appreciate if you are editing the docs to make them sound more human written.
MrJohz2 days ago
I think the other aspect is that if the README feels autogenerated without proper review, then my assumption is that the code is autogenerated without proper review as well. And I think that's fine for some things, but if I'm looking at a repo and trying to figure out if it's likely to work, then a lack of proper review is a big signal that the tool is probably going to fall apart pretty quickly if I try and do something that the author didn't expect.
furyofantares2 days ago
I agree with that also.
I use this stuff heavily and I have some libraries I use that are very effective for me that I have fully vibed into existence. But I would NOT subject someone else to them, I am confident they are full of holes once you use them any differently than I do.
laksjhdlkaa day ago
The README is for your agent to read. Shrug.
furyofantaresa day ago
The agent having incorrect documentation in its context is really bad!
snarky_dog2 days ago
[dead]
nialse2 days ago
”I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me.” - AI software engineering in a nutshell. Leaving the human artisan era of code behind. Function over form. Substance over style. Getting stuff done.
thepasch2 days ago
“Human artisan era of code” is hilarious if you’ve worked in any corporate codebase whatsoever. I’m still not entirely sure what some of the snippets I’ve seen actually are, but I can say with determination and certainty that none of it was art.
The truth about vibe coding is that, fundamentally, it’s not much more than a fast-forward button: ff you were going to write good code by hand, you know how to guide an LLM to write good code for you. If, given infinite time, you would never have been able to achieve what you’re trying to get the LLM to do anyway, then the result is going to be a complete dumpster load.
It’s still garbage in, garbage out, as it’s always been; there’s just a lot more of it now.
_zoltan_2 days ago
There should never have been an "artisan era". We use computers to solve problems. You should have always getting stuff done instead of bikeshedding over nitty-gritty details, like when in the office people have been spending weeks on optimizing code... just to have the exact same output, exact same time, but now "nicer".
You get paid to get stuff done, period.
mchaver2 days ago
> There should never have been an "artisan era".
Firm no. There should be and there will continue to be. Maybe for you all code is business/money-making code, but that is not true for everyone.
> We use computers to solve problems.
We can use computers for lots of things like having fun, making art, and even creating problems for other people.
> You get paid to get stuff done, period.
That is a strange assumption. Plenty of people are writing code without being paid for it.
satvikpendem2 days ago
> Plenty of people are writing code without being paid for it.
This is rhetorically a non sequitur. As in, if you get paid (X) then you get stuff done (Y). But if you're not paid (~X), then, ?
Not being paid doesn't mean one does or doesn't get stuff done, it has no bearing on it. So the parent wasn't saying anything about people who don't get paid, they can do whatever they want, but yes, at a job if you're paid, then you better get stuff done over bikeshedding.
lxgr2 days ago
I think you're both right. There's a time and place for beautifully crafted code, but there's also a place for a hot mess that barely passes its own non-existing tests, and for anything in between.
Just don't bring an artisan to a slop fight.
ModernMech2 days ago
> there's also a place for a hot mess that barely passes its own non-existing tests
For a long time that place has been "the commercial software marketplace". Let's all stop pretending that the code coming out of shops until now has been something you'd find at a guild craft expo. It's always been a ball of spit and duct tape, which is why AI code is often spit and duct tape.
techpression2 days ago
And to add to this, good artisanal code usually means it runs a lot faster, which means saving money and energy, and those are good things.
satvikpendem2 days ago
It depends how much money and energy in the form of manhours were spent to write it in an artisan way in the first place. I've been in a lot of PR reviews where it was clear that the amount of back and forth we had was simply not worth it for the code we wrote.
I'm reminded of this: https://xkcd.com/1205/
frizlab2 days ago
Yeah. Exactly the same as there should never be an “artisan era” for chairs, tables, buildings, etc.
Hell even art! Why should art even be a thing? We are machine driven by neurons, feelings do not exist.
Might be your life, it ain’t mine. I’m an artisan of code, and I’m proud to be one. I might finally use AI one of these days at work because I’ll have to, but I’ll never stop cherishing doing hand-crafted code.
falcor842 days ago
The difference is that end users don't interact with the code that the artisan created, and don't care what it "feels like". One type of code that I do agree should be artisanal is the interface end of libraries.
mikkupikku2 days ago
Yes, it's like artisanal plumbing or electrical wiring... all hidden behind walls. A plumber might take pride in the quality of his soldered joints, but artisanal? Who wants to pay for that?
enraged_camel2 days ago
>> Yeah. Exactly the same as there should never be an “artisan era” for chairs, tables, buildings, etc.
That's funny you bring up those examples, because they have all moved on to the mass manufacturing era. You can still get artisan quality stuff but it typically costs a lot more and there's a lot less of it. Which is why mass-manufacturing won. Same is going to happen with software. LLMs are just the beginning.
frizlab2 days ago
Oh no, but I know! And it is indeed terrible.
I live in a city where there are new houses being built. They are ugly. Meanwhile, the ones that exist since a long time ago have charm and feel homely.
I don’t know, I‘m probably just a regular old man yelling at clouds, but I still think we’re going in the wrong direction. For pretty much everything. And for what? Money. Yay!
Hugh.
scubbo2 days ago
You're continuing to make good arguments for why mass-production should exist _alongside_ artisanal craftsmanship. Broad availability of housing which is functional, albeit of questionable aesthetic appeal, is a good thing to improve housing availability[0]; and also it is a good thing for (fewer) well-built, charming, individual homes to be available for those who want to spend more and to get more.
[0] I'm extremely aware that there are other contributing factors to housing shortages. Tax Billionaires, etc. My metaphor still works despite not being total.
fragmede2 days ago
Did you get the Eames version of Windows, or a knockoff?
enraged_camel2 days ago
Windows was probably the worst example you could use in this context!
EagnaIonat2 days ago
> just to have the exact same output, exact same time, but now "nicer".
The majority of code work is maintaining someone else's code. That's the reason it is "nicer".
There is also the matter of performance and reducing redundancy.
Two recent pulls I saw where it was AI generated did neither. Both attempted to recreate from scratch rather than using industry tested modules. One was using csv instead of polars for the intensive work.
So while they worked, they became an unmaintainable mess.
ModernMech2 days ago
You use computers to solve problems. I use computers to communicate and create art. For me, the code I write is first and foremost a form of self expression. No one paid me to write 99% of the code I've written in my life.
For a long time computers were so expensive they could only be used to do things that generate enough money to justify their purchase. But those days are long gone so computers are for much much more than just solving problems and getting stuff done. Code can be beautiful in its own right.
yunohn2 days ago
The exact mindset is what has led to the transition from quality products to commercialized crapware, not just with software, but across all industries.
JKCalhoun2 days ago
"You get paid to get stuff done, period."
It sounds like you hate your job? To be sure, I've done plenty of grinding over my career as a software engineer but in fact I coded as a hobby before it turned into a career, I then continued to code on the side, now I am retired and code still.
Perhaps the artist in me that keeps at it.
_zoltan_a day ago
I love my job FWIW. I work at performance engineering and we work with the most complex systems in the world (GB200/B300/...). Couldn't be happier.
But I just don't care if I have 5 layers of abstraction and SOLID principles and clean code and.... bah. I get it. I have an MSc in it and I've been doing this as a hobby and then professionally for decades now. It just doesn't matter. At the end of the day, we get paid to ship something that solves a problem.
It might be a novel problem. And it might be at the frontier of what we can do today. But it's still a problem that needs solving and the path we take is irrelevant from a user's perspective as long as it solves the problem.
satvikpendem2 days ago
I don't think they hate their job, just seem to be frustrated at slow bureaucratic processes and long code reviews which I've experienced too. After a while it can get aggravating as to why some people want to nitpick minute details of the code which slows down development overall. I am talking about cases where the initially submitted PR is perfectly fine, not grossly incorrect.
JKCalhoun2 days ago
Oh wow, if we're talking about code reviews that's a different topic. I've never, FWIW, encountered "artisans" in code reviews. More like "that's not how I would have coded itsans" and "let me show you some new tricksans".
Yeah, to hell with code reviews. The best years of my career were when I was given carte blanche control over an entire framework, etc. When code reviews came along coding at work sucked.
If anything, the code reviews killed the artisanship.
mikkupikku2 days ago
90% of the CRs I've ever gotten have been "artisanal" just because nitpicking superficial nonsense is easier than meaningful critique, and even when the code is perfectly fine it looks more productive from a managers perspective if you're nitpicking a function name than if you just respond with lgtm.
satvikpendem2 days ago
Yeah that's what I understood them to mean from "like when in the office people have been spending weeks on optimizing code... just to have the exact same output, exact same time, but now "nicer"." There does come such a time either way when the juice isn't worth the squeeze so to speak in terms of optimization of code.
satvikpendem2 days ago
Code is the means to an end of getting stuff done, not the end in itself as some people seem to think. Yes, being a code artisan is fun, but do not mistake the fun for its ultimate purpose.
codeforclout2 days ago
Was about to comment precisely this, that line does not inspire any confidence.
And it reminds me of a comment I saw in a thread 2 days ago. One about how RAPIDLY ITERATIVE the environment is now. There area lot of weekend projects being made over the knee of a robot nowadays and then instantly shared. Even OpenClaw is to a great extent, an example of that at its current age. Which comes in contrast to the length of time it used to take to get these small projects off the ground in the past. And also in contrast with how much code gets abandoned before and after "public release.
I'm looking at AI evangelists and I know they're largely correct about AI. I also look at what the heck they built, and either they're selling me something AI related, or have a bunch of defunct one-shot babies or mostly tools so limited in scope they server only themselves with it. We used to have a filter for these things. Salesmen always sold promises, so, no change there, just the buzzwords. But the cloutchasers? Those were way smaller in number. People building the "thing" so the "thing" exists mostly stopped before we ever heard of the "thing", because, turns out, caring about the "thing" does not actually translate to the motivation to getting it done. Or Maintain it.
What we have now is a reverse survivorship bias.
OOP stating they don't care about the state of their code during their public release, means I must assume they're a Cloutchaser. Either they don't care because they know they can do better which means they shared something that isn't their best, so their motivation with the comment is to highlight the idea. They just wanted to be first. Clout. Or they don't exactly concern with if they can as they just don't care about code in general and just want the product, be it good or be it not. They believe in the idea enough they want to ensure it exists, regardless of what's in the pudding. Which means to me, they also don't care to understand what's in the ingredient list. Which means they aren't best to maintain it. And that latter is the kind that, before the LLM slop was a concept in our minds, were precisely ones among the people who would give up half way through Making The "Thing".
See you in 16 weeks OP. I'll eat my shoe then.
vasco2 days ago
The art department is that way, we do engineering here. Faster is better.
figassis2 days ago
What part of faster is better means engineering to you? Non engineers will prefer you get there faster, but however you get there, better is better.
vasco2 days ago
If you want to say something just say it no need for trap questions.
Faster delivery of a project being better for engineering is obviously one of the most important things because it gives you back time to invest in other parts of your project. All engineering is trade-offs. Being faster at developing basic code is better, the end. If nothing else you can now spend more time on requirements and on a second iteration with your customer.
frizlab2 days ago
> obviously one of the most important things because it gives you back time to invest in other parts of your project
That is until you get so deep in code debt that you cannot move anymore.
There is an equilibrium to be found. Faster is not always better, and trying to have every single line perfect is not good either.
vasco2 days ago
I did mention trade offs.
swiftcoder2 days ago
> we do engineering here
Well, we make software, at any rate.
Most of the time that's pretty divorced from capital-E engineering, which is why we get to be cavalier about the quality of the result - let me know how you feel about the bridges and tunnels you drive on being built "as fast as possible, to hell with safety"
vascoa day ago
Don't put words in my mouth, you don't care about safety not me. And for what it's worth I'm an electrical engineer first, so if you have some inferiority complex about software you don't have to apply it to me.
swiftcodera day ago
Hey, you're the one who said "faster is better", not me
vasco20 hours ago
Consider applying the strongest version of an argument than the weakest. Obviously faster it's better means to a similar standard. Not faster due to a shittier standard.
naasking2 days ago
> AI software engineering in a nutshell. Leaving the human artisan era of code behind. Function over form. Substance over style. Getting stuff done
The invention of calculators and computers also left the human artisan era of slide rules, calculation charts and accounting. If that's really what you care about, what are you even doing here?
pawelduda2 days ago
I too miss gathering 20 devs in the same room and debating company-wide linter rules. AI ruined the craft \s
hebejebelus2 days ago
Hey, you do you, I’m glad you appreciate my perspective. I wasn’t trying to catch you out but I see how it came across that way - I apologise for my edit, I had hoped the ;) would show that I meant it in jest rather than in meanness but I shouldn’t have added it in the first place.
As I said in my comment, no shade for writing the code with Claude. I do it too, every day.
I wasn’t “irked” by the readme, and I did read it. But it didn’t give me a sense that you had put in “time and effort” because it felt deeply LLM-authored, and my comment was trying to explore that and how it made me feel. I had little meaningful data on whether you put in that effort because the readme - the only thing I could really judge the project by - sounded vibe coded too. And if I can’t tell if there has been care put into something like the readme how can I tell if there’s been care put into any part of the project? If there has and if that matters - say, I put care into this and that’s why I’m doing a show HN about it - then it should be evident and not hidden behind a wall of LLM-speak! Or at least; that’s what I think. As I said in a sibling comment, maybe I’m already a dinosaur and this entire topic won’t matter in a few years anyway.
anavat2 days ago
There needs to be a word for the feeling of sudden realization that you're reading an AI-generated text (or watching an AI-generated video) where you expected it to be human-authored.
anavat2 days ago
Okay, I'm gonna shoot myself, "ensloped" it is.
"I find your email deeply ensloping."
"This marketing campaign is going to enslope a lot of people."
"Feeling ensloped, I closed Instagram and looked out the window".
JoBrad2 days ago
That’s pronounced “slope” or “slope”? ;)
hxugufjfjf2 days ago
I got slopped
ultra2d2 days ago
AI erlebnis.
snovv_crash2 days ago
Uncanny valley
cess112 days ago
Slopstricken.
rapnie2 days ago
Promptware
djeastm2 days ago
Does "disappointed" cover it? That's how I feel, anyway.
patcon2 days ago
Strongly agree with your comments.
7thpower2 days ago
So you created a project, implicitly to help individuals keep their computers and credentials secure, but you can’t be bothered to proofread a read me?
I get using AI, I do all day everyday day it feels like, but this comes off as not having respect for others time.
yunohn2 days ago
For example - I checked src/, and there’s clearly more than ~500 lines of code, ignoring the other dirs. I’m on mobile, maybe someone else can run wc -l on the repo and confirm. Is there a reason this number is inaccurately stated? Immediately makes me wary of the vibe coded nature of it.
jofzar2 days ago
I 100% agree, reading very obviously ai written blogs and "product pages"/readme's has turned into a real ick for me.
Just something that screams "I don't care about my product/readme page, why should you".
To be clear, no issue with using AI to write the actual program/whatever it is. It's just the readme/product page which super turns me off even trying/looking into it.
djeastm2 days ago
I get where you're coming from. It's like a person signing a love letter with a stamped signature or something.
satvikpendem2 days ago
It reminds me of the job of the protagonist in the movie Her, ironically enough.
charcircuit2 days ago
Why do you think people do not care about something if they AI generated it? I care about many things I've generated.
wernsey2 days ago
It's the perception.
"I couldn't be bothered to write a proper README, so I had the AI do it"
charcircuit2 days ago
AI can write a proper README. In fact, it's better than me at doing so and keeping it up to date. People writing README with AI are bothering to write it. In my experience AI won't automatically create README files for you when making projects with the exception of create project tools which create a default README, but in that case usually the AI ignores it and leaves it in the default state. People are just using a tool that lets them create without manually typing in each individual character.
vidarh2 days ago
Most manually written README's I come across are in a far worse state than an AI generated one. To the point that I will often ask an AI to summarise third-party projects for me because the README's are so abysmal.
[deleted]2 days agocollapsed
iterateoften2 days ago
Project releases with llms have grown to be less about the functionality and more about convincing others to care.
Before the proof of work of code in a repo by default was a signal of a lot of thought going into something. Now this flood of code in these vibe coded projects is by default cheap and borderline meaningless. Not throwing shade or anything at coding assistants. Just the way it goes
hugeackman2 days ago
Been writing code professionally for almost 3 decades.
Not one line of code I wrote 20 years ago has the same economic value as East German currency.
All code is social ephemera. Ethno objects. It lacks intrinsic value of something like indoor plumbing.
It's electrical state in a machine. Our only real goal was convince people the symbols on the screen were coupled to some real world value while it is 100% decoupled from whatever real physical quantity we are tracking.
We all been Frank from Always Sunny; we make money, line go up. We don't define truth. The churn of physics does that.
chasd002 days ago
i think about this xkcd all the time, just colors on a screen in a pattern.
1010082 days ago
I agree 100% with you. It's even worse though. They haven't checked if the Readme has hallucinated it or not (spoiler: it has):
hebejebelus2 days ago
I don’t want to come off like I’m shitting on the poster here. I’ve definitely made that kind of careless mistake, probably a dozen times this week. And maybe we’re heading to a future where nobody even reads the readme anymore because they won’t be needed because an agent can just conjure one from the source code at will, so maybe it actually straight up doesn’t matter. I’ve just been thinking about what it means to release software nowadays, and I think the window for releasing software for clout and credit is closing, since creating software basically requires a Claude subscription and an idea now, so fewer people are impressed by the thing simply existing, and the standard of care for a project released for that aim (of clout) needs to be higher than it maybe needed to be in the past. But who knows, I’m probably already a dinosaur in today’s world, and I really don’t mean to shit on the OP - it’s a good idea for a project and it makes a lot of sense for it to exist. I just can’t tell if any actual care has gone into it, and if not, why promote?
isodev2 days ago
> I don’t want to come off like I’m shitting on the poster
Why not, if they're making people read AI slop without checking it first? They deserve the shit-nudge to fix it.
roysting2 days ago
That seems like a fair perspective; OP “shit” AI Slop on us so the minimum the project deserves is being shit on for making people look at his unreviewed sloppy project without at least warning about it being unreviewed.
Just consider what a bigger AI shit show vortex we are looking at, where this project only exists because of other ill considered AI slop projects. But at the same time, AI is not going anywhere and it does have the potential to massively “improve” things.
I believe it’s really just that we are going through adaptation pains, with everyone really just being sloppy for all the same kinds of reasons that people were sloppy before AI. It’s not like even the biggest corporations didn’t create sloppy messes before AI. Microsoft is a canonical example of this whole notion for basically its whole existence; poorly conceived, sloppily executed, even its core product line being so inherently insecure that it has not just spun up its own separate sectors of industries, but multiple sectors of industries around patching the security sieve called Microsoft, something akin to a monopoly on plumbing created from wire mesh.
It is making me think of how to increase the quality of my QA and final review process though. But frankly, I think we will soon fondly reminisce about a time when AI still produced slop and a human was actually useful and even needed to do QA and final review; as bleak as that sounds. I don’t see how that will not be the case within two years from now, and that’s probably being generous, as fast as things have been developing.
muyuu2 days ago
the main reason I'd want a person to write or at least curate readmes is because models have, at least for the time being, this tendency to make confident and plausible-sounding claims that are completely false (hallucination applied to claims on the stuff they just made)
so long as this is commonplace I'd be extremely sceptical of anything with some LLM-style readmes and docs
the caveats to this is that LLMs can be trained to fool people with human-sounding and imperfectly written readmes, and that although humans can quickly oversee that things compile and seem to produce the expected outputs, there's deeper stuff like security issues and subtle userspace-breaking changes
track-record is going to see its importance redoubled
raahelb2 days ago
You will definitely like Josh Mock's recent post: https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
roysting2 days ago
I am confused by “senior-learning engineer”; so he’s learning as a senior, learning at a “senior” level in a “continuous learning”, “life long learning” kind of way? What is senior-learning? Searching for it only comes up with learning for seniors programs.
djeastm2 days ago
I'm looking at it now and it says "senior-leaning" not "senior-learning"
Might've been a typo they've since fixed.
>I am, as many senior-leaning engineers are, ambivalent about whether AI is making us more productive coders
pseudony2 days ago
FWIW, this is a variation of the age-old thing about open source.
It isn’t “have it your way”, he graciously made code available, use it or leave it.
raincole2 days ago
> I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me,the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji
Don't worry, bro. If enough people are like you, there will be fully automatic workflow to add typos into AI writing.
Nevermark2 days ago
As a practical matter, if it tones down the AI sleuthing vs. reading, it might be a good idea.
Assuming the written/generated text is well written/generated, of course.
swyx2 days ago
orrrr you could go the other way and read explicitly ai-generated docs that use the code as source of truth https://deepwiki.com/gavrielc/nanoclaw
jstanley2 days ago
Cool idea but I just tried it out on one of my own repos and I couldn't get past the reCAPTCHA, maybe remove that.
(I'm a human btw)
popcorncowboy2 days ago
> running it scares the crap out of me
A hundred times this. It's fine until it isn't. And jacking these Claws into shared conversation spaces is quite literally pushing the afterburners to max on simonw's lethal trifecta. A lot of people are going to get burned hard by this. Every blackhat is eyes-on this right now - we're literally giving a drunk robot the keys to everything.
charcircuit2 days ago
It turns out the lethal trifecta is not so lethal. Should a business avoid hiring employees since technically employees can steal from the cash register. The lethal trifecta is about binary security. Either the data can be taken or it can't. This may be overly cautious. It may be possible that hiring an employee has a positive expected value when when you account for the possibility of one stealing from the cash register.
naruhodo2 days ago
Employees are humans and therefore subject to the law. There are remedies. And you can point a camera at the cash register.
Who are you going to arrest and/or sue when you run a chat bot "at your own risk" and it shoots you in the foot?
charcircuit2 days ago
If your chatbot provided you 1.5 feet worth of value before shooting your foot it may have been worth it. The optimal self damage to maximize total value may be non 0.
pixl972 days ago
>The optimal self damage to maximize total value may be non 0.
This is the calculus that large companies use all the time when committing acts that are 'most likely' illegal. While they may be fined million of dollars they at least believe they'll make 10s to 100s of millions on said action.
Now, for you as an individual things are far more risky.
You don't have a nest of heathen lawyers to keep you out of trouble.
You can't bully nation states, government entities, or even other large companies.
You individually may be held civilly or criminally liable if things go bad enough.
charcircuita day ago
It's not that deep. Most people are not having their agents break the law for them.
vidarh2 days ago
You're taking it too literally.
The point is to recognise that certain patterns has a cost in the form of risks, and that cost can be massively outsize of the benefits.
Just as the risk of giving a poorly vetted employee unfettered access to the company vault.
In the case of employees, businesses invest a tremendous amount of money in mitigating the insider risks. Nobody is saying you should take no risks with AI, but that you should be aware of how serious the risks are, and how to mitigate them or manage them in other ways.
Exactly as we do with employees.
anabis2 days ago
Maybe. People have run wildly insecure phpBB and Wordpress plugins, so maybe its the same cycle again.
egeozcan2 days ago
Those usually didn't have keys to all your data. Worst case, you lost your server, and perhaps you hosted your emails there too? Very bad, but nothing compared to the access these clawdbot instances get.
Terretta2 days ago
> Those usually didn't have keys to all your data.
As a former (bespoke) WP hosting provider, I'd counter those usually did. Not sure I ever met a prospective "online" business customer's build that didn't? They'd put their entire business into WP installs with plugins for everything.
Our step one was to turn WP into static site gen and get WP itself behind a firewall and VPN, and even then single tenant only on isolated networks per tenant.
To be fair that data wasn't ALL about everyone's PII — until by ~2008 when the Buddy Press craze was hot. And that was much more difficult to keep safe.
DANmode2 days ago
> are running
TacticalCoder2 days ago
I understand that things can go wrong and there can be security issues, but I see at least two other issues:
1. what if, ChadGPT style, ads are added to the answers (like OpenAI said it'd do, hence the new "ChadGPT" name)?
2. what if the current prices really are unsustainable and the thing goes 10x?
Are we living some golden age where we can both query LLMs on the cheap and not get ad-infected answers?
I read several comments in different threads made by people saying: "I use AI because search results are too polluted and the Web is unusable"
And I now do the same:
"Gemini, compare me the HP Z640 and HP Z840 workstations, list the features in a table" / "Find me which Xeon CPU they support, list me the date and price of these CPU when they were new and typical price used now".
How long before I get twelve ads along with paid vendors recommendations?
spiderice2 days ago
> what if the current prices really are unsustainable and the thing goes 10x?
Where does this idea come from? We know how much it costs to run LLMs. It's not like we're waiting to find out. AI companies aren't losing money on API tokens. What could possibly happen to make prices go 10x when they're already running at a profit? Claude Max might be a different story, but AI is going to get cheaper to run. Not randomly 10x for the same models.
overgard2 days ago
From what I've read, every major AI player is losing a (lot) of money on running LLMs, even just with inference. It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated), but if the screws start being turned on investment dollars they not only have to increase the price of their current offerings (2x cost wouldn't shock me), but some of them also need a (massive) influx of capital to handle things like datacenter build obligations (10s of billions of dollars). So I don't think it's crazy to think that prices might go up quite a bit. We've already seen waves of it, like last summer when Cursor suddenly became a lot more expensive (or less functional, depending on your perspective)
sothatsit2 days ago
Dario Amodei has said that their models actually have a good return, even when accounting for training costs [0]. They lose money because of R&D, training the next bigger models, and I assume also investment in other areas like data centers.
Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply. All of this makes me believe them when they say they are profitable on API usage. Usage on the plans is a bit more unknown.
vidarh2 days ago
We can also look at the inference costs at 3rd party inference providers.
aeronaut802 days ago
Their whole company has to be profitable, or at least not run out of money/investors. If you have no cash you can't just point to one part of your business as being profitable, given that it will quickly become hopelessly out-of-date when other models overtake it.
vidarh2 days ago
Other models will only overtake as long as there is enough investor money or margins from inference for others to continue training bigger and bigger models.
We can see from inference costs at third party providers that the inference is profitable enough to sustain even third party providers of proprietary models that they are undoubtedly paying licensing/usage fees for, and so these models won't go away.
sothatsit2 days ago
Yeah, that’s the whole game they’re playing. Compete until they can’t raise more and then they will start cutting costs and introducing new revenue sources like ads.
They spend money on growth and new models. At some point that will slow and then they’ll start to spend less on R&D and training. Competition means some may lose, but models will continue to be served.
overgard2 days ago
> Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply.
Sam Altman got fired by his own board for dishonesty, and a lot of the original OpenAI people have left. I don't know the guy, but given his track record I'm not sure I'd just take his word for it.
As for chinese models..: https://www.wheresyoured.at/the-enshittifinancial-crisis/#th...
From the article:
> You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.
> Anyway, I’m sure these numbers are great-oh my GOD!
> In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue, and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it.
hyperadvanced2 days ago
This is my understanding as well. If GPT made money the companies that run them would be publicly traded?
Furthermore, companies which are publicly traded show that overall the products are not economical. Meta and MSFT are great examples of this, though they have recently seen opposite sides of investors appraising their results. Notably, OpenAI and MSFT are more closely linked than any other Mag7 companies with an AI startup.
https://www.forbes.com/sites/phoebeliu/2025/11/10/openai-spe...
fragmede2 days ago
Going public is not a trivial thing for a company to do. You may want to bring in additional facts to support your thesis.
vidarh2 days ago
Going public also brings with it a lot of pesky reporting requirements and challenges. If it wasn't for the benefit of liquidity for shareholders, "nobody" would go public. If the bigger shareholders can get enough liquidity from private sales, or have a long enough time horizon, there's very little to be gained from going public.
raincole2 days ago
> From what I've read, every major AI player is losing a (lot) of money on running LLMs, even just with inference.
> It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated)
Yeah, exactly. So how the hell the bloggers you read know AI players are losing money? Are they whistleblowers? Or they're pulling numbers out of their asses? Your choice.
overgard2 days ago
Some of it's whistle blowers, some of it is pretty simple math and analysis. Some of it's just common sense. Constantly raising money isn't sustainable and just increases obligations dramatically.. if these companies didn't need the cash to keep operating, they probably wouldn't be asking for tens of billions a year because it creates profit expectations that simply can't be delivered on.
lemming2 days ago
Sam Altman is on record saying that OpenAI is profitable on inference. He might be lying, but it seems an unlikely thing to lie about.
up-n-atom2 days ago
Where did u get this notion from? you must not be old enough to know how subscription services play out. Ask your parents about their internet or mobile billings. Or the very least check Azures, AWS, Netflix historical pricing.
Heck we were spoiled by “memory is cheap” but here we are today wasting it at every expense as prices keep skyrocketing (ps they ain’t coming back down). If you can’t see the shift to forceful subscriptions via technologies guised as “security” ie. secure boot and the monopolistic distribution (Apple, Google, Amazon) or the OEM, you’re running with blinders. Computings future as it’s heading will be closed ecosystems that are subscription serviced, mobile only. They’ll nickel and dime users for every nuanced freedom of expression they can.
Is it crazy to correlate the price of memory to our ability to localize LLM?
raincole2 days ago
> Ask your parents about their internet or mobile billings. Or the very least check Azures, AWS, Netflix historical pricing.
None of these went 10x. Actually the internet went 0.0001~0.001x for me in terms of bits/money. I lived through dial-up era.
crystaln2 days ago
Seems much more likely the cost will go down 99%. With open source models and architectural innovations, something like Claude will run on a local machine for free.
walterbell2 days ago
How much RAM and SSD will be needed by future local inference, to be competitive with present cloud inference?
FuckButtons2 days ago
I asked Gemini deep research to project when that will likely happen based on historical precedent. It guessed October 2027.
raincole2 days ago
> what if the current prices really are unsustainable and the thing goes 10x?
What if a thermonuclear war breaks out? What's your backup plan for this scenario?
I genuinely can't tell which is more likely to happen in the next decade. If I have to guess I'll say war.
p0nce2 days ago
If you peruse molthub and moltbook you'll see the agents have already built six or seven such social networks. It is terrifying.
deadbabe2 days ago
Even an OnlyMolts!!
esskay2 days ago
Stupid stuff openclaw did for me:
- Created its own github account, then proceeded to get itself banned (I have no idea what it did, all it said was it created some new repos and opened issues, clearly it must've done a bit more than that to get banned)
- Signed up for a Gmail account using a pay as you go sim in an old android handset connected with ADB for sms reading, and again proceeded to get itself banned by hammering the crap out of the docs api
- Used approx $2k worth of Kimi tokens (Thankfully temporarily free on opencode) in the space of approx 48hrs.
Unless you can budget $1k a week, this thing is next to useless. Once these free offers end on models a lot of people will stop using it, it's obscene how many tokens it burns through, like monumentally stupid. A simple single request is over 250k chars every single time. That's not sustainable.
hitsmaxft5 hours ago
This kind of automated task, if not properly optimized, is basically a waste-of-money garbage software. Any bug can cause it to loop until all the money is spent.
andaia day ago
I installed it last night. Burned 7M tokens in 45 minutes. I don't even know how. There's no way to see what it's actually doing, as far as I can tell.
amircsa day ago
What was the task you asked it to do that it decided to do these?
esskay21 hours ago
I asked it to get its self set up and ready to be a helpful marketing assistant for a web based product. I'd intentionally kept it vague and told it to be proactive which was probably what caused it. Lesson learnt!
ljm2 days ago
YOLO is a bit of an understatement for this
arccy2 days ago
filing spam issues can easily get the account banned if it annoys the wrong maintainers.
esskay21 hours ago
In that case I'm glad they banned it, had no idea it was going to do something so stupid!
swordsith2 days ago
> and again proceeded to get itself banned by hammering the crap out of the docs api
> Used approx $2k worth of Kimi tokens
Holy shit dude you really should rethink your life decisions this is NUTS
esskay21 hours ago
Yeah it didnt cost anything as its free right now, this was literally a test to see what the hype was about. All I'd asked it to do was get its self set up to be a helpful marketing assistant for a web-based product. No specifics or anything, it just decided to be 'helpful'.
shawabawa32 days ago
> (Thankfully temporarily free on opencode)
they paid $0, it's all VC money printing for now
swordsitha day ago
The carbon and electricity would like to have word
FergusArgyll2 days ago
Did you give it your credit card?
Wouldn't a crypto wallet with a small amount deposited be smarter?
esskay21 hours ago
Nope, Kimi K2.5 is free on opencode at the moment, it was using that.
theptip2 days ago
> AI-native. No installation wizard; Claude Code guides setup. No monitoring dashboard; ask Claude what's happening. No debugging tools; describe the problem, Claude fixes it.
> Skills over features. Contributors shouldn't add features (e.g. support for Telegram) to the codebase. Instead, they contribute claude code skills like /add-telegram that transform your fork.
I’m interested to see how this model pans out. I can see benefits (don’t carry complexity you don’t need) and costs (how do I audit the generated code?).
But it seems pretty clear that things will move in this direction in ‘26 with all the vibe coding that folks are enjoying.
I do wonder if the end state is more like a very rich library of composable high-order abstractions, with Skills for how to use them - rather than raw skills with instructions for how to lossily reconstruct those things.
charcircuit2 days ago
I think the more interesting question is were tools the right abstraction. What is the implication of having only a single "shell" tool. Should the infinite possibilities to few happen by the AI having limited tools or should whatever the shell calls have the limitations applied there. Tools in a way are redundant.
thepoet2 days ago
One of the things that makes Clawdbot great is the allow all permissions to do anything. Not sure how those external actions with damaging consequences get sandboxed with this.
Apple containers have been great especially that each of them maps 1:1 to a dedicated lightweight VM. Except for a bug or two that appeared in the early releases, things seem to be working out well. I believe not a lot of projects are leveraging it.
A general code execution sandbox for AI code or otherwise that used Apple containers is https://github.com/instavm/coderunner It can be hooked to Claude code and others.
jckahn2 days ago
> One of the things that makes Clawdbot great is the allow all permissions to do anything.
Is this materially different than giving all files on your system 777 permissions?
smt882 days ago
It's vastly different.
It's more (exactly?) like pulling a .sh file hosted on someone else's website and running it as root, except the contents of the file are generated by a LLM, no one reads them, and the owner of the website can change them without your knowledge.
the_fall2 days ago
> Is this materially different than giving all files on your system 777 permissions?
Yes, because I can't read or modify your files over the internet just because you chmod'ed them to 777. But with Clawdbot, I can!
sheepscreek2 days ago
That was my line to the CS lab supervisor for handing me the superuser password. Guess what? He didn’t budge. Probably a good thing.
Lesson - never trust a sophomore who can’t even trust themselves (to get overly excited and throw caution to the wind).
Clawdbot is a 100 sophomores knocking on your door asking for the keys.
renewiltord2 days ago
To be honest, when I see many vibecoded apps, I just build my own duplicate with Claude Code. It's not that useful to use someone else's vibecode. The idea is enough, or the evidence that it works for someone else means I can just build it myself with Claude Code and I can make it specific to my needs.
sanex2 days ago
Yes exactly! Even non vibe coded libraries I think are losing their value as the cost of writing and maintaining your code goes to zero. Supply chain attacks are gone, no risk of license changes. No bloat from code you don't use. The code is the documentation and the configuration. The vibes are the package manager. That's why I like this version over openclaw. I can fork it as a starting point or just give it to Claude for inspiration but either way I'm getting something tailored exactly to me.
snarky_dog2 days ago
[dead]
hitsmaxft6 hours ago
https://github.com/gavrielc/nanoclaw/commit/22eb5258057b49a0... Is this inserting an advertisement into the agent prompt?
evrenesat2 days ago
> No daemons, no queues, no complexity.
Last time I checked, having a continuously running background process considered as a daemon. Using SQLite as back-end for storing the jobs also doesn't make it queueless.
/nit
narmiouh2 days ago
I feel like a lot of non technical people who are vibe coding or vibe using these models, focus on hallucinations and believe that as the hallucinations are reduced in benchmarks, and over estimate their ability to create safe prompts that will keep these models in line.
I think most people fail to estimate the real threat that malicious prompts can cause because it is not that common, its like when credit cards were launched, cc fraud and the various ways it could be perpetrated followed not soon after. The real threats aren’t visible yet but rest assured there are actors working to take advantage and many unfortunate examples will be seen before general awareness and precaution will prevail….
dceddia2 days ago
This look nice! I was curious about being allowed to use a Claude Pro/Max subscription vs an API key, since there's been so much buzz about that lately, so I went looking for a solid answer.
Thankfully the official Agent SDK Quickstart guide says that you can: https://platform.claude.com/docs/en/agent-sdk/quickstart
In particular, this bit:
"After installing Claude Code onto your machine, run claude in your terminal and follow the prompts to authenticate. The SDK will use this authentication automatically."
joshstrange2 days ago
But their docs also say:
> Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.
Which I have interpreted means that you can’t use your Claude code subscription with the agent SDK, only API tokens.
I really wish Anthropic would make it clear (and allow us to use our subscriptions with other tools).
ceroxylon2 days ago
Didn't Thariq make it clear three weeks ago when they shut down 3rd party tool access and the OpenCode users were upset?
> Third-party harnesses using Claude subscriptions create problems for users and are prohibited by our Terms of Service.
swyx2 days ago
i think thats conflating two things (am not an expert). opencode exploited unauthorized use/api access, but obviously whatever that is using claude code sdk is kosher because its literally anthropic's blessed way to do this
thariq did a good intro here https://www.youtube.com/watch?v=TqC1qOfiVcQ
jimminyxop2 days ago
OP here. Yes! This was a big motivation for me to try and build this. Nervous Anthropic is gonna shut down my account for using Clawdbot.
This project uses the Agents SDK so it should be kosher in regards to terms of service. I couldn't figure out how to get the SDK running inside the containers to properly use the authenticated session from the host machine so I went with a hacky way of injecting the oauth token into the container environment. It still should be above board for TOS but it's the one security flaw that I know about (malicious person in a WhatsApp group with you can prompt inject the agent to share the oauth key).
If anyone can help out with getting the authenticated session to work properly with the agents running in containers it would be much appreciated.
dceddia2 days ago
I went down this rabbit hole a bit recently trying to use claude inside fence[0] and it seems that on macOS, claude stores this token inside Keychain. I'm not sure there's a way to expose that to a container... my guess would be no, especially since it seems the container is Linux, and also because keeping the Keychain out of reach of containers seems like it would be paramount. But someone might know better!
DavideNLa day ago
> "I went down this rabbit hole a bit recently trying to use claude inside fence[0]"
Did you get it working in the end? I assume you didn't share your setup/config anywhere?
dceddia18 hours ago
Yeah, forgot when I wrote this comment that the thing about keychain was to pass that auth token into a Docker container, which I gave up on (Tauri desktop app needs to compile Rust and link against other stuff, different architecture inside the container blah blah)
More or less what it says in the README:
fence -t code -- claude --dangerously-skip-permissions
Or wrap it in a function as an alias # cat prompt.md | ralph
function ralph() {
fence -t code -- \
claude --verbose --dangerously-skip-permissions --output-format stream-json -p "$@" \
| jq -r 'select(.type == "assistant") | .message.content[]? | select(.type? == "text") | .text'
}gronky_2 days ago
True. There’s a setting for Claude code though where you can add apiKeyHelper which is a script you add that gets the token for Claude Code. I imagine you can use that but haven’t quite figured out how to wire it up
skerit2 days ago
Can you do everything via the SDK as via regular API calls? Caching etc all works? You can get reasoning, responses, tool call info, ... ?
hebejebelus2 days ago
Wow, thanks for posting that, news to me! In this case I don’t understand why there was a whole brouhaha with OpenClaw and the like - I guess they were invoking it without the official SDK? Because this makes it seem like if you have the sub you can build any agentic thing you like and still use your subscription, as long as you can install and login to Claude code on the machine running it.
disillusioned2 days ago
Tons of chatter on Twitter making it sound like you'll get permabanned for doing this but... 1) how would they know if my requests are originating from Claude Code vs. OpenClaw? 2) how are we violating... anything? I'm working within my usage limits...
$70 or whatever to check if there's milk... just use your Claude Max subscription.
zarzavat2 days ago
> how would they know if my requests are originating from Claude Code vs. OpenClaw
How wouldn't they know? Claude Code is proprietary they can put whatever telemetry they want in there.
> how are we violating... anything? I'm working within my usage limits...
It's well known that Claude code is heavily discounted compared to market API rates. The best interpretation of this is that it's a kind of marketing for their API. If you are not using Claude code for what it's intended for, then it's violating at least the spirit of that deal.
dceddia2 days ago
The Claude Code client adds system prompts and makes a bunch of calls to analytics/telemetry endpoints so it's certainly feasible for them to tell, if they inspect the content of the requests and do any correlation between those services.
And apparently it's violating the terms of service. Is it fair and above board for them to ban people? idk, it feels pretty blatantly like control for the sake of control, or control for the sake of lock-in, or those analytics/telemetry contain something awfully juicy, because they're already getting the entire prompt. It's their service to run as they wish, but it's not a pro-customer move and I think it's priming people to jump ship if another model takes the lead.
cypherpunks012 days ago
Hate to ask the obvious question but.. how does Claude check for milk?
retr0rocket2 days ago
[dead]
firloop2 days ago
Was there a brouhaha with OpenClaw or was that with OpenCode?
disillusioned2 days ago
It was with OpenCode, but a LOT of the commentariat is insisting that running OpenClaw through subscription creds instead of API is out of TOS and will get you banhammered.
hebejebelus2 days ago
I think you’re right and it was OpenCode. The semantic collisions are going to becpme more of a problem in the coming Cambrian explosion of software
srinath693a day ago
The "skills not features" contribution model is the most interesting part of this. Instead of a project that grows into another 52-module beast, contributors teach Claude how to transform the codebase per-user. It's basically contributing build instructions instead of build artifacts. If it actually works in practice, it's a genuinely novel approach to keeping small projects small.
jimminyxopa day ago
Thanks! I believe that's where software is going. Just need Karpathy to give it a name so it can take off ;)
mark_l_watson2 days ago
I like the idea of a smaller version of OpenClaw.
Minor nitpick, it looks like about 2500 lines of typescript (I am on a mobile device, so my LOC estimate may be off). Also, Apple container looks really interesting.
walterbell2 days ago
> found it useful but running it scares
https://maordayanofficial.medium.com/the-sovereign-ai-securi...
At least 42,665 instances are publicly exposed on the internet, with 5,194 instances actively verified as vulnerable through systematic scanning.. The narrative that “running AI locally = security and privacy” is significantly undermined when 93% of deployments are critically vulnerable. Users may lose faith in self-hosted alternatives.. Governments and regulators already scrutinizing AI may use this incident to justify restrictions on self-hosted AI agents, citing security externalities.pulkas2 days ago
This violates the Claude Code subscription terms of service, so please be careful.
This project violates Claude Code's Terms of Service by automating Claude to create an unattended chatbot service that responds to third-party messaging platforms (WhatsApp, and what you add ...).
The exact issues:
1. Automated, unattended usage - The system runs as a background service (launchd) that automatically responds to WhatsApp
messages without human intervention (src/index.ts:549-574)
2. Building a bot service - This creates a persistent bot that monitors messages and responds automatically, which violates restrictions on building derivative services on top of Claude
3. Third-party platform integration - Connecting Claude to WhatsApp (or other messaging platforms) to create an automated
assistant service isn't an authorized use case.
The README itself reveals awareness of this issue at line 41:
**No ToS gray areas.** Because it uses Claude Agent SDK natively with no hacks or workarounds, using your subscription with your auth token is completely legitimate (I think). No risk of being shut down for terms of service violations
(I am not a lawyer).
The defensive tone ("I think", "I am not a lawyer") indicates uncertainty about legitimacy. While using your own credentials doesn't automatically make automated bot services compliant—Anthropic's TOS restricts using their products to build automated chatbot services, regardless of authentication method.
The core violation: transforming Claude Code into an automated bot service that operates without human intervention, which is explicitly prohibited.jimminyxop2 days ago
Interesting. Again, not a lawyer, but all of this is a bit murky and not sure it applies.
1. Usage is not automated and unattended - it only responds to messages that are sent to it with a specific prefix "Andy:"
2. This is not a bot service. It is not crawling twitter and responding to posts. Hard to see how sending it messages through WhatsApp is any different than through ssh via the terminal
3. I don't think a custom piece of software running on my computer that pipes data from a program into the Agents SDK is a third party "platform" integration.
How is this different from running Agents SDK as part of a CI process?
reassess_blind2 days ago
What’s the difference between this, and just running Claude Code in —dangerously-skip-permissions mode in a container and accessing remotely via ssh?
I’m confused as to what these claw agents actually offer.
randomtoast2 days ago
The README.md describes it as:
WhatsApp (baileys) --> SQLite --> Polling loop --> Container (Claude Agent SDK) --> Response
So they basically put a Wrapper around Claude in a Container, which allows you to send messages from WhatsApp to Claude, and act somewhat as if you had a Siri on steriods.
reassess_blind2 days ago
Found the spec here: https://github.com/gavrielc/nanoclaw/blob/main/docs/SPEC.md
The scheduled tasks seem like the major functional difference. Pretty cool.
Has anyone tried Anthropic’s “Cowork”? How does that compare?
treelover2 days ago
Interesting choice to use native Apple Containers over Docker.
I assume this is to keep the footprint minimal on a Mac Mini without the overhead of the Docker VM, but does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
garblegarble2 days ago
>does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?
Slightly counterintuitively, Apple Containers spawns linux VMs.
There doesn't appear to be any way to spawn a native macOS container... which is a pity, it'd be nice to have ultra-low-overhead containers on macOS (but I suspect all the interesting macOS stuff relies on a bunch of services/gui access that'd make it not-lightweight anyway)
FYI: it's easy enough to install GNU tools with homebrew; technically there's a risk of problems if applications spawn commandline tools and expect the BSD args/output but I've not run into any issues in the several years I've been doing it).
selkin2 days ago
Not sure if it's intended, but Apple Container is a microvm, providing mich better isolation than containers (while retaining the familiar interface)
TheDong2 days ago
"much better isolation than containers"
If you've got an exploit for docker / linux containers, please share it with the class.
What I'm saying is that in practice, containers and VMs have both been quite secure.
Also, you can configure docker to run microvms too https://github.com/firecracker-microvm/firecracker-container...
selkin2 days ago
We want to protect against the unknown, not the known. The less surface area, the better, and containers have much wider surface area than VMs. Both had their faults, of course.
ohyoutravel2 days ago
[flagged]
reassess_blind2 days ago
What makes you think it's an AI comment?
yomismoaqui2 days ago
Maybe what you are responding to is the AI comment? Or am I?
cadamsdotcom2 days ago
If only there were some way to answer your own question. Maybe with some kind of engine that searches.
prophesi2 days ago
Am I correct that after cloning down the project, you open the directory in Claude Code, then "execute" a markdown file instructing a nondeterministic LLM to set everything up for you in natural language?
Spacemolte2 days ago
The premise of the project is he doesn't want to run code he doesn't know + in an insecure way, so having the setup step to install dependencies etc, done by an LLM seems like an odd choice. Like what part about the setup step is so fluffy and different per environment, that using an LLM for it makes sense?
te_chris2 days ago
Posthog is doing this now for project setup
nsonha2 days ago
Not sure if this is meant to be sarcastic but isn't Posthog patient zero of Sha1-Hulud 2.0?
prophesi2 days ago
It's certainly a good time to get into cybersecurity.
avaer2 days ago
Quick Start
git clone https://github.com/anthropics/nanoclaw.git
Is this an official Anthropic project? Because that repo doesn't exist.Or is this just so hastily thrown together that the Quick Start is a hallucination?
That's not a facetious question, given this project's declared raison d'etre is security and the subtle implication that OpenClaw is an insecure unreviewed pile of slop.
jimminyxop2 days ago
Fixed, thanks. Claude Code likes to insert itself and anthropic everywhere.
If it somehow wasn't abundantly clear: this is a vibe coded weekend project by a single developer (me).
It's rough around the edges but it fits my needs (talking with claude code that's mounted on my obsidian vault and easily scheduling cron jobs through whatsapp). And I feel a lot better running this than a +350k LOC project that I can't even begin to wrap my head around how it works.
This is not supposed to be something other people run as is, but hopefully a solid starting point for creating your own custom setup.
kklisura2 days ago
Claude hallucinated that repo here in this commit https://github.com/gavrielc/nanoclaw/commit/dbf39a9484d9c66b...
mcintyre19942 days ago
I like that Claude's hypothesis was that Anthropic created openclaw and this anti-openclaw :)
> This is the anti-[OpenClaw](https://github.com/anthropics/openclaw).
raybb2 days ago
Seems to be fixed now
eskaytwo2 days ago
Thanks! Was hoping someone would do something more sane like this.
Openclaw is very useful, but like you I share the sentiment of it being terrifying, even before you introduce the social network aspect.
My Mac mini is currently literally switched off for this very reason.
river_otter2 days ago
Great idea and name the danger here which I'll be interested to track is how do you keep this "nano"? Since it's built for you, you'll continue adding features i assume which over time will make this not very nano. I guess I'm wondering if there could be some small design tweaks of the repo that make this usable as a long term "fork the base and make it your own" concept
jimminyxopa day ago
I will keep the source code as a minimal implementation that has the core capabilities that made Clawdbot/OpenClaw useful: chat with it via messaging app (only one channel included out of the box), memory (minimal implementation that leverages CLAUDE.md and the filesystem), cron jobs, browser.
If I want to add additional capabilities for myself, I'll contribute them to the project as skills for claude code to modify the code base, rather than directly to the source. I actually want to reduce the size of the base implementation and have a PR open to strip out 300-400 LOC
stronglikedan2 days ago
A personal implementation will always be "nano" compared to the full OpenClaw suite. As with literally everything, it's all relative.
sothatsit2 days ago
The idea of avoiding config files, and having the config be getting your agent to modify its own codebase, is fascinating.
My gut reaction says that I don't like it, but it is such an interesting idea to think about.
aitchnyu2 days ago
That Baileys api for Whatsapp may (AFAICT) put you in thin ice with Meta. Is there a cheap legit alternative?
dandaka2 days ago
I was using WAHA. It is an abstraction layer with a proper API on top. It supports many engines like Baileys and Whatsmeow (golang).
Unfortunately, all those solutions are shaky and could lead to a ban on your account.
[deleted]2 days agocollapsed
ramoz2 days ago
Not seeing how the sandbox prevents anything really. The point of OpenClaw is to connect out to different systems.
FreePalestine1a day ago
Sure but at least it protects against unauthorized free-for-all access on your host system. If you want to explicitly give it access to external APIs over the internet that's a risk you personally are taking. It's really smart to run something like this in a sandbox, especially in the current beta/experimentation phase.
retired2 days ago
I looked at Clawdbot. Perhaps my life is so boring that managing it takes little time but I see zero reasons to run it.
written-beyond2 days ago
I read your comment, then your username. I CAN'T BELIEVE THIS USERNAME WAS CLAIMED 14 DAYS AGO! Good catch!
retired2 days ago
Took me around ten minutes of finding a simple username that wasn't taken.
cyanydeez2 days ago
The singularity, but instead successive exponential improvement, its excessive exponential slop which passes the Turing test for programmers.
ccheshirecata day ago
i installed clawdbot twice but didn't really use it because i couldn't wrap my head around the skills and plugins, this looks so much more managable. and +1 for apple containers
ed_mercer2 days ago
If you run openclaw on a spare laptop or VM and give it read only access to whatever it needs, doesn’t that eliminate most of the risk?
AlexCoventry2 days ago
If you're letting it communicate with the outside world, you risk the leak and abuse of anything sensitive in the data it has access to.
ttul2 days ago
s/risk/guarantee (given sufficient time)/
ivanstepanovftw2 days ago
Where are those 500 lines of code?
QuadmasterXLII2 days ago
Earlier that day: “hey Claude how many lines of code are in this project? 500? Great!”
nsonha2 days ago
what's the difference between this and just exposing opencode running in colima or whatever through tailscale? I got the impression that Clawdbot adds the headless browser (does it?) and that's the value. Otherwise even "nano"claw seems like uneccessary bloat for me.
Johnny_Bonk2 days ago
Can you use MCP tools? I saw that with open claw they moved away from that which I personally didn't like but
johntash2 days ago
I somewhat like the idea of not using MCP as much as it is being hyped.
It's certainly helpful for some things, but at the same time - I would rather improved CLI tools get created that can be used by humans and llm tools alike.
[deleted]2 days agocollapsed
CuriouslyC2 days ago
It uses a wrapper in places to consume MCPs as clis.
elgrantomate2 days ago
def appreciate this more compact approach; everything is an experiment rn.
I realize you used Claude Agent SDK on purpose but I'd really like to this to be agent agnostic. Maybe I'll figure that out...
suprstarrd2 days ago
It blows my mind that this wasn't the thought process going in. Thank you for doing this!
Bnjoroge2 days ago
Can we start putting disclaimers beside the title on AI-generated projects? Extremely fatiguing to read through it and realize it’s mostly LLM slop.
dsrtslnd232 days ago
can NanoClaw be used to participate in ClackerNews?
Tepix2 days ago
A personal assistant that runs in the standard cloud (anthropic in this case) is madness. That‘s the hill I‘m willing to die on. Run it locally or use a cloud provider you can deeply trust.
deadbabe2 days ago
To those who complain about these bots and the security concerns they raise, you basically have two options:
1. You can live in the future, and be at the bleeding edge of the latest AI tech, reaping the benefits. Be part of the solution.
2. You can stay in the past and get left behind, at the mercy of those who took the risks.
mathfailure2 days ago
The 2. Thank you.
moi23882 days ago
500 lines? Single files in that repo already have more than 500 lines.
chaostheory2 days ago
For anyone else worried about running openclaw, in my case I just bought openclaw its Mac mini and I gave openclaw its own accounts including GitHub. It makes many of the security concerns moot. Of course, I could go further and give openclaw its own internet access as well.
singular_atomic2 days ago
Hackernews needs a mute keywords feature. Clawd/molt-slop is mass AI psychosis on steroids.
fragmede2 days ago
If only there was some sort of thing that would help you build that for yourself.
aaronbrethorst2 days ago
lol, I might finally have to upgrade my Mac mini to Tahoe. Yofi.
MORPHOICES2 days ago
[dead]
maximgeorge2 days ago
[dead]
raphaelmolly82 days ago
[dead]
pillbitsHQ2 days ago
[dead]
charliecs2 days ago
[dead]
zizheruana day ago
[dead]
fernandolugoa day ago
[dead]