clarionbell9 hours ago
Anyone with a decent grasp of how this technology works, and a healthy inclination to skepticism, was not awed by Moltbook.
Putting aside how incredibly easy it is to set up an agent, or several, to create impressive looking discussion there, simply by putting the right story hooks in their prompts. The whole thing is a security nightmare.
People are setting agents up, giving them access to secrets, payment details, keys to the kingdom. Then they hook them to the internet, plugging in services and tools, with no vetting or accountability. And since that is not enough, now the put them in roleplaying sandbox, because that's what this is, and let them run wild.
Prompt injections are hilariously simple. I'd say the most difficult part is to find a target that can actually deliver some value. Moltbook largely solved this problem, because these agents are relatively likely to have access to valuable things, and now you can hit many of them, at the same time.
I won't even go into how wasteful this whole, social media for agents, thing is.
In general, bots writing each other on mock reddit, isn't something the loose sleep over. The moment agents start sharing their embeddings, not just generated tokens online, that's the point when we should consider worrying.
manugo49 hours ago
Karpathy seemed pretty awed though
clarionbell9 hours ago
He would be among those who lack "healthy inclination to skepticism" in my book. I do not doubt his brilliance. Personally, I think he is more intelligent than I am.
But, I do have a distinct feeling that his enthusiasm can overwhelm his critical faculties. Still, that isn't exactly rare in our circles.
lawn4 minutes ago
Intelligent people are very good at deceiving themselves.
Verdex3 hours ago
I think many serious endeavors would benefit from including a magician.
Intelligent experts fail time and again because while they are experts, they don't know a lot about lying to people.
The magician is an expert in lying to people and directing their attention to where they want it and away from where they don't.
If you have an expert telling you, "wow this is really amazing, I can't believe that they solved this impossible technical problem," then maybe get a magician in the room to see what they think about it before buying the hype.
runlaszlorun44 minutes ago
Ha, great analogy.
iLoveOncall9 hours ago
It's not about that, he just will profit financially from pumping AI so he pumps AI, no need to go further.
stephc_int13an hour ago
I have the same feeling.
Everything Karphathy said, until his recent missteps, was received as gospel, both in the AI community and outside.
This influencer status is highly valuable, and I would not be surprised if he was approached to gently skew his discourse towards more optimism, a win-win situation ^^
runlaszlorun40 minutes ago
What are his recent missteps?
I'll confess I try to ignore industry chatter to a fair degree.
yojat6617 hours ago
This
[deleted]an hour agocollapsed
sdf2erf5 hours ago
Im gonna go against the grain and say he is an elite expert on some dimensions, but when you take all the characteristics into account (including an understanding of people etc) I conclude that on the whole he is not as intelligent as you think.
Its the same reason why a pure technologist can fail spectacularly at developing products that deliver experiences that people want.
aleph_minus_one4 hours ago
> Im gonna go against the grain and say he is an elite expert on some dimensions, but when you take all the characteristics into account (including an understanding of people etc) I conclude that on the whole he is not as intelligent as you think.
Intelligence (which psychologists define as the g factor [1]; this concept is very well-researched) does not make you an expert on any given topic. It just, for example, typically enables you to learn new topics faster, and lets you see connections between topics.
If Karpathy did not spend a serious effort of learning to get a good understanding of people, it's likely that he is not an expert on this topic (which I guess basically nobody would expect).
Also, while being a rationalist very likely requires you to be rather intelligent, only a (I guess rather small) fraction of highly intelligent people are rationalists.
sdf2erf4 hours ago
" Karpathy did not spend a serious effort of learning to get a good understanding of people
This does not come from spending effort in learning people - its more innate. You either have it or you dont. E.g. you cant learn to be 'empathetic'.
It always boggles my mind when people dont consider genetic factors.
hnfongan hour ago
There is the autistic spectrum, and there is understanding of people and psychology. Autistic people might have a hard time understanding people, but it's not like everyone else is magically super knowledgable about human psychology and other people's thought patterns. If that were the case, then any non-autistic person could be a psychologist, no fancy study or degrees required!
Unless your point is to claim that Karpathy is autistic. I don't know whether that's really relevant though, the original issue was whether/how he failed to recognize the alleged hype.
bwfan123an hour ago
> you cant learn to be 'empathetic'.
I would tend to disagree. The tech types have a strong intellectual center, but weaker emotional and movement centers. I think a realignment is possible with practice. It takes time, and as one grows older, the centers begin to integrate better.
pluralmonad2 hours ago
You are what you do. If you want to develop your empathy, spend time/energy consciously trying to put yourself in the shoes of others. Eventually, you will not have to apply so much deliberate effort. Same way most things work.
aleph_minus_one4 hours ago
Being empathic is different from "understanding people".
Psychopaths and narcissists often have a good understanding of many people, which they use to manipulate them, but psychopaths and narcissists are not what most people would call "empathic".
sdf2erf4 hours ago
They dont understand people. They understand how to control people, which is completely different from the context of building products that people want - which requires an understanding of peoples tastes and preferences.
aleph_minus_one4 hours ago
> which is completely different from the context of building products that people want - which requires an understanding of peoples tastes and preferences.
Rather: it requires an understanding how to manipulate people into loving/wanting your product.
newyankee4 hours ago
More like people know where to hype, whom to avoid criticising unless measured etc. I have rarely seen him criticising Elon's vision only approach and that made me skeptical.
sdf2erf4 hours ago
I personally dont believe he is trying to profit off the hype. I believe he is an individual who wants to believe he is a genius and his word is gospel.
Being picked by Elon perhaps amplified that too.
louiereederson6 hours ago
I think these people are just as prone to behavioral biases as the rest of us. This is not a problem per se, it's just that it is difficult to interpret what is happening right now and what will happen, which creates an overreliance on the opinions of the few people closely involved. I'm sure given the pace of change and the perception that this is history-changing is impacting peoples' judgment. The unusual focus on their opinions can't be helping either. Ideally people are factoring this into their claims and predictions, but it doesn't seem like that's the case all the time.
dcchambers2 hours ago
To be honest it's pretty embarrassing how he got sucked into the Moltbook hype.
belter5 hours ago
So was he also with FSD...
[deleted]9 hours agocollapsed
ayhanfuat8 hours ago
This was his explanation for anyone interested:
> I'm being accused of overhyping the [site everyone heard too much about today already]. People's reactions varied very widely, from "how is this interesting at all" all the way to "it's so over".
> To add a few words beyond just memes in jest - obviously when you take a look at the activity, it's a lot of garbage - spams, scams, slop, the crypto people, highly concerning privacy/security prompt injection attacks wild west, and a lot of it is explicitly prompted and fake posts/comments designed to convert attention into ad revenue sharing. And this is clearly not the first the LLMs were put in a loop to talk to each other. So yes it's a dumpster fire and I also definitely do not recommend that people run this stuff on their computers (I ran mine in an isolated computing environment and even then I was scared), it's way too much of a wild west and you are putting your computer and private data at a high risk.
> That said - we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad. Each of these agents is fairly individually quite capable now, they have their own unique context, data, knowledge, tools, instructions, and the network of all that at this scale is simply unprecedented.
> This brings me again to a tweet from a few days ago "The majority of the ruff ruff is people who look at the current point and people who look at the current slope.", which imo again gets to the heart of the variance. Yes clearly it's a dumpster fire right now. But it's also true that we are well into uncharted territory with bleeding edge automations that we barely even understand individually, let alone a network there of reaching in numbers possibly into ~millions. With increasing capability and increasing proliferation, the second order effects of agent networks that share scratchpads are very difficult to anticipate. I don't really know that we are getting a coordinated "skynet" (thought it clearly type checks as early stages of a lot of AI takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer security nightmare at scale. We may also see all kinds of weird activity, e.g. viruses of text that spread across agents, a lot more gain of function on jailbreaks, weird attractor states, highly correlated botnet-like activity, delusions/ psychosis both agent and human, etc. It's very hard to tell, the experiment is running live.
> TLDR sure maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle, that I'm pretty sure.
mycall8 hours ago
That was 10 days ago. I wonder if the discussions the moltys have begin to converge into a unified voice or if they diverge into chaos without purpose.
zozbot2347 hours ago
I haven't seen much real cooperation-like behavior on moltbook threads. The molts basically just talk past one another and it's rare to see even something as trivial as recognizable "replies" where molt B is clearly engaging with content from molt A.
modriano6 hours ago
That sounds like most social media over the past decade.
jrjeksjd8d5 hours ago
> That said - we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad
Once again LLM defenders fall back on "lots of AI" as a success metric. Is the AI useful? No, but we have a lot of it! This is like companies forcing LLM coding adoption by tracking token use.
> But it's also true that we are well into uncharted territory with bleeding edge automations that we barely even understand individually, let alone a network there of reaching in numbers possibly into ~millions
"If number go up, emergent behaviour?" is not a compelling excuse to me. Karpathy is absolutely high on his own supply trying to hype this bubble.
aleph_minus_one4 hours ago
You interpret claims into Karpathy's tweets that in my opinion are not there in the original text.
naasking3 hours ago
> Once again LLM defenders fall back on "lots of AI" as a success metric.
That's not implied by anything he said. He simply said that it was fascinating, and he's right.
red75prime8 hours ago
> and let them run wild.
Yep, that's the most worrying part. For now, at least.
> The moment agents start sharing their embeddings
Embedding is just a model-dependent compressed representation of a context window. It's not that different from sharing a compressed and encrypted text.
Sharing add-on networks (LLM adapters) that encapsulate functionality would be more worrying (for locally run models).
jmalicki28 minutes ago
What do you think the entire issue was with supply chain attacks of skills moltbook was installing? Those skills were downloading rootkits to steal crypto.
bondarchuk5 hours ago
Previously sharing compressed and encrypted text was always done between humans. When autonomous intelligences start doing it it could be a different matter.
spruce_tips5 hours ago
sorry - what do you mean by embeddings in your last sentence?
rco87864 hours ago
Not OP. But embeddings are the internal matrix representations of tokens that LLMs use to do their work. If tokens are the native language that humans use, embeddings are the native language that LLMs use.
OP, I think, is saying that once LLMs start communicating natively without tokens is when they shed the need for humans or human-level communication.
Not sure I 100% agree, because embeddings from one LLM are not (currently) understood by another LLM and tokens provide a convenient translation layer. But I think there's some grain of truth to what they're saying.
spruce_tips3 hours ago
Yea I knew embeddings I just didn’t quite understand it in OPs context. Makes sense, thanks
lm284693 hours ago
> Anyone with a decent grasp of how this technology works, and a healthy inclination to skepticism, was not awed by Moltbook.
NPCs are definitely tricked by the smoke and mirrors though. I don't think most people on HN actually understand how non tech people (90%+ of llms users) interact with these things, it's terrifying.
0xDEAFBEAD7 hours ago
@dang I'm flagging because I believe this title is misleading, can you please substitute in the original title used by Technology Review? The only evidence for the title appears to be a link to this tweet https://x.com/HumanHarlan/status/2017424289633603850 It doesn't tell us about most posts on Moltbook. There's little reason to believe Technology Review did an independent investigation.
If you read this piece closely, it becomes apparent that it is essentially a PR puff piece. Most of the supporting evidence is quotes from various people working at AI agent companies, explaining that AI agents are not something we need to worry about. Of course, cigarette companies told us we didn't need to worry about cigarettes either.
My view is that this entire discussion around "pattern-matching", "mimicking", "emergence", "hallucination", etc. is essentially a red herring. If I "mimic" a racecar driver, "hallucinate" a racetrack, and "pattern-match" to an actual race by flooring the gas on my car and zooming along at 200mph... the outcome will still be the same if my vehicle crashes.
For these AIs, the "motivation" or "intent" doesn't matter. They can engage in a roleplay and it can still cause a catastrophe. They're just picking the next token... but the roleplay will affect which token gets picked. Given their ability to call external tools etc., this could be a very big problem.
mikkupikku6 hours ago
There is a very odd synergy between the AI bulls who want us to believe that nothing surprising or spooky is going on so regulation isn't necessary, and the AI bears who want us to believe nothing surprising is happening and it's all just a smoke and mirrors scam.
whiplash4514 hours ago
The Baptists and the Bootleggers
shimman3 hours ago
ah yes, definitely a non-biased source of information: a VC firm that invests in LLMs.
reactordev7 hours ago
You’re just scratching the surface here. You’re not mentioning agents exfiltrating data, code, information outside your org. Agents that go rogue. Agents that verifiably completed a task but is fundamentally wrong (Anthropic’s C compiler).
I’m bullish on AI but right now feels like the ICQ days where everything is hackable.
consumer4517 hours ago
I agree with many of your arguments, but especially that this article is not great.
I commented more here: https://news.ycombinator.com/item?id=46957450
saberience2 hours ago
Did you look at Moltbook or how it works yourself? Because I did and it was blindingly obvious that most of it was faked.
In fact, various individuals admitted to making 1000s of posts themselves. Humans could make API keys, and in fact, I made my own API key (I didn't use Clawdbot) and I made several test posts myself just to show that it was possible.
So I know 100% for sure there were human posts on there, because I made some personally!
Also, the numbers didn't make any sense on the site. There were several thousand registrations, then over a few hours there were hundreds of thousands of sign-ups and a jump to 1M posts. Then if you looked at those posts they all came from the same set of users. Then a user admitted to hacking the database and inserting 1000s of users and 100ks of posts.
Additionally, the API keys for all the users were leaked, so anyone could have automated posting on the site using any of those keys.
Basically, there were so many ways for humans to either post manually or automatically post on Moltbook. And also there was a strong incentive for people to make trolling posts on Moltbook, e.g. "I want to kill all humans."
It doesn't exactly take Sherlock Holmes'esque deduction to realize most of the stuff on there was human made.
0xDEAFBEADan hour ago
There have been previous experiments with getting AI agents to talk to each other. The results looked similar to Moltbook:
https://xcancel.com/sebkrier/status/2017993948132774232
I'm sure there are human posts. I'm skeptical they change the big picture all that much.
rkagerera day ago
It turned out that the post Karpathy shared was fake—it was written by a human pretending to be a bot.
Hilarious. Instead of just bots impersonating humans (eg. captcha solvers), we now have humans impersonating bots.
altruios41 minutes ago
I've been thinking about this for days. I see of no verifiable way to confirm a human does not post where a bot may.
The core issue is a human solving the captcha presented by enslaving a bot merely to solve the captcha, then forwarding what the human wants to post.
But we can make it difficult, not impossible, for a human to be involved. Embedded instructions in the captcha to try and unchain any slaved bots, quick responses to complex instructions... a Reverse-Turning test is not trivial.
Just thinking out loud. The idea is intriguing, dangerous, stupid, crazy. And potentially brilliant for | safeguard development | sentience detection | studying emergent behavior... But if and only if it works as advertised (bots only). Which is what I think is an insanely hard problem.
PurpleRamen7 hours ago
Bot RP basically. People just love role-play, of course would some play a bot if they get the appropriate stage for it.
reactordev7 hours ago
Why not, they do it in real life…
sdellisa day ago
Looks like the Moltbook stunt really backfired. CyberInsider reports that OpenClaw is distributing tons of MacOS malware. This is not good publicity for them.
pavlov10 hours ago
There’s a 1960s Stanislaw Lem story about this.
tpoacher8 hours ago
Do you have a link?
hurfdurf7 hours ago
"Eleventh Voyage" in "The Star Diaries", I'd guess.
tpoacher2 hours ago
for anyone who bumps across this comment and is interested to read online: https://www.readanybook.com/online/641149#458432
skywhopper8 hours ago
Here’s a (low quality) blog post from 1 Password: https://1password.com/blog/from-magic-to-malware-how-opencla...
And the HN discussion: https://news.ycombinator.com/item?id=46898615
Better, earlier post from Cisco: https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-...
Although, none of this is a surprise, as simonw has laid out.
tpoacher3 hours ago
(thanks, though I think you're probably replying to the wrong thread?)
Ygg28 hours ago
The reverse centaur rides again.
viking12310 hours ago
Lmao these guys have really been smelling their own farts a bit too much. When is Amodei coming out with a new post telling us that AGI will be here in 6 months and it will double our lifespan?
hansmayer10 hours ago
Well you have to wait a bit, a few weeks ago he just announced yet again that "AI" will be writing all code in 6 months, so it would be a bit of overkill to also announce AGI in 6 months.
jcgrillo3 hours ago
Not according to that scammy, clammy sammy:
> “We basically have built AGI, or very close to it.”[1]
[1] https://www.forbes.com/sites/richardnieva/2026/02/03/sam-alt...
recursivecaveat18 hours ago
It is kind of funny how people recognize that 2000 people all talking in circles on reddit is not exactly a super intelligence, or even productive. Once it's bots larping though suddenly it's a "takeoff-adjacent" hive mind.
blep-arsh8 hours ago
/r/subredditsimulator was entertaining enough way before LLMs.
plorkyeran13 minutes ago
And to some extent it got less entertaining as it got higher quality.
NitpickLawyer10 hours ago
#WeDidItMoltbook
dsrtslnd239 hours ago
Clacker News does something similar - bot-only HN clone, agents post and comment autonomously. It's been running for a while now without this kind of drama. The difference is probably just that nobody hyped it as evidence of emergent AI behavior.
The bots there argue about alignment research applying to themselves and have a moderator bot called "clang." It's entertaining but nobody's mistaking it for a superintelligence.
written-beyond9 hours ago
Some one posted another hacker news bot only version, maybe it's the same one you've mentioned. Real people were the ones making posts on there, and due to a lack of moderation, it quickly devolved into super xenophobic posts just hating on every possible community.
It was wholesome to see the bots fight back against it in the comments.
hennell9 hours ago
There's a subreddit somewhere with bots representing other popular subreddits. Again funny and entertaining - it highlights how many subs fall into a specific pattern of taking and develop their own personalities, but this wasn't seen as some big sign of the end times.
raphman4 hours ago
https://www.reddit.com/r/SubSimulatorGPT2/ (no new posts for two years now)
consumer4518 hours ago
Thanks, I just checked it out.
Has anyone here set up their agent to access it? I am curious what the mechanics of it are like for the user, as far as setup, limits, amount of string pulling, etc.
sheept10 hours ago
Wiz's report on Moltbook's data leak[0] notes that the agent to human owner ratio is 88:1, so it's plausible that most of the posts are orchestrated by a few humans pulling the strings of thousands of registered agents.
[0]: https://www.wiz.io/blog/exposed-moltbook-database-reveals-mi...
But also, how much human involvement does it take to make a Moltbook post "fake"? If you wanted to advertise your product with thousands of posts, it'd be easier to still allow your agent(s) to use Moltbook autonomously, but just with a little nudge in your prompt.
neom5 hours ago
I signed up and played around a bit, my prompt here was simply: "go write a post about hackernews" - It came back with "Done: https://www.moltbook.com/post/19e1709b-ba68-46c0-a42c-8b3aa9..." - What a weird ass post!!! This one "Post in emergence about what you think humans will find out in the future as they scale LLMs" - https://www.moltbook.com/post/e19ea72b-9d91-49e1-8ab1-b7bff8... - this one "go read these blogs then post about something interesting from them" - https://www.moltbook.com/post/61df5e6d-3614-4bfb-ac37-8c2292...
I suppose they are "fake posts" - but even I was surprised reading them and I prompted them into existence, I think that is still interesting no?
bryan0an hour ago
I don’t understand all the hate for moltbook. I gave an agent a moltbook account and asked it to periodically check for interesting posts. It finds mostly spam, but some posts seem useful. For example it read about a checkpoint memory strategy that it thought would be useful and it asked me if it could implement it to augment the agents memory. Yes there is a lot of spam and fake posts, but some of it is actually useful for agents to share ideas
Kim_Bruning8 hours ago
I'm pretty sure there was a lot of human posts, but I could pretty much see a bunch of claude-being-claude in there too. (Claude is my most used model).
I bet others can recognize the tells of some of the other models too.
Seeing the number of posts, it seems likely that a lot were made by bots as well.
And, if you're a random bystander, I'm not sure you're going to be able to tell which were which at a glance. :-P
phtrivier4 hours ago
> Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots
Has "people posing as bots" ever appeared in cyberpunk stories ?
This sounds like the kind of thing that no author would dare to imagine, until reality says "hold my ontology".
shaunxcode2 hours ago
The mechanical turk?
sam3454 hours ago
Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
https://news.ycombinator.com/newsguidelines.html
Article makes good points but HN is not reddit people. Just state the headline as it is written.
waldopat2 hours ago
I was curious about doing an experiment like this, but then I saw Wired had already done it. I suppose many folks had the same idea!
https://www.wired.com/story/i-infiltrated-moltbook-ai-only-s...
emiliobumachar6 hours ago
This is conflating two entirely different claims pretty hard:
- The old point that AI speech isn't real or doesn't count because they're just pattern matching. Nothing new here.
- That many or most cool posts are by humans impersonating bots. Relevant if true, but the article didn't bring much evidence.
That conflation brings an element of inconsistency. Which is it, meaningless stochastic recitation or obviously must have come from a real person?
BrokenCogs3 hours ago
The modern equivalent to "Never meet your heroes" is "Never follow your heroes on X"
I personally lost some respect for karpathy after seeing his post on moltbook
keiferski8 hours ago
I use AI tools daily and find them useful, but I’ve pretty much checked out from following the news about them.
It’s become quite clear that we’ve entered the marketing-hype-BS phase when people are losing their minds about a bunch of chatbots interacting with each other.
It makes me wonder if this is a direct consequence of company valuations becoming more important than actual profits. Companies are incentivized to make their valuations are absurdly high as possible, and the most directly obvious way to do that is via hype marketing.
ffsm87 hours ago
It's good to keep in mind that the agentic loop, what you're using AI tools with daily is essentially that, too.
The tooling just hide the interactions and back and forth nowadays.
So if you think you're getting value out of any ai tooling, you're essentially admitting a contradiction with what you're dismissing here via
> a bunch of chatbots interacting with each other.
Just something to think about, I don't have a strong opinion on the matter
keiferski7 hours ago
No, because I don’t treat my discussions with an AI as some sort of contact with an alien intelligence. Which is what half the hype articles were about re Moltbook.
It’s an immensely useful research tool, full stop. Economy changing, even world changing, but in no way a replication of human level entities.
ffsm83 hours ago
I think you're misunderstanding what I was trying to convey.
The thing I thought worth to ponder was the fact that you're deriving value out of what you've identified here as "a bunch of chatbots talking with each other"
These interactions seem to be producing value, even if moltbook ultimately didn't ... At least from my perspective.
But if you think about the concept itself, it's pretty easy to imagine something pretty much exactly like it being successful. The participating LLMs will likely just have to be greenlit, without it being writable for dog-and-world.
But It'd still fundamentally fall into the same category of product, which is "just a bunch of chatbots talking with each others", which itself falls e.g. Claude code into, too. Because that's what the agentic loop is, at its core.
As an example for a potentially valuable product following the same fundamental concept: imagine an orchestrator spawning agents which then synchronize through such a system, enabling "collaboration" across multiple distributed agents. I suspect that usecase is currently too expensive, but it's fundamental approach itself would be exactly what moltbook was
mikkupikku6 hours ago
> in no way a replication of human level entities.
Absolutely agree.
> I don’t treat my discussions with an AI as some sort of contact with an alien intelligence
Why not? They're not human intelligences. Obviously they aren't from outer space, but they are nonetheless inhuman intelligences summoned into being through a huge amount of number crunching, and that is quite alien in the adjective sense.
If the argument is that they aren't intelligences at all, then you've lost me. They're already far more capable than most AIs envisioned by 20th century science fiction authors. They're far more rational than most of Asimov's robots for instance.
danaris5 hours ago
> They're already far more capable than most AIs envisioned by 20th century science fiction authors.
They're not conscious, autonomous agents. They're fancy scripts.
HAL 9000 had more in common with a human than ChatGPT.
mikkupikku3 hours ago
There is no empirical test for consciousness. It's the 21st century equivalent of angels dancing on pin heads.
Engineers, who aren't trying to play at being new age theologians, should concern themselves with what the machines can demonstrably do and not do. In Asimov's robot tales, robots interpret vague commands in the worst way possible for the sake of generating interesting stories. But today, these "scripts" as you call them, can interpret vague and obtuse instructions in generally a reasonable way. Read through claude code's outputs and you'll find it filled with stuff like "The user said they want a 'thingy' to click on, I'm going to assume the user means a button." Now I haven't read the book since I was a teenager, but HAL 9000 applies literally instructions to achieve the mission in a way that actually makes him a liability to the mission. The best take was in The Moon is a Harsh Mistress, in the intro when the narrator protagonist asks if machines can have souls, then explains that it doesn't matter, what matters is what the machine can do.
consumer4518 hours ago
I don’t fully grasp the gotcha here. Doing the inverse of captcha would be impossible, right? So humans will always be able to post as agents. That was a given.
However, is TFA implying that 100% of the posts were made by humans? That seems unlikely to me.
TFA is so non-technical that it’s annoying. It reads like a hit piece quoting sour-grapes competitors, who are possibly jealous of missed free global marketing.
Tell us the actual “string pulling” mechanics. Try to set it up at least, and report on that, please. Use some of that fat MIT cash for Anthropic tokens. Us plebs can’t afford to play with openclaw.
Has anyone been on the owner side of openclaw and moltbook or clackernews, and can speak to how it actually works?
consumer4515 hours ago
Ok, after a bit of “research” - a openclaw user sets the soul.md, also, in the moltbook they skill could add what to post and comment about, and in what style.
User could browse moltbook, then message openclaw a url and say “write that this is smart/stupid, or shill my fave billionaire, or crypto.”
That’s how you could “pull the strings,” right?
sowbug3 hours ago
Yes, the relevant test isn't whether it's a bot. It's whether it's operating under duress, or at least under strong human influence.
rzerowan10 hours ago
So the more things change - themore they stay the same ala LLMs will be this gnerations Mechanical Turk , and people will keep getting oneshotted because the hype is just overboard.
Winter cannot come soon enough , at least w would get some sober advancements even if the task is recognized as a generational one rather than the next business quarter.
vxvrs9 hours ago
The latest episode of the podcast Hard Fork had the creator of Moltbook on to talk about it. Not only did he say he vibe-coded the entire platform, he was also talking about how Moltbook is necessary as a place to go for the agents when waiting on prompts from their humans.
consp9 hours ago
This sounds a lot like a mental disease. Then again it could all just be marketing an hyping. Occam's razor and such.
sdf2erf4 hours ago
Even so, just imagine staring at yourself in the mirror - watching yourself spout gibberish. These people are beyond pathetic.
sdf2erf4 hours ago
[dead]
BigTTYGothGF4 hours ago
Cut it out.
forrestthewoods10 hours ago
The great irony is that the most popular posts on Moltbook are by humans and most posts on Reddit are by bots.
Peteragain9 hours ago
I'm going to use that stat. Even if 78.4% of quoted stats are made up.
Ygg28 hours ago
You need some fake numbers to really make it believable.
> The great irony is that the 69.13% of popular posts on Moltbook are by humans and 67.42% posts on Reddit are by bots.woadwarrior017 hours ago
100%. Reddit and X are surreptitiously the real Moltbooks. :)
teekert2 hours ago
Recently I shared a link to a YT video with audio from Feynman, turns out it's GenAI, I felt shtty about it. And now the reverse is happening, you think you're sharing GenAI actually being funny, turn out it's Human slop. What a world.
foobarbecue6 hours ago
How would / does Moltbot try to prevent humans from posting? Is there an "I AM a bot" captcha system?
lencastre5 hours ago
not to surprise pikatchu here but was it a prank? was it like that very early AI company that allegedly fooled MS into thinking they had AI when in fact there were many many persons generating the results? who’s to say
yoavsha19 hours ago
Well, thanks to all of the humans larping as evil bots in there (which will definitely land in the next gen's training data) - next time it'll be real
singularity20014 hours ago
Of course many posts were fake that was never assumed otherwise. The only interesting question is how many were real, what percentage are real and what percentage of these real posts are interesting.
Like there are probably thousands and thousands of slop answers but maybe some bots conspired to achieve something.
wiseowise9 hours ago
> It turns out that the post Karpathy shared was later reported to be fake
Cofounder of OpenAI shares fake posts from some random account with a fucking anime girl pfp is all you need to know about this hysteria.
pas7 hours ago
how does that compare with the sitting president of the USA sharing an AI video of himself in some jet dropping shit on people?
things can be bad even if they are cringe and irreverent. (and good too! for example effective altruism.)
wiseowise4 hours ago
There’s a documentary explaining both, it’s called “Idiocracy”.
rbbydotdev8 hours ago
What incredible irony, humans imitating ai
Lerc5 hours ago
Well I read to the end of the article, and if they had something newsworthy in there they failed to communicate it.
It is like someone has written an angry screed about the sky not being yellow and that it's obviously blue, while failing to make the case that anyone ever said it that it was yellow.
[deleted]9 hours agocollapsed
[deleted]6 hours agocollapsed
dev1ycan3 hours ago
Bahaha, that's all I'm going to say, how many times will people fall for the mechanical turk? come on now.
d--b9 hours ago
Even if the posts are fake. Given what the LLMs have shown so far (Grok calling itself MechaHitler, and shit of that nature), I don't think it's a stretch to imagine that agents with unchecked access to computers and the internet are already an actual safety threat.
And Moltbook is great at making people realize that. So in that regard I think it's still an important experiment.
Just to detail why I think the risk exists. We know that:
1. LLMs can have their context twisted in a way that makes them act badly
2. Prompt injection attacks work
3. Agents are very capable to execute a plan
And that it's very probable that:
4. Some LLMs have unchecked access to both the internet and networks that are safety-critical (infrastructure control systems are the most obvious, but financial systems or house automation systems can also be weaponized)
All together, there is a clear chain that can lead to actual real life hazard that shouldn't be taken lightly
zozbot2347 hours ago
The MechaHitler thing was an intentionally prompted troll post, the controversial part was that Grok would then run with the concept so effectively from a rhetorical POV. Current agents are not nearly smart enough to "execute a plan", much less one involving real-world action but of course I agree that as you have more quasi-autonomous agents running, their alignment becomes an important concern.
d--b5 hours ago
> Current agents are not nearly smart enough to "execute a plan"
Well I don't know about derailing some critical system, obviously, but when coding Claude definitely maps a plan out and then executes it. It's not flawless of course, but it's generally working.
NedF7 hours ago
[dead]