glerk an hour ago
Personal as in Meta gets your personal data so they can sell you more ads.
hackrmn 2 hours ago
The hero image on the linked page, which consists of a muted teal background with the words "Introducing Muse Spark", weighs in at 3.5 MB. I don't even...
KerrickStaley 14 minutes ago
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
- Hacker News Guidelines https://news.ycombinator.com/newsguidelines.html
yawnxyz 3 minutes ago
I think this speaks to the product release itself
Overpower0416 an hour ago
lol it literally took me 2s to google "optimize image for website" and 10s to upload and get back a smaller image.
The result for that specific image is 500 kB, an 85% decrease in size.
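For what it's worth, the quoted figures are consistent (a quick sketch using the sizes stated above, not measurements of the actual asset):

```python
# Rough check of the savings quoted above: a ~3.5 MB hero image
# compressed down to ~500 kB by an online optimizer.
original_kb = 3500   # size stated in the thread
optimized_kb = 500   # optimizer output stated in the thread
reduction_pct = (original_kb - optimized_kb) / original_kb * 100
print(f"{reduction_pct:.1f}% decrease in size")  # → 85.7% decrease in size
```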
BugsJustFindMe 34 minutes ago
An indistinguishable JPG is 170KB. An SVG would be 20KB.
levocardia 18 minutes ago
CSS with a linear gradient background would be even smaller :)
sofixa an hour ago
You can even automatically do that on your CDN/delivery/web server layer. Or as part of your web deployment pipeline.
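A minimal sketch of what a pipeline-level guard could look like (all names hypothetical: `dist/` as the build output, a 500 kB budget; a real pipeline would hand oversized files to an optimizer such as ImageMagick or cwebp rather than just report them):

```python
from pathlib import Path

# Hypothetical deploy-time check: list images in the build output that
# exceed a size budget so they can be optimized before shipping.
SIZE_BUDGET_KB = 500
IMAGE_SUFFIXES = {".png", ".jpg", ".jpeg", ".webp", ".gif"}

def oversized_images(build_dir: str) -> list[tuple[str, int]]:
    """Return (path, size_kb) for every image above the budget."""
    base = Path(build_dir)
    if not base.is_dir():
        return []
    hits = []
    for path in base.rglob("*"):
        if path.is_file() and path.suffix.lower() in IMAGE_SUFFIXES:
            size_kb = path.stat().st_size // 1024
            if size_kb > SIZE_BUDGET_KB:
                hits.append((str(path), size_kb))
    return sorted(hits)

if __name__ == "__main__":
    # "dist" is a placeholder for whatever your build step emits.
    for path, size_kb in oversized_images("dist"):
        print(f"{path}: {size_kb} kB exceeds the {SIZE_BUDGET_KB} kB budget")
```

Wiring something like this into CI (or, as suggested above, letting the CDN transcode on the fly) makes a 3.5 MB hero image impossible to ship by accident.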
Overpower0416 44 minutes ago
Yes, but it might be a little too advanced for Meta ;)
re-thc 11 minutes ago
But they have personal superintelligence?
hungryhobbit 2 hours ago
Someday our robot overlords will be intelligent enough to ... optimize images!
(But today is not that day.)
zfol_510 an hour ago
And it doesn't even look high-res.
Invictus0 2 hours ago
complaining about sand on the beach
fooqux an hour ago
It's not sand on the beach, it's garbage on the beach.
hackrmn 2 hours ago
I am simply offended. By Meta's lack of sensibilities (or ability) towards use of images on the Web while touting their new flavour of artificial intelligence as a product.
Invictus0 34 minutes ago
old man shouts at cloud
hackrmn 26 minutes ago
more like old man shouts at someone else's computer
daft_pink 2 hours ago
This really reinforces the idea that the AI race and the Railroad Mania of the 19th century are very similar.
So many different companies are going to have similarly powerful ai that there will be no moat around it and it will be cheap. They will never earn their investment back.
cheriot 14 minutes ago
I suspect this is the real reason behind Anthropic limiting subscriptions to their own products and keeping API prices several times higher than comparable models. Applications are stickier than API users, and less technical users are stickier than programmers (i.e. CoWork is stickier than Code).
dist-epoch 2 hours ago
The moat is in the compute and the energy access.
And further down the line in chips, which is why Elon is building a fab now.
There are plenty of capable models on HuggingFace, yet I have no way of running them.
khalic 2 hours ago
Give it a few years, or months. Tiny models are getting outrageously good.
spprashant 15 minutes ago
I wonder if this is why the tech cartel is buying up all the hardware?
If the average user gets convinced they could run LLMs for cheap at home, you cannot trap users in your walled garden anymore.
mobattah an hour ago
Exactly. We’ll see the cost of AI continue to drop.
I was saying this for years about Tesla’s FSD - they finally had to give in and drop the price to stay competitive.
cedws 40 minutes ago
That fab will never be delivered. In five years you might see the manufacturing equivalent of a person dancing in spandex.
nutjob2 36 minutes ago
> which is why Elon is building a fab now
At least he says he's doing that. It doesn't really make sense since you're not going to achieve an advanced node from a standing start in a practical time frame and cost.
Sounds like more Musk flavored vapor.
re-thc 10 minutes ago
> It doesn't really make sense since you're not going to achieve an advanced node from a standing start in a practical time frame and cost.
They already announced a partnership with Intel.
moab 3 hours ago
"Muse Spark is available now, and Contemplating mode will be rolling out gradually in meta.ai."
How does one get their hands on these models? They are not open-source, right? I go to meta.ai, but it's just a chat interface---no equivalent to Codex or Claude Code? Can you use this through OpenCode? Is Meta charging for model access, or is the gathering of chat data a sufficiently large tithe?
meetpateltech 3 hours ago
"It will be available in private preview via API to select partners, and we hope to open-source future versions of the model."
from Facebook Newsroom: https://about.fb.com/news/2026/04/introducing-muse-spark-met...
tempaccount420 2 hours ago
I can't think of any "select partners" that would want to use this non-SOTA model. Just put it on OpenRouter.
giancarlostoro 2 hours ago
If Microsoft is a select partner, maybe they could shove it into Copilot for VS or something, but yeah, I'm wondering the same. Maybe Apple could be one of their partners too?
monkeydust 3 hours ago
TBD, it seems. So far the only explained usage pattern is through a Meta product (WhatsApp, Facebook, Instagram).
moab 3 hours ago
So to verify their claims and see how strong these models are, the answer is "believe us"?
Note: I'm expressing some skepticism here largely due to how recent rollouts from Meta flopped. Sincerely hoping that they do better this time around!
nemomarx 3 hours ago
I assume the answer is try it out in the chat mode? You could run your usual benches through that right
pstuart 3 hours ago
I appreciate that they build this stuff for their own benefit, but I don't want to feed even more of my private info. Hopefully the models will become public or lead to equivalent models from other sources.
eranation 28 minutes ago
Sarcasm aside, tried it (with instant mode), it's an impressive model.
It nailed all the ChatGPT meme gotchas (walk to the carwash, Alice 50 brothers, upside down cup, R's in strawberry, which number is bigger, 9.11 or 9.9?)
I guess all that money poaching OpenAI / Anthropic talent went somewhere...
Now, would I use "Meta Muse Code" or "Muse CoWork" if I have to get a Facebook account for all of my developers? Maybe not.
Would I use it via an API key? I might, depends on the pricing!
turtlesdown11 21 minutes ago
So since they hard-coded all of the meme gotchas, they built a good model?
ddp26 3 hours ago
The second paragraph starts "Muse Spark is the first step on our scaling ladder and the first product of a ground-up overhaul of our AI efforts. To support further scaling, we are making strategic investments..."
This article is about Meta, not about the user. Who signs off on these? Is the intended audience other people at Meta, not the user?
tjkrusinski 3 hours ago
The article is published primarily to signal to the market that Meta is serious in its efforts to compete in building frontier AI models.
They want to 1) attract talent, 2) tell Wall Street they can play in this space as well, and 3) help employees feel the company is moving in the right direction.
A frontier LLM doesn't apply to their core consumer products.
Lihh27 2 hours ago
The blog is the product. An investor deck posted as a tech launch.
conradkay 2 hours ago
Stock up 9% today, very pleasant for Zuck if you do the math on his net worth :)
hungryhobbit an hour ago
I mean, kinda? It's not like Zuck is selling his stock tomorrow, so daily fluctuations in stock price don't really affect him.
GalaxyNova 32 minutes ago
It is unfortunate that they decided to stop doing open-weight releases.
What could have been interesting has been reduced to simply another subpar LLM release.
hvass 41 minutes ago
Genuine question: Why release this the day after Mythos? It does not appear SOTA (just based on benchmarks). OpenAI will likely release Spud tomorrow.
eranation 27 minutes ago
That's a really good question. My sarcastic mind thinks Anthropic rushed the Mythos announcement out of fear of Meta stealing their thunder... (I guess someone leaked it; a LOT of Anthropic folks are ex-Meta... so, you know.)
Just speculation, I have no real knowledge about it.
plombe 32 minutes ago
Looks like a lightweight article, but memory usage went from 316 MB -> 502 MB when I hit refresh. Not sure why? Anyone have any ideas? Why does it need half a gig of RAM in the first place?
khurdula an hour ago
"we hope to open-source future versions of the model."
Love to see it. Cheers!
nharada an hour ago
Saying nothing about the actual performance of this model, it does strike me how... minimal(?) this announcement is. Their safety section is like 2 paragraphs about bioweapons. Go look at the reports for OpenAI and Anthropic's model releases: 50+ pages of tests, examples, reports, and benchmarks across a bunch of safety and welfare metrics.
If Meta wants to be seen as a cutting-edge massive lab, they need to come across as one instead of looking like a school-project version of a frontier model.
WarmWash an hour ago
Rumor on the ground is that they expected a much stronger model than this one.
htrp 34 minutes ago
Llama 4 Behemoth problems?
nubg an hour ago
Can you elaborate?
BugsJustFindMe 20 minutes ago
I'm struck by all these independent announcements saying "look at our new model, which we only spent $N billion in acquisitions and hardware time to build and operate, and which is just like those other ones, but this one is ours." If any of these companies would simply pool resources and work together, and if the government actively participated in providing funds, they'd be able to accelerate AI so much faster. It all feels incredibly wasteful. But I guess that's communism or something.
victorbjorklund 18 minutes ago
Competition often fosters innovation. Why are they innovating so fast and spending so much money? Because they don't wanna fall behind. If there were no competition at all, there would be much less reason to innovate and spend resources.
dhruvyads 29 minutes ago
Sad to see it's not going to be open source.
chankstein38 2 hours ago
Personal Superintelligence made me think this was an open-source model being released and I was excited. Then I continued reading and I'll just wait until the model comes out.
nubg an hour ago
NOTHING about this is personal! No weights were released!
ChrisArchitect 2 hours ago
Associated Meta news post with consumer-friendly takes: https://about.fb.com/news/2026/04/introducing-muse-spark-met...
sidcool 2 hours ago
Meta.ai has Muse Spark.