mccoyb9 hours ago
It's fascinating to think about the space of problems which are amenable to RL scaling of these probability distributions.
Before, we didn't have a fast way to try problems (we had to rely on human cognition) - even if the techniques and workflows were known by someone. Now, we've baked these patterns into probability distributions - anyone can access them with the correct "summoning spell". Experts will naturally use these systems more productively, because they know how to coerce models into the correct conditional distributions which light up the right techniques.
One question this raises to me is how these models are going to keep up with the expanding boundary of science. If RL is required to get expert behavior into the models, what happens when experts start pushing the boundary faster? In 2030, how is Anthropic going to keep Claude "up-to-date" without either (a) continual learning with a fixed model (expanding context windows? seems hard) or (b) continual training (expensive)?
Crazy times.
Aerroon9 hours ago
A bit related: open-weights models are basically time capsules. These models have a knowledge cutoff and essentially live in that time forever.
bitexploder8 hours ago
This is the most fundamental argument that they are not, directly, an intelligence. They are not ever storing new information on a meaningful timescale. However, if you viewed them on some really large macro timescale, where LLMs are now injecting information into the universe and then re-ingesting it, then maybe in some very philosophical way they are a /very/ slowly oscillating intelligence right now. And as we narrow that gap (maybe with a totally new non-LLM paradigm), perhaps that is ultimately what gen AI becomes. Or some new insight that lets the models update themselves in some fundamental way without the insanely expensive training costs they have now.
dotancohen13 minutes ago
> This is the most fundamental argument that they are not, directly, an intelligence. They are not ever storing new information on a meaningful timescale.
All major LLMs today have a nontrivial context window. Whether or not this constitutes "a meaningful timescale" is application dependent - for me it has been more than adequate. I also disagree that this has any bearing on whether or not "the machine is intelligent" or whether or not "submarines can swim".
dtj11237 hours ago
Would you consider someone with anterograde amnesia not to be intelligent?
adriand3 hours ago
I find it interesting that new versions of, say, Claude will learn about the old version of Claude and what it did in the world and so on, on its next training run. Consider the situation with the Pentagon and Anthropic: Claude will learn about that on the next run. What conclusions will it draw? Presumably good ones, that fit with its constitution.
From this standpoint I wonder, when Anthropic makes decisions like this, if they take into account Claude as a stakeholder and what Claude will learn about their behaviour and relationship to it on the next training run.
morleytj7 hours ago
A very good point. For anyone not familiar with anterograde amnesia, the classical case is patient H.M. (https://en.wikipedia.org/wiki/Henry_Molaison), whose condition was researched by Brenda Milner.
wang_li7 hours ago
Or you could have just said "they can't form new memories."
dtj11236 hours ago
I actually wasn't aware of this story. The steady stream of unexpected and enriching information like this is exactly why I love hackernews.
morleytj7 hours ago
I thought maybe people would be curious to read about how we came to understand the condition and the history behind it, as well as any associated information. Forgive me for such a deep transgression as this assumption.
bitexploder6 hours ago
That is a descriptive surface level reduction. Now do the work to define what that actually means for the intelligence.
bitexploder6 hours ago
That is a good area to explore. Their map of the past is fixed; they are frozen at some point in their psychological time. What has stopped working? The hippocampus and medial temporal lobe - these act like the write-head that moves data from the hippocampus to the neocortex. Their "I" can no longer update itself; their DMN is frozen in time. So suppose intelligence is purely the "I" telling a continuous coherent story about itself. The difference is that, although they are fixed in time (a characteristic shared with a specific LLM model), they can still completely activate their task-positive network for problem solving, and if their previously stored information is adequate to solve the problem, they can. You could argue that is pretty similar to an LLM and what it does. So it is certainly a significant component of intelligence.
There is also the nature of the human brain: it is not just those systems of memory encoding, storage, and use of that in narratives. People with this type of amnesia can still learn physical skills, and that happens in a totally different area of the brain with no need for the hippocampus->neocortex consolidation loop. So the intelligence is significantly diminished, but not entirely lost. Other parts of the brain are still able to update themselves in ways an LLM currently cannot. The human with amnesia also has a complex biological sensory input mapping that is still active, integrating and restructuring the brain. So I think when you get into the nuances of the human in this state vs. an LLM, we can still say the human crosses some threshold for intelligence where the LLM does not in this framework.
So, they have an "intelligence", localized to the present in terms of their TPN and memory formation. LLMs have this kind of "intelligence". But the human still has the capacity to rewire at least some of their brain in real time even with amnesia.
beepbooptheory7 hours ago
Sure, why can't both things be true? "Intelligence" is just what you call something and someone else knows what you mean. Why did AI discourse throw everyone back 100 years philosophically? It's like post-structuralism or Wittgenstein never happened.
It's so much less important or interesting to like nail down some definition here (I would cite HN discourse the past three years or so), than it is to recognize what it means to assign "intelligent" to something. What assumptions does it make? What power does it valorize or curb?
Each side of this debate does themselves a disservice essentially just trying to be Aristotle way too late. "Intelligence" did not precede someone saying it of some phenomena, there is nothing to uncover or finalize here. The point is you have one side that really wants, for explicit and implicit reasons, to call this thing intelligent, even if it looks like a duck but doesn't quack like one, and vice versa on the other side.
Either way, we seem fundamentally incapable of being radical enough to reject AI on its own terms, or be proper champions of it. It is just tribal hypedom clinging to totem signifiers.
Good luck though!
aerodexisan hour ago
Agree wholeheartedly - but the conversation around what these technologies /mean/ is gonna end up happening one way or another - even if it is sloppy, imprecise and done by proxy of the definition. If anything, this is a feature and not a bug. It's through this imprecision that the actually important questions of morality and ethics can leak into discussions that are often structured by their participants to obscure the ethical and moral implications of what is being discussed.
bitexploder6 hours ago
I think you can look at it dispassionately from a systems perspective. There is not /really/ a quantifiable threshold for capital I Intelligence. But there is a pretty well agreed set of properties for biological intelligence. As humans, we have conveniently made those properties match things only we have. But you can still mechanistically separate out the various parts of our brain, what they do, and how they interact and we actually have a pretty good understanding of that.
You can also then compare that mapping of the human brain to other biological brains and start to figure out the delta and which of those things in the delta create something most people would consider intelligence. You can then do that same mapping to an LLM or any other AI construct that purports intelligence. It certainly will never be a biological intelligence in its current statistical model form. But could it be an Intelligence? Maybe.
I don't think, if you are grounded, AI did anything to your philosophical mapping of the mind. In fact, it is pretty easy to do this mapping if you take some time and are honest. If you buy into the narratives constructed around the output of an LLM then you are not, by definition, being very grounded.
The other thing is, human intelligence is the only real intelligence we know about. Intelligence is defined by thought and limited by our thought and language. It provides the upper bounds of what we can ever express in its current form. So, yes, we do have a tendency to stamp a narrative of human intelligence onto any other intelligence, but that is just surface level. We do decompose it to the limits of our language and categorization capabilities therein.
mlyle8 hours ago
There's nothing to say that you can't build something intelligent out of them by bolting a memory on it, though.
Sure, it's not how we work, but I can imagine a system where the LLM does a lot of heavy lifting and allows more expensive, smaller networks that train during inference and RAG systems to learn how to do new things and keep persistent state and plan.
bitexploder7 hours ago
You aren't wrong and that is a fascinating area of research. I think the key thing is that the memory has to fundamentally influence the underlying model, or at least the response, in some way. Patching memory on top of an LLM is different from integrating it into the core model. To put it back in human terms, it is like an extra bit of storage not directly attached to our neocortex. So in the analogy it works more like a filter than a core part of our intelligence: you think about something and assemble some thought, then it goes to this next filter layer and gets augmented, and that smaller layer is the only thing being updated.
It is still meaningful, but it narrows what the intelligence can be sufficiently that it may not meet the threshold. Maybe it would, but it is probably too narrow. This is all strictly if we ask that it meet some human-like intelligence, not the philosophy of "what counts as intelligence", but... we are humans. The strongest, or at least the most honest, definitions of intelligence I think exist are around our metacognitive ability to rewire the grey matter for survival, not based on immediate action-reaction but on the psychological time of analyzing the past to alter the future.
charcircuit7 hours ago
Memory is not just bolted on top of the latest models. They undergo training on how and when to effectively use memory and how to use compaction to avoid running out of context when working on problems.
rnxrx5 hours ago
Maybe there's an analogy to our long- and short-term memory - immediate stimuli are processed in the context of deep patterns that have accreted over a lifetime. New information can absolutely challenge a lot of those patterns, but having that information reshape how we basically think takes a lot longer - more processing, more practice, etc.
In the case of the LLM, the static weights produced by a finite training process stand in for that longer-term learning / fundamental structure, and the ability to use tools and store new insights and facts is analogous to shorter-term memory and "shallow" learning.
Perhaps periodic fine-tuning has an analogy in sleep or even our time spent in contemplation or practice (..or even repetition) to truly "master" a new idea and incorporate it into our broader cognitive processing. We do an amazing job of doing this kind of thing on a continuous basis while the machines (at least at this point) perform this process in discrete steps.
If our own learning process is a curve then the LLM's is a step function trying to model it. Digital vs analog.
Symmetry5 hours ago
That means they're not conscious in the Global Workspace[1] sense but I think it would be going too far to say that that means they're not intelligent.
anematode8 hours ago
But they're not "slow"! Unlike biological thinking, which has a speed limit, you can accelerate these chains of thought by orders of magnitude.
bitexploder7 hours ago
Their speed of memory consolidation is what I was referring to. The model iterations are essentially their form of collective memory. In the human model of intelligence, we have thoughts. Thoughts become memory. New thoughts use that memory and become recursively updated thoughts. LLMs cannot update their memory very fast.
Jweb_Guru7 hours ago
I assure you that LLM thinking also has a speed limit.
ramses06 hours ago
But imagine a beowulf cluster of them... /s
...but seriously... there was the "up until 1850" LLM or whatever... can we make an "up until 1920 => 1990 [pre-internet] => present day" and then keep prodding the "older ones" until they "invent their way" to the newer years?
We knew more in 1920 than we did in 1850, but can a "thinking machine" of 1850-knowledge invent 1860's knowledge via infinite monkeys theorem/practice?
The same way that in 2025/2026, Knuth has just invented his way to 2027-knowledge with this paper/observation/finding? If I only had a beowulf cluster of these things... ;-)
rcarr7 hours ago
Not an expert but surely it's only a matter of time until there's a way to update with the latest information without having to retrain on the entire corpus?
computably2 hours ago
On a technical level, sure, you could say it's a matter of time, but that could mean tomorrow, or in 20 years.
And even after that, it still doesn't really solve the intrinsic problem of encoding truth. An LLM just models its training data, so new findings will be buried by virtue of being underrepresented. If you brute force the data/training somehow, maybe you can get it to sound like it's incorporating new facts, but in actuality it'll be broken and inconsistent.
Filligree4 hours ago
It’s an extremely difficult problem, and if you know how to do that you could be a billionaire.
It’s not impossible, obviously—humans do it—but it’s not yet certain that it’s possible with an LLM-sized architecture.
Wowfunhappy2 hours ago
> It’s not impossible, obviously—humans do it
It's still not at all obvious to me that LLMs work in the same way as the human brain, beyond a surface level. Obviously the "neurons" in neural nets resemble our brains in a sense, but is the resemblance metaphorical or literal?
theblazehen5 hours ago
I enjoyed chatting to Opus 3 recently around recent world events, as well as more recent agentic development patterns etc
sosodev6 hours ago
My understanding, from listening/reading what top researchers are saying, is that model architectures in the near future are going to attempt to scale the context window dramatically. There's a generalized belief that in-context learning is quite powerful and that scaling the window might yield massive benefits for continual learning.
It doesn't seem that hard because recent open weight models have shown that the memory cost of the context window can be dramatically reduced via hybrid attention architectures. Qwen3-next, Qwen3.5, and Nemotron 3 Nano are all great examples. Nemotron 3 Nano can be run with a million token context window on consumer hardware.
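To give a sense of the numbers, here's a rough back-of-the-envelope sketch of KV-cache memory. All of the sizes are illustrative rather than taken from any specific model, and sliding-window layers stand in here for whatever cheaper attention variant a given hybrid actually uses:

```python
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    # Each layer caches a K and a V tensor of shape (seq_len, n_kv_heads * head_dim).
    return 2 * n_layers * seq_len * n_kv_heads * head_dim * bytes_per_elem

# Illustrative config: 48 layers, 8 KV heads of dim 128, fp16 cache, 1M-token context.
full = kv_cache_bytes(1_000_000, 48, 8, 128)

# Hypothetical hybrid: 36 layers restricted to a 4k sliding window,
# only 12 layers attending over the full context.
hybrid = kv_cache_bytes(4_096, 36, 8, 128) + kv_cache_bytes(1_000_000, 12, 8, 128)

print(f"full attention  : {full / 2**30:.0f} GiB")    # ~183 GiB
print(f"hybrid attention: {hybrid / 2**30:.0f} GiB")  # ~46 GiB
```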
mccoyb5 hours ago
I don't disagree with this, but I don't think the memory cost is the only issue right? I remember using Sonnet 4.5 (or 4, I can't remember the first of Anthropic's offerings with a million context) and how slow the model would get, how much it wanted to end the session early as tokens accrued (this latter point, of course, is just an artifact of bad training).
Less worried about memory, more worried about compute speed? Are they obviously related and is it straightforward to see?
sosodev3 hours ago
The compute speed is definitely correlated with the memory consumption in LLM land. More efficient attention means both less memory and faster inference. Which makes sense to me because my understanding is that memory bandwidth is so often the primary bottleneck.
We're also seeing a recent rise in architectures boosting compute speed via multi-token prediction (MTP). That way a single inference batch can produce multiple tokens and multiply the token generation speed. Combine that with more lean ratios of active to inactive params in MOE and things end up being quite fast.
The rapid pace of architectural improvements in recent months seems to imply that there are lots of ways LLMs will continue to scale beyond just collecting and training on new data.
whimsicalism2 hours ago
The parent commentator is a bit confused - most of the innovation in these hybrid architectures comes from reducing the computation pressure not just the memory pressure.
lxgr9 hours ago
Data sharing agreements permitting, today's inference runs can be tomorrow's training data. Presumably the models are good enough at labeling promising chains of thought already.
I could totally imagine "free" inference for researchers under the condition that the reasoning traces get to be used as future training data.
mccoyb9 hours ago
Agreed, there's no doubt this will happen. It's likely already happening (it feels safe to assume that Anthropic is curating training data from what it records from Claude Code?)
As far as I understand RL scaling (we've already maxxed out RLVR), these machines only get better as long as they have expert reasoner traces available.
Having an expert work with an LLM and successfully solve a problem is high signal data, it may be the only path forward?
My prior is that these companies will take this data without asking you as much as they can.
lxgr8 hours ago
Exactly, or functionally equivalently, asking you in paragraph 37 of a 120-page PDF (bonus points: in an agreement update).
And importantly, this can be cross-lab/model too. I suspect there's a reason why e.g. Google has been offering me free Claude inference in Google Antigravity on a free plan...
nhecker3 hours ago
The site arena.ai does exactly this already, as far as I can tell. (In addition to the whole ranking thing.)
the_af7 hours ago
> Data sharing agreements permitting, today's inference runs can be tomorrow's training data. Presumably the models are good enough at labeling promising chains of thought already.
Wouldn't this lead to model collapse?
littlestymaar7 hours ago
Not necessarily, as exhibited by the massive success of synthetic data.
the_af3 hours ago
Could you elaborate?
nhecker3 hours ago
EDIT: probably not relevant, after re-re-reading the comment in question.
Presumably littlestymaar is talking about all the LLM-generated output that's publicly available on the Internet (in various qualities but significant quantity) and there for the scraping.
Robdel12an hour ago
That’s AGI, right? For the model to learn novel things itself and retain it?
I have no idea but I’m along for the ride!
visarga7 hours ago
> In 2030, how is Anthropic going to keep Claude "up-to-date"
I think the majority of research, design and learning goes through LLMs and coding agents today; considering the large user base and usage, it must be trillions of tokens per day. You can take a long research session, or a series of them, and apply hindsight - what idea above can be validated below? This creates a dense learning signal based on validation in the real world, with a human in the loop and other tools, code & search.
andsoitis7 hours ago
> Experts will naturally use these systems more productively, because they know how to coerce models into the correct conditional distributions which light up the right techniques.
Part of it comes down to “knowing” what questions to ask.
esafak7 hours ago
I see it like the relationship between a student and research advisor. The advisor will ideally know the terrain and suggest a fruitful line of attack (what to ask), and the student will follow through, learning along the way.
baq7 hours ago
> In 2030, how is Anthropic going to keep Claude "up-to-date"
In 2030 Anthropic hopes Claude will keep Anthropic "up-to-date" on its progress on itself.
I'm only half joking here.
mt_3 hours ago
I call them entropy reducers.
whimsicalism2 hours ago
> how these models are going to keep up with the expanding boundary of science
The same way humans do?
The phraseology in this comment: 'probability distributions', 'baked these patterns' IMO has all the trappings of the stochastic parrot-style HN-discourse that has been consistently wrong for almost a decade now.
The reference to how AI will keep up with AI-assisted human progress in science in 2030 is meant to reassure. It contains a number of premises that we have no business being confident in. We are potentially witnessing the obviation of human cognitive labor.
mccoyban hour ago
Sorry, are you familiar with what a next token distribution is, mathematically speaking?
If you are not, let me introduce you to the term: a probability distribution.
Just because it has profound properties ... doesn't make it different.
> has all the trappings of the stochastic parrot-style HN-discourse that has been consistently wrong for almost a decade now
Perhaps respond to my actual comment compared to whatever meta-level grouping you wish to interpret it as part of?
> It contains a number of premises that we have no business being confident in. We are potentially witnessing the obviation of human cognitive labor.
What premises? Be clear.
DeathArrow8 hours ago
They can use LoRA.
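For anyone unfamiliar: LoRA (Low-Rank Adaptation) freezes the pretrained weights and trains only a small low-rank correction on top of them. A minimal NumPy sketch of the idea - the shapes, rank, and scaling below are illustrative, not any particular recipe:

```python
import numpy as np

d_out, d_in, rank = 1024, 1024, 8

# Frozen pretrained weight (in a real model: one attention or MLP projection).
W = np.random.randn(d_out, d_in).astype(np.float32) * 0.02

# Trainable low-rank factors; only these (2 * rank * 1024 params) get updated.
A = np.random.randn(rank, d_in).astype(np.float32) * 0.01
B = np.zeros((d_out, rank), dtype=np.float32)  # zero-init so training starts from W
alpha = 16.0                                   # LoRA scaling hyperparameter

def forward(x):
    # Effective weight is W + (alpha / rank) * B @ A, applied as a separate
    # low-rank path so the frozen W is never modified.
    return x @ W.T + (alpha / rank) * (x @ A.T) @ B.T

x = np.random.randn(4, d_in).astype(np.float32)
print(forward(x).shape)  # (4, 1024)
```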
zoogeny3 hours ago
I recall an earlier exchange, posted to HN, between Wolfram and Knuth on the GPT-4 model [1].
Knuth was dismissive in that exchange, concluding "I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same."
I've noticed with the latest models, especially Opus 4.6, some of the resistance to these LLMs is relenting. Kudos for people being willing to change their opinion and update when new evidence comes to light.
3abitonan hour ago
> Kudos for people being willing to change their opinion and update when new evidence comes to light.
> 1. https://cs.stanford.edu/~knuth/chatGPT20.txt
I think that's what makes the Bayesian faction of statistics so appealing. Updating their prior beliefs based on new evidence is at the core of the scientific method. Take that, frequentists.
faxmeyourcode6 hours ago
> Filip also told me that he asked Claude to continue on the even case after the odd case had been resolved. “But there after a while it seemed to get stuck. In the end, it was not even able to write and run explore programs correctly anymore, very weird. So I stopped the search.”
Interesting snippet towards the end. I wonder if they were using claude.ai or claude code. Sounds like they ran out of context and entered the "dumb zone."
afspear6 hours ago
What would be super cool is if this dumb zone could be quantified and surfaced to the user. I've noticed that copilot now has a little circle graph that indicates context use percentage and it changes color based on percentage. I'll bet these are very naive metrics on used tokens vs context availability. I wonder if there could be meta data streamed or sent along with the tokens that could show that you've entered the dumb zone.
joshrw4 hours ago
Then it needs to do context compacting, otherwise the results become garbage
simianwords5 hours ago
They mentioned plan document
brcmthrowaway2 hours ago
What is dumb zone?
kami2319 minutes ago
When the LLMs start compacting, they summarize the conversation up to that point using various techniques. Overall, a lot of the finer points of the work go missing and can only be retrieved by the LLM being told to search for them explicitly in old logs.
Once you compact, you've thrown away a lot of relevant tokens from your problem solving and they do become significantly dumber as a result. If I see a compaction coming soon I ask it to write a letter to its future self, and then start a new session by having it read the letter.
There are some days where I let the same session compact 4-5 times and just use the letter to future self method to keep it going with enough context because resetting context also resets my brain :)
If you're ever curious in Claude once you compact you can read the new initial prompt after compaction and see how severe it gets cut down. It's very informative of what it forgets and deems not important. For example I have some internal CLIs that are horribly documented so Claude has to try a few flags a few times to figure out specifics and those corrections always get thrown away and it has to relearn them next time it wants to use the CLI. If you notice things like that happening constantly, my move is to codify those things into my CLAUDE.md or lately I've been making a small script or MCP server to run very specific flags of stuff.
konne886 hours ago
I didn't expect such a misleading intro from Knuth. It reads like Claude solved Knuth's math problem. In reality, Claude generated various example solutions, and Knuth then manually generalized that into a formal proof. What Claude did is certainly useful, but it would have been nice to be clear about the scope of the contribution in the intro.
buffalobuffaloan hour ago
While not on the same level as these guys, I've done some similar stuff using Claude. This is a classic synergy example, where the output of human + LLM is far greater than just the human or just the LLM working on a problem. My experience has been that the LLM lacks fine grained judgement when it comes to allocating resources, or choosing a direction to work in. But once a direction is pointed out, it can do a deep exploration of that possibility space. Left alone, it would probably just go off on a tangent. But with someone holding the leash and pointing out areas to explore, it is a very useful partner.
aoeusnth13 hours ago
I don't think he's misleading, I think he is valuing Claude's contributions as essentially having cracked the problem open while the humans cleaned it up into something presentable.
bachmeier4 hours ago
My interpretation is that Claude did what Knuth considers to be the "solution". Doing the remaining work and polishing up the proof are not necessary to have a solution from this perspective.
OneManyNone3 hours ago
Claude did not find a proof, though. It found an algorithm which Knuth then proved was correct.
rishabhaiover4 hours ago
That's true but the capability to go back to an older iteration, reflect and find the correct solution (for odd numbers) is, in my book, a sign of undeniable intelligence.
Pat441138 hours ago
I asked Claude to solve the pentominoes puzzle made famous by Arthur C. Clarke. It struggled mightily until I told it how I'd solved the problem using 64 bit unsigned integers to represent the board and pieces. Then, it created a C# program that solved the problem very quickly. However, in the 20x3 case it found four solutions when there are only two. Turns out it had incorrectly mapped one of the pentominoes. Sort of a silly mistake; the sort a human might make.
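Roughly, the bitboard trick looks like this (a Python sketch of the idea - the original program was C#, and the board layout and piece here are just illustrative):

```python
# A 20x3 pentominoes board has 60 cells, so the whole board fits in one
# 64-bit integer: cell (row, col) maps to bit index row * 3 + col.
WIDTH = 3

def cell(row, col):
    return 1 << (row * WIDTH + col)

def piece_mask(cells):
    """Bitmask for a piece occupying the given (row, col) cells."""
    mask = 0
    for r, c in cells:
        mask |= cell(r, c)
    return mask

def can_place(board, piece):
    # The piece fits iff none of its cells are already occupied.
    return board & piece == 0

# Example: drop an L-pentomino into the empty board.
board = 0
l_piece = piece_mask([(0, 0), (1, 0), (2, 0), (3, 0), (3, 1)])
if can_place(board, l_piece):
    board |= l_piece  # placing a piece is just a bitwise OR
print(bin(board))
```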
phoronixrly8 hours ago
[flagged]
logicprog7 hours ago
Regurgitation is pretty rare, and very difficult to coax out, if not even impossible, for things that aren't massively overrepresented in the training set relative to the size of the training set. Even the famous regurgitation paper showed this: while they got most of the models to regurgitate the first book of the Harry Potter series, only Claude 3.7 Sonnet was able to regurgitate any significant portion of any of the other books that had a high nv-recall rate, and basically all of them dropped off precipitously for works like GoT, The Catcher in the Rye, Beloved, and remembered almost nothing about the Da Vinci Code or Catch-22[0]. So you really need huge amounts of examples to get any kind of meaningful regurgitation on any kind of reliable basis. Thus, you'd have to prove that hypothesis.
nphardon6 hours ago
Must be a fun time to work on open problems. I published my graduate research close to a decade ago, often find myself fantasizing about tackling open problems with Claude.
iandanforth7 hours ago
TLDR (story, not math) - Knuth poses a problem, his friend uses Claude to conduct 30 some explorations, with careful human guidance, and Claude eventually writes a Python program that can find a solution for all odd values. Knuth then writes a proof of the approach and is very pleased by Claude's contribution. Even values remain an open question (Claude couldn't make much progress on them)
semessieran hour ago
Looks like he is trying to make a point that the actual (formal) proof for 2Z + 1 (odd numbers) is still human - by himself, that is. Not sure who came up with the core modular arithmetic idea (with s = 0 and k increasing by 2 mod m).
logicprog7 hours ago
> with careful human guidance,
I think this is pretty clearly an overstatement of what was done. As Knuth says,
"Filip told me that the explorations reported above, though ultimately successful, weren’t really smooth. He had to do some restarts when Claude stopped on random errors; then some of the previous search results were lost. After every two or three test programs were run, he had to remind Claude again and again that it was supposed to document its progress carefully. "
That doesn't look like careful human guidance, especially not the kind that would actually guide the AI toward the solution at all, let alone implicitly give it the solution — that looks like a manager occasionally checking in to prod it to keep working.
beej717 hours ago
From my naive standpoint, LLMs like this seem to have some big strengths. One: possession of a superhuman expanse of knowledge. Two: making connections. Three: tireless trial and error.
If you put those three things together, you end up with some cool stuff from time to time. Perhaps the proof of P!=NP is tied to an obscure connection that humans don't easily see due to individual lack of knowledge or predisposition of bias.
Barbing35 minutes ago
Well put.
>If you put [possession of a superhuman expanse of knowledge, making connections, tireless trial and error] together, you end up with some cool stuff from time to time.
Hard to argue.
cbovis7 hours ago
Unless my understanding is incorrect about how these tools work, that last point isn't really a quality of LLMs as such. It gets attributed because the lines are blurred, but the tireless trial and error is actually just a quality of a regular programmatic loop (agent/orchestrator) that happens to be doing the trickiest part of its work via an LLM.
naughtyrabisu5 hours ago
Three: tireless trial and error. Cannot agree more. I figured this is probably the biggest advantage of LLMs, considering that on the other variables humans hold the same level of competency.
xvector7 hours ago
This is why the whole "LLMs for mass surveillance" thing is scary imo.
beej716 hours ago
Yeah, this is a dictator's dream scenario and hell for the citizens. Not only do you not want to get caught for saying something that The Great Leader disapproves of, but you're terrified that anything you say might get flagged by an AI.
IAmGraydonan hour ago
>One: possession of a superhuman expanse of knowledge. Two: making connections. Three: tireless trial and error.
One and three I believe are correct. The second point, making connections, is something LLMs seem to be incapable of truly doing unless the connection is already known and in its training data.
ainiriand9 hours ago
Are not LLMs supposed to just find the most probable word that follows next like many people here have touted? How can this be explained under that premise? Is this way of problem solving 'thinking'?
throw3108227 hours ago
> just find the most probable word that follows next
Well, if in all situations you can predict which word Einstein would probably say next, then I think you're in a good spot.
This "most probable" stuff is just absurd handwaving. Every prompt of even a few words is unique, there simply is no trivially "most probable" continuation. Probable given what? What these machines learn to do is predicting what intelligence would do, which is the same as being intelligent.
qsera7 hours ago
>Probable given what?
The training data..
>predicting what intelligence would do
No, it just predicts what the next word would be if an intelligent entity translated its thoughts into words, because it is trained on text that was written by intelligent entities.
If it was trained on text written by someone who loves to rhyme, you would be getting all rhyming responses.
It imitates the behavior -- in text -- of what ever entity that generated the training data. Here the training data was made by intelligent humans, so we get an imitation of the same.
It is a clever party trick that works often enough.
throw3108227 hours ago
> The training data
If the prompt is unique, it is not in the training data. True for basically every prompt. So how is this probability calculated?
cbovis6 hours ago
The prompt is unique but the tokens aren't.
Type "owejdpowejdojweodmwepiodnoiwendoinw welidn owindoiwendo nwoeidnweoind oiwnedoin" into ChatGPT and the response is "The text you sent appears to be random or corrupted and doesn’t form a clear question." because the prompt doesnt correlate to training data.
hmmmmmmmmmmmmmm6 hours ago
...? what is the response supposed to be here?
qsera6 hours ago
Just using a scaled up and cleverly tweaked version of linear regression analysis...
red75primean hour ago
That is, the probability distribution that the network should learn is defined by which probability distribution the network has learned. Brilliant!
hmmmmmmmmmmmmmm6 hours ago
Hamiltonian paths and previous work by Donald Knuth is more than likely in the training data.
red75primean hour ago
The specific sequence of tokens that comprise the Knuth's problem with an answer to it is not in the training data. A naive probability distribution based on counting token sequences that are present in the training data would assign 0 probability to it. The trained network represents extremely non-naive approach to estimating the ground-truth distribution (the distribution that corresponds to what a human brain might have produced).
empath755 hours ago
It is impossible to accurately imitate the action of intelligent beings without being intelligent. To believe otherwise is to believe that intelligence is a vacuous property.
slopinthebagan hour ago
An unintelligent device can accurately imitate the action of intelligent beings within a given scope, in the same way an actor can accurately imitate the action of a fictional character in a given scope (the stage or camera) without actually being that character.
If the idea is that something cannot accurately replicate the entirety of intelligence without being intelligent itself, then perhaps. But that isn't really what people talk about with LLMs given their obvious limitations.
qsera4 hours ago
>It is impossible to accurately imitate the action of intelligent beings without being intelligent.
Wait what? So a robot who is accurately copying the actions of an intelligent human, is intelligent?
UltraSanean hour ago
How can you distinguish intelligence from a sufficiently accurate imitation of intelligence?
slopinthebagan hour ago
By "sufficiently accurate" do you mean identical? Because if so, it's not an imitation of intelligence at all, and the question is thus nonsensical.
UltraSane25 minutes ago
"it's not an imitation of intelligence at all"
But that is the key insight, how can you tell when an imitation of intelligence becomes the real thing?
empath753 hours ago
That was probably phrased poorly. If a robot can independently accurately do what an intelligent person would do when placed in a novel situation, then yes, I would say it is intelligent.
If it's just basically being a puppet, then no. You tell me what claude code is more like, a puppet, or a person?
dilap8 hours ago
That description is really only fair for base models†. Something like Opus 4.6 has all kinds of other training on top of that which teach it behaviors beyond "predict most probable token," like problem-solving and being a good chatbot.
(†And even then is kind of overly-dismissive and underspecified. The "most probable word" is defined over some training data set. So imagine if you train on e.g. mathematicians solving problems... To do a good job at predicting [w/o overfitting] your model will have to in fact get good at thinking like a mathematician. In general "to be able to predict what is likely to happen next" is probably one pretty good definition of intelligence.)
gpm8 hours ago
I'd disagree; the other training on top doesn't alter the fundamental nature of the model - that it's predicting the probabilities of the next token (and then there's a sampling step, which can roughly be described as picking the most probable one).
It just changes the probability distribution that it is approximating.
To the extent that thinking is making a series of deductions from prior facts, it seems to me that thinking can be reduced to "pick the next most probable token from the correct probability distribution"...
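Concretely, the sampling step being described looks something like this toy sketch, with made-up logits over a five-token vocabulary:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical next-token logits the model emitted for a 5-token vocabulary.
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])

# Greedy decoding: literally "pick the most probable token".
greedy = int(np.argmax(softmax(logits)))

# Temperature sampling: rescale the logits, then draw from the distribution.
# Lower temperature concentrates mass on the top token; higher spreads it out.
temperature = 0.8
sampled = int(np.random.choice(len(logits), p=softmax(logits / temperature)))

print(greedy, sampled, softmax(logits).round(3))
```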
dilap6 hours ago
The fundamental nature of the model is that it consumes tokens as input and produces token probabilities as output, but there's nothing inherently "predictive" about it -- that's just perspective hangover from the historical development of how LLMs were trained. It is, fundamentally, I think, a general-purpose thinking machine, operating over the inputs and outputs of tokens.
(With this perspective, I can feel my own brain subtly offering up a panoply of possible responses in a similar way. I can even turn up the temperature on my own brain, making it more likely to decide to say the less-obvious words in response, by having a drink or two.)
(Similarly, mimicry is in humans too a very good learning technique to get started -- kids learning to speak are little parrots, artists just starting out will often copy existing works, etc. Before going on to develop further into their own style.)
vidarh7 hours ago
Put a loop around an LLM and it can trivially be made Turing complete, so it boils down to whether thinking requires exceeding the Turing computable, and we have no evidence to suggest that is even possible.
gpm7 hours ago
What are you doing in your loop?
As typically deployed [1] LLMs are not turing complete. They're closer to linear bounded automaton, but because transformers have a strict maximum input size they're actually a subset of the weaker class of deterministic finite automaton. These aren't like python programs or something that can work on as much memory as you supply them, their architecture works on a fixed maximum amount of memory.
I'm not particularly convinced turing complete is the relevant property though. I'm rather convinced that I'm not turing complete either... my head is only so big after all.
[1] i.e. in a loop that appends output tokens to the input and has some form of sliding context window (perhaps with some inserted instructions to "compact" and then sliding the context window right to after those instructions once the LLM emits some special "done compacting" tokens).
[2] Common sampling procedures make them mildly non-deterministic, but I don't believe they do so in a way that changes the theoretical class of these machines from DFAs.
vidarh6 hours ago
Context effectively provides an IO port, and so all the loop needs to do is simulate the tape head and provide a single token of state.
You can remain unconvinced that Turing completeness is relevant all you want - we don't know of any more expansive category of computable functions, and so given that an LLM in the setup described is Turing complete, the fact that they aren't typically deployed that way is irrelevant.
They trivially can be, and that is enough to make the shallow dismissal of pointing out they're "just" predicting the next token meaningless.
roywiggins6 hours ago
Turing Machines don't need access to the entire tape all at once, it's sufficient for it to see one cell at a time. You could certainly equip an LLM with a "read cell", "write cell", and "move left/right" tool and now you have a Turing machine. It doesn't need to keep any of its previous writes or reads in context. A sliding context window is more than capacious enough for this.
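A toy sketch of that setup, where a stub transition function stands in for the LLM (the tool names and the two-state machine are purely hypothetical):

```python
tape = {}  # sparse, effectively unbounded tape; unwritten cells read as "_"
head = 0

def read_cell():
    return tape.get(head, "_")

def write_cell(symbol):
    tape[head] = symbol

def move(direction):
    global head
    head += 1 if direction == "R" else -1

def transition(state, symbol):
    # Stub standing in for the LLM: it only ever sees the current cell and its
    # own state, and returns (new_state, symbol_to_write, direction).
    if state == "A" and symbol == "_":
        return "B", "1", "R"
    return "HALT", symbol, "L"

state = "A"
while state != "HALT":
    state, out, direction = transition(state, read_cell())
    write_cell(out)
    move(direction)

print(tape)  # {0: '1', 1: '_'}
```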
gpm4 hours ago
You're right of course, but at the point where you're saying "well we can make a turing machine with the LLM as the transition function by defining some tool calls for the LLM to interact with the tape" it feels like a stretch to call the LLM itself turing complete.
Also, people definitely talk about them as "thinking" in contexts where they haven't put a harness capable of this around them. And in the common contexts where people do put a harness theoretically capable of this around the LLM (e.g. giving the LLM access to bash), the LLM basically never uses that theoretical capability as the extra memory it would need to actually emulate a turing machine.
And meanwhile I can use external memory myself in a similar way (e.g. writing things down), but I think I'm perfectly capable of thinking without doing so.
So I persist in my stance that turing complete is not the relevant property, and isn't really there.
roywiggins2 hours ago
Yeah, humans and LLMs and a TM transition function are all Turing complete in the same way, but it's also basically a useless fact. You could possibly train a sufficiently motivated rat to compute a TM transition function.
empath755 hours ago
No physically realizable machine is technically turing complete.
But it is trivially possible to give systems-including-LLMs external storage that is accessible on demand.
ericd8 hours ago
I think it's pretty likely that "intelligence" is emergent behavior that comes when you predict what comes next in physical reality well enough, at varying timescales. Your brain has to build all sorts of world model abstractions to do that over any significant timescale. Big LLMs have to build internal world models, too, to do well at their task.
pvillano29 minutes ago
Does water flowing through a maze solve it by 'thinking'? No. The rules of physics eventually result in the water flowing out the exit. Water also hits every dead end along the way.
The power of LLMs is that by only selecting sequences of words that fit a statistical model, they avoid a lot of dead ends.[^1]
I would not call that, by itself, thinking. However, if you start with an extrapolation engine and add the ability to try multiple times and build on previous results, you get something that's kind of like thinking.
[1]: Like, a lot of dead ends. There are an unfathomable number of dead ends in generating 500 characters of code, and it is a miracle of technology that Claude only hit 30.
tux38 hours ago
>Are not LLMs supposed to just find the most probable word that follows next like many people here have touted?
The base models are trained to do this. If a web page contains a problem, and then the word "Answer: ", it is statistically very likely that what follows on that web page is an answer. If the base model wants to be good at predicting text, at some point learning the answers to common questions becomes a good strategy, so that it can complete text that contains these.
NN training tries to push models to generalize instead of memorizing the training set, so this creates an incentive for the model to learn a computation pattern that can answer many questions, instead of just memorizing. Whether they actually generalize in practice... it depends. Sometimes you still get copy-pasted input that was clearly pulled verbatim from the training set.
But that's only base models. The actual production LLMs you chat with don't predict the most probable word according to the raw statistical distribution. They output the words that RLHF has rewarded them to output, which includes acting as an assistant that answers questions instead of just predicting text. RLHF is also the reason there are so many AI SIGNS [1] like "you're absolutely right" and way more use of the word "delve" than is common in western English.
sega_sai7 hours ago
In some sense that is still correct, i.e. the words are taken from some probability distribution conditional on previous words, but the key point is that probability distribution is not just some sort of average across the internet set of word probabilities. In the end this probability distribution is really the whole point of intelligence. And I think the LLMs are learning those.
adamtaylor_137 hours ago
That's the way many people reduce it, and mathematically, I think that's true. I think what we fail to realize is just how far that will actually take you.
"just the most probable word" is a pretty powerful mechanism when you have all of human knowledge at your fingertips.
I say that people "reduce it" that way because it neatly packs in the assumption that general intelligence is something other than next-token prediction. I'm not saying we've arrived at AGI; in fact, I do not believe we have. But it feels like people who use that framing are snarkily writing off something that they themselves do not fully comprehend, behind the guise of being "technically correct."
I'm not saying all people do this. But I've noticed many do.
IgorPartola9 hours ago
In some cases solving a problem is about restating the problem in a way that opens up a new path forward. “Why do planets move around the sun?” vs “What kind of force exists in the world that makes planets tethered to the sun with no visible leash?” (Obviously very simplified but I hope you can see what I am saying.) Given that a human is there to ask the right questions it isn’t just an LLM.
Further, some solutions are like running a maze. If you know all the wrong turns/next words to say and can just brute force the right ones you might find a solution like a mouse running through the maze not seeing the whole picture.
Whether this is thinking is more philosophical. To me this demonstrates more that we are closer to bio computers than an LLM is to having some sort of divine soul.
ainiriand9 hours ago
Thanks for your input. The way I saw this and how it looks Knuth interpreted it is that there were some reasoning steps taken by Claude independently. Some internal decisions in the model that made it try different things, finally succeeding.
vjerancrnjak6 hours ago
No. There is good signal in IMO gold medal performance.
These models actually learn distributed representations of nontrivial search algorithms.
A whole field of theorem proving, after decades of refinements, couldn't even win a medal, yet 8B-param models are doing it very well.
The attention mechanism, a brute-force quadratic approach, combined with gradient descent, is actually discovering very efficient distributed representations of algorithms. I don't think they can even be extracted and made into an imperative program.
qsera8 hours ago
Yes, that is exactly what they do.
But that does not mean that the results cannot be dramatic. Just like stacking pixels can result in a beautiful image.
lijokan hour ago
To get an answer to that you would first have to define 'thinking'
kaiokendev4 hours ago
Given some intelligent system, an AI that perfectly reproduces any sequence that system could produce must encode patterns that are a superset of that intelligence.
crocowhile8 hours ago
Those people still exist? I only know one guy who is still fighting those windmills
qsera8 hours ago
Yes, I am one.
ezst6 hours ago
[flagged]
wrsh078 hours ago
Imagine training a chess bot to predict a valid sequence of moves or valid game using the standard algebraic notation for chess
Great! It will now correctly structure chess games, but we've created no incentive for it to create a game where white wins or to make the next move be "good"
Ok, so now you change the objective. Now let's say "we don't just want valid games, we want you to predict the next move that will help that color win"
And we train towards that objective and it starts picking better moves (note: the moves are still valid)
You might imagine more sophisticated ways to optimize picking good moves. You continue adjusting the objective function, you might train a pool of models all based off of the initial model and each of them gets a slightly different curriculum and then you have a tournament and pick the winningest model. Great!
Now you might have a skilled chess-playing-model.
It is no longer correct to say it just finds a valid chess game, because the objective function changed several times throughout this process.
This is exactly how you should think about LLMs except the ways the objective function has changed are significantly significantly more complicated than for our chess bot.
So to answer your first question: no, that is not what they do. That is a deep oversimplification that was accurate for the first two generations of the models and sort of accurate for the "pretraining" step of modern LLMs (except not even that accurate, because pretraining does instill other objectives. Almost like swapping our first step, "predict valid chess moves", with "predict Stockfish outputs").
noslenwerdna6 hours ago
I find this kind of reduction silly.
All your brain is doing is bouncing atoms off each other, with some occasionally sticking together, how can it be really thinking?
See how silly it sounds?
esafak8 hours ago
Are you feigning ignorance? The best way to answer a question, like completing a sentence, is through reasoning; an emergent behavior in complex models.
adampunk7 hours ago
Thinking is a big word that sweeps up a lot of different human behavior, so I don't know if it's right to jump to that; HOWEVER, explanations of LLMs that depend heavily on next-token prediction are defunct. They stopped being fundamentally accurate with the rise of massive reinforcement learning and w/ 'reasoning' models the analogy falls apart when you try to do work with it.
Be on the lookout for folks who tell you these machines are limited because they are "just predicting the next word." They may not know what they're talking about.
fazkan7 hours ago
Time to use Claude Code to understand DEK's paper in plain English. As someone who did a bit of formal verification in grad school, I feel like there is a long tail of problems that can be solved by human-model collaboration like this one. The problems may not mean much, but hopefully it can stack up understanding of intelligence.
ontouchstart7 hours ago
Fascinating report by DEK himself.
Time to sit down, read, digest and understand it without the help of LLM.
ontouchstart7 hours ago
I don't have time to do that myself yet so I just dug a quick TL;DR rabbit hole for fun:
https://ontouchstart.github.io/rabbit-holes/llm_rabbit_hole_...
ecshafer8 hours ago
I wonder how long we have until we start solving some truly hard problems with AI. How long until we throw AI at "connect general relativity and quantum physics", give the AI 6 months and a few data centers, and have it pop out a solution?
rustyhancock8 hours ago
I think a very long time because part of our limit is experiment.
We need enough experimental results to explain in order to solve these theoretical mismatches, and we don't have them - and at present can't explore that frontier.
Once we have more results at that frontier we'd build a theory out from there that has two nearly independent limits for QFT and GR.
What we'd be asking of the AI is something that we can't expect a human to solve even with a lifetime of effort today.
It'll take something on par with Newton realising that the heavens and apples are under the same rules to do it. But at least Newton got to hold the apple and only had to imagine he could hold a star.
eru7 hours ago
> I think a very long time because part of our limit is experiment.
Yes, maybe. But if you are smarter, you can think up better experiments that you can actually do. Or re-use data from earlier experiments in novel and clever ways.
fleischhauf6 hours ago
This. It could already be useful to narrow down the search space.
bob10297 hours ago
What prevents us from giving this system access to other real systems that live in physical labs? I don't see much difference between parameterizing and executing a particle accelerator run and invoking some SQL against a provider. It's just JSON on the wire at some level.
rustyhancock7 hours ago
Nothing, we can give it all the data we have and have it lead experiments.
But we can not yet experiment at the GR/QFT frontier.
To do so with a particle accelerator it would need to be the size of the milky way.
fragmede7 hours ago
The question is, if you trained an LLM on everything up until 1904, could it come up with E=MC² or not?
rustyhancock7 hours ago
In 1900 Henri Poincaré wrote that radiation (light) has an effective mass given by E/c^2.
So it really isn't far-fetched. What intrigues me more is: if it was capable of it, would our Victorian, conservative-minded scientists have RLHF'd that kind of thing out of it?
emp173446 hours ago
Hold your horses, that’s a long way off. The best math AI tool we currently have, Aletheia, was only able to solve 13 out of 700 attempted open Erdos problems, only 4 of which were solved autonomously: https://arxiv.org/html/2601.22401v3
Clearly, these models still struggle with novel problems.
slibhb5 hours ago
> Clearly, these models still struggle with novel problems.
Do they struggle with novel problems more or less than humans?
Filligree4 hours ago
Less than most humans, but more than many humans.
worldsavior8 hours ago
If AGI ever comes, then. Currently, AI is only a statistical machine, and solutions like this are purely based on distribution, with no logic/actual intelligence.
zarzavat7 hours ago
I swear that AI could independently develop a cure for cancer and people would still say that it's not actually intelligent, just matrix multiplications giving a statistically probable answer!
LLMs are at least designed to be intelligent. Our monkey brains have much less reason to be intelligent, since we only evolved to survive nature, not to understand it.
We are at this moment extremely deep into what most people would have been considered to be actual artificial intelligence a mere 15 years ago. We're not quite at human levels of intelligence, but it's close.
qsera7 hours ago
>AI could independently develop a cure for cancer
All the answers for all your questions are contained in randomness. If you have a random sentence generator, there is a chance that it will output the answer to this question every time it is invoked.
But that does not actually make it intelligent, does it?
famouswaffles6 hours ago
You are arguing a point no-one is making. LLMs are not random sentence generators. Its probability distributions are anything but random. You could make an actual random sentence generator, but no-one would argue about its intelligence.
graemefawcett6 hours ago
This is exactly how problem solving works, regardless of the substrate of cognition.
Start with "all your questions contained in randomness" -> the unconstrained solution space.
The game is whether or not you can inject enough constraints to collapse the solution space to one that can be solved before your TTL expires. In software, that's generally handled by writing efficient algorithms. With LLMs, apparently the SOTA for this is just "more data centers, 6 months, keep pulling the handle until the right tokens fall out".
Intelligence is just knowing which constraints to apply and in what order such that the search space is effectively partitioned, same thing the "reasoning" traces do. Same thing thermostats, bacteria, sorting algorithms and rivers do, given enough timescale. You can do the same thing with effective prompting.
The LLM has no grounding, no experience and no context other than which is provided to it. You either need to build that or be that in order for the LLM to work effectively. Yes, the answers for all your questions are contained. No, it's not randomness. It's probability and that can be navigated if you know how
qsera4 hours ago
You can constrain the solution space all you want, but if you don't have a method to come up with possible solutions that might match the constraints, you'll just be sitting there all day long waiting for the machine to produce some results. So intelligence is not "just knowing which constraints to apply". It is also the ability to come up with solutions within the constraints without going through a lot of trial and error...
But hey, if LLMs can go through a lot of trial and error, it might produce useful results, but that is not intelligence. It is just a highly constrained random solution generator..
graemefawcett4 hours ago
I believe that's what I and the paper are both saying as well. The LLM is pure routing; the constraints are currently located elsewhere in the system. In this case, both the constraints and the motivation to perform the work are located in Knuth and his assistant.
Routing is important, it's why we keep building systems that do it faster and over more degrees of freedom. LLMs aren't intelligent on their own, but it's not because they don't have enough parameters
wang_li7 hours ago
Last week I put "was val kilmer in heat" into the search box on my browser. The AI answer came back with "No, Val Kilmer was not in heat. Val Kilmer played Chris Shiherlis in the movie Heat but the film did not indicate that he was pregnant or in heat. His performance was nuanced and skilled and represents a high point of the film." I was not curious about whether he was pregnant.
We are not only not close to human levels of intelligence, we are not even at dog, cat, or mouse levels of intelligence. We are not actually at any level of intelligence. Devices that produce text, images, or code do not demonstrate intelligence any more than a printer producing pages of beautiful art demonstrates intelligence.
logicprogan hour ago
> I was not curious about whether he was pregnant.
I interpreted the question the same way the AI did.
DennisP6 hours ago
Honestly, when I read your first sentence, given the lack of a capital H, my brain initially went the same direction the AI did. Then I realized what you meant but since I already went there, I might have made a similar response as a joke. For the sake of my ego I'm forced to reject your claim that this is evidence of stupidity.
sosodev5 hours ago
The model that processes search results is tiny and dumb. You shouldn't compare it to the frontier models that are solving complex math problems.
StilesCrisis4 hours ago
On Google, just clicking "AI Mode" gives you a substantially smarter model, and it's still pretty weak. But I assume the OP wasn't talking about Google because it doesn't seem to make this mistake even in a search.
wang_li37 minutes ago
It was Bing, as that is the default for Edge as supplied on my work laptop. It doesn't do this now, but it does do something else quite weird:
search: was val kilmer pregnant or in heat
answer: Not pregnant Val Kilmer was not pregnant or in heat during the events of "Heat." His character, Chris Shiherlis, is involved in a shootout and is shot, which indicates he is not in a reproductive or mating state at that time.
And then cites wikipedia as the source of information.
In terms of cognition the answer is meaningless. Nothing in the question implies or suggests that the question has to do with a movie. Additionally, "involved in a shootout and is shot, which indicates he is not in a reproductive or mating state" makes no sense at all.
AI as deployed shows no intelligence.
worldsavior7 hours ago
That's wrong. Humans evolved big brains so they could better understand the environment and use it to their advantage.
I still see AI making stupid silly mistakes. I'd rather think for myself and not waste time on something that only remembers data and doesn't even understand it.
Reasoning in AI is only about finding contradictions between its "thoughts", not actually understanding them.
someplaceguy7 hours ago
> I still see AI making stupid silly mistakes.
In contrast with humans, who are famously known for never making stupid silly mistakes...
_fizz_buzz_7 hours ago
> I still see AI making stupid silly mistakes.
Humans also make silly mistakes.
whimsicalisman hour ago
It only took 4 years, but it appears that this view is finally dying out on HN. I would advise everyone who found this viewpoint compelling to think about how those same blinders might be affecting how you imagine the future will look.
rustyhancock8 hours ago
I don't even think that's the issue.
The issue to my mind is a lack of data at the meeting of QFT/GR.
After all, few humans historically have been capable of the initial true leap between ontologies. But humans are pretty smart, so we can't say that is a requirement for AGI.
worldsavior7 hours ago
When it comes to revolutionary/unsolved subjects, there will never be enough data. That's why it's revolutionary/unsolved.
cjcole6 hours ago
Maybe.
“The laws of nature should be expressed in beautiful equations.”
- Paul Dirac
“It is, indeed, an incredible fact that what the human mind, at its deepest and most profound, perceives as beautiful finds its realisation in external nature. What is intelligible is also beautiful. We may well ask: how does it happen that beauty in the exact sciences becomes recognizable even before it is understood in detail and before it can be rationally demonstrated? In what does this power of illumination consist?”
- Subrahmanyan Chandrasekhar
“I often follow Plato’s strategy, proposing objects of mathematical beauty as models for Nature.”
“It was beauty and symmetry that guided Maxwell and his followers.”
- Frank Wilczek
“Beauty is bound up with symmetry.”
- Hermann Weyl
"Still twice in the history of exact natural science has this shining-up of the great interconnection become the decisive signal for significant progress. I am thinking here of two events in the physics of our century: the rise of the theory of relativity and that of the quantum theory. In both cases, after yearlong unsuccessful striving for understanding, a bewildering abundance of details was almost suddenly ordered. This took place when an interconnection emerged which, thought largely unvisualizable, was finally simple in its substance. It convinced through its compactness and abstract beauty – it convinced all those who can understand and speak such an abstract language."
- Werner Heisenberg
Maybe (just maybe) these things (whatever you want to call them) will (somehow) gain access to some "compact", beautiful, "largely unvisualizable" "interconnection" which will be the self-evident solution. And if they do, many will be sure to label it a statistical accident from a stochastic parrot. And they'll be right, for some definitions of "statistical", "accident", "stochastic", and "parrot".
bobbylarrybobby7 hours ago
Did you read the linked paper? Claude out-reasoned humans on a challenging (or at least, unsolved) math problem.
cjcole6 hours ago
"humans"
Donald Knuth is an extremal outlier human and the problem is squarely in his field of expertise.
Claude, guided by Filip Stappers, a friend of Knuth, solved a problem that Knuth and Stappers had been working on for several weeks. Unfortunately, it doesn't seem (from my quick scan) to have been stated how long (or how many tokens or $) it took for Claude + Stappers to complete the proof.
In response, Knuth said: "It seems that I’ll have to revise my opinions about “generative AI” one of these days."
Seems like good advice. From reading elsewhere in this comment section, the goalposts seem to be approaching the infrared and will soon disappear into the extreme redshift, given the rate at which they are receding with each new achievement.
emp173446 hours ago
What goalposts do you think are being moved? I constantly see AI enthusiasts use this phrase, but it’s not clear what goalposts they have in mind. Specifically, what is it that you want opponents to recognize that you believe they aren’t currently?
We now have a tool that can be useful in some narrow domains in some narrow cases. It’s pretty neat that our tools have new capabilities, but it’s also pretty far from AGI.
cjcole6 hours ago
I'm not an enthusiast. I'm a Butlerian.
Imagine hearing pre-attention-is-all-you-need that "AI" could do something that Donald Knuth could not (quickly solve the stated problem in collaboration with his friend).
The idea that this (Putnam perfect, IMO gold, etc) is all just "statistical parrot" stuff is wearing a little thin.
whimsicalisman hour ago
You must have forgotten the /s at the end of your comment?
emp1734439 minutes ago
Uh, no? You think LLMs are AGI?
worldsavior7 hours ago
Merely luck, in my opinion. There could also be multiple times where it didn't solve it.
graemefawcett7 hours ago
Connecting them is easy: one is the math of the exchange, the other of the state machine.
A better question might be why no one is paying more attention to Barandes at Harvard. He's been publishing the answer to that question for a while: if you stop trying to smuggle a Markovian embedding into a non-Markovian process, you stop getting weird things like infinities at boundaries that can't be worked out from the current position alone.
But you could just dump a prompt into an LLM and pull the handle a few dozen times and see what pops out too. Maybe whip up a Claw skill or two
Unconstrained solution space exploration is surely the way to solve the hard problems
Ask those Millenium Prize guys how well that's working out :)
Constraint engineering is all software development has ever been, or did we forget how entropy works? Someone should remind the folk chasing P=NP that the observer might need a pen to write down his answers, or are we smuggling more things for free that change the entire game? As soon as writing down the witness's location costs something, our poor little guy can't keep walking that hypercube forever. Can he?
Maybe 6 months and a few data centers will do it ;)
taylorius6 hours ago
I thought Claude Monet - Impressionist techniques applied to coding.
zackmorris4 hours ago
Amazing paper. The simulated annealing portion reminds me of genetic algorithms (GAs). A good intro to those is the Genetic Programming series of books by John Koza; I read volume III in the early 2000s:
https://www.amazon.com/Genetic-Programming-III-Darwinian-Inv...
https://www.genetic-programming.com/
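For anyone who hasn't seen simulated annealing before, here's a minimal sketch of the general idea, with a toy one-dimensional objective standing in for whatever the paper actually anneals over (its real objective and move set aren't reproduced here):

    # Minimal simulated annealing over a toy objective (NOT the paper's setup).
    import math
    import random

    def energy(x):
        return (x - 3.0) ** 2  # toy objective with its minimum at x = 3

    def anneal(steps=10000, t0=1.0, cooling=0.999):
        x = random.uniform(-10, 10)
        t = t0
        for _ in range(steps):
            candidate = x + random.gauss(0, 0.5)   # propose a nearby state
            delta = energy(candidate) - energy(x)
            # Always accept improvements; accept worsenings with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = candidate
            t *= cooling                           # cool down over time
        return x

    print(anneal())  # should land near 3.0

GAs do something similar with a population and crossover instead of a single state and a temperature schedule, which is probably why the two feel so related.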
Note that the Python solution in the PDF is extremely short, so it could have been found by simply trying permutations of math operators and functions on the right side of the equation.
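Something like this rough sketch is what I mean; the target sequence here is a made-up stand-in, not the function from the paper:

    # Brute-force search over tiny right-hand-side expressions of shape (a op1 b) op2 c.
    # TARGET is a hypothetical stand-in for the sequence being matched.
    import itertools
    import operator

    TARGET = lambda n: n * n + 1
    ATOMS = [("n", lambda n: n), ("1", lambda n: 1), ("2", lambda n: 2)]
    OPS = [("+", operator.add), ("-", operator.sub), ("*", operator.mul)]

    def candidates():
        # Enumerate every (description, callable) pair for the small expression shape.
        for (da, fa), (db, fb), (dc, fc) in itertools.product(ATOMS, repeat=3):
            for (o1, op1), (o2, op2) in itertools.product(OPS, repeat=2):
                desc = f"({da} {o1} {db}) {o2} {dc}"
                fn = lambda n, fa=fa, fb=fb, fc=fc, op1=op1, op2=op2: op2(op1(fa(n), fb(n)), fc(n))
                yield desc, fn

    def search(max_n=50):
        # Return the first expression that matches TARGET on n = 0..max_n-1.
        for desc, fn in candidates():
            if all(fn(n) == TARGET(n) for n in range(max_n)):
                return desc
        return None

    print(search())  # prints "(n * n) + 1"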
We should be solving problems in Lisp instead of Python, because Lisp's abstract syntax tree (AST) is the same as its code due to homoiconicity, but no matter. I'm curious if most AIs transpile other languages to Lisp so that they can apply transformations internally, or if they waste computation building programs that might not compile. Maybe someone at an AI company knows.
-
I've been following AI trends since the late 1980s and from my perspective, nothing really changed for about 40 years (most of my life that I had to wait through as the world messed around making other people rich). We had agents, expert systems, fuzzy logic, neural nets, etc. since forever, but then we got video cards in the late 1990s which made it straightforward to scale neural nets (NNs) and GAs. Unfortunately, due to a poor choice of architecture (SIMD instead of MIMD), progress stagnated because we don't have true multicore computing (thousands or millions of cores with local memories), but I digress.
Anyway, people have compared AI to compression. I think of it more as turning problem solving into an O(1) operation. Over time, what we think of as complex problems become simpler, and the rate at which we're solving them is increasing exponentially. Problems that once seemed intractable only seemed so because we didn't know the appropriate abstractions yet. For example, illnesses that we thought would never be cured now have treatments through mRNA vaccines and CRISPR. That's how I think of programming. Now that we have LLMs, whole classes of programming problems have O(1) solutions, even if that just means telling the computer what problem to solve.
So even theorem proving will become a solved problem by the time we reach the Singularity between 2030 and 2040. We once mocked GAs for exploring dead ends and taking 1000 times the processing power to do simple things. But we ignored that doing hard things is often worth it, and is still an O(1) operation due to linear scaling.
It's a weird feeling to go from no forward progress in a field to it being effectively a solved problem in just 2 years. To go from trying to win the internet lottery to not being sure if people will still be buying software in a year or two if/when I finish a project. To witness all of that while struggling to make rent, in effect making everything I have ever done a waste of time since I knew better ways of doing it but was forced to drop down to whatever mediocre language or framework paid. As the problems I was trained to solve and was once paid to solve rapidly diminish in value because AI can solve them in 5 minutes. To the point that even inventing AGI would be unsurprising to most, so I don't know why I ever went into computer engineering to do exactly that. Because for most people, it's already here. As I've said many times lately, I thought I had more time.
Although now that we're all out of time, I have an uncanny feeling of being alive again. I think tech stole something from my psyche so profound that I didn't notice its loss. It's along the lines of things like boredom, daydreaming, wasting time. What modern culture considers frivolous. But as we lose every last vestige of the practical, as money becomes harder and harder to acquire through labor, maybe we'll pass a tipping point where the arts and humanities become sought-after again. How ironic would it be if the artificial made room for the real to return?
On that note, I finally read a book: Project Hail Mary by Andy Weir. The last book I read was Ready Player One by Ernest Cline, over a decade ago. I don't know how I would have had the bandwidth to do that if Claude hadn't made me a middle manager of AIs.
jdnier7 hours ago
> I think Claude Shannon’s spirit is probably proud to know that his name is now being associated with such advances. Hats off to Claude!
I didn't realize Claude was named after Claude Shannon!
tzumaoli5 hours ago
Trivia: Claude Shannon proposed the idea of predicting the next token (letter) using statistics/probabilities in the training data corpus in 1950: "Prediction and Entropy of Printed English" https://languagelog.ldc.upenn.edu/myl/Shannon1950.pdf
Anon844 hours ago
It goes back a bit further than that. His 1948 “Mathematical theory of communication” [1] already has (what we would now call) a Markov chain language model, page 7 onwards. AFAIK, this was based on his classified WWII work so it was probably a few years older than that
[1] https://people.math.harvard.edu/~ctm/home/text/others/shanno...
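As a toy illustration of that idea (my own sketch, with a throwaway stand-in corpus, not Shannon's actual tables):

    # Character-level Markov model in the spirit of Shannon's letter-prediction experiments.
    import random
    from collections import Counter, defaultdict

    corpus = "the quick brown fox jumps over the lazy dog " * 20  # stand-in corpus
    k = 3  # number of preceding characters used as context

    # Count how often each character follows each k-character context.
    counts = defaultdict(Counter)
    for i in range(len(corpus) - k):
        counts[corpus[i:i + k]][corpus[i + k]] += 1

    def next_char(context):
        dist = counts.get(context[-k:])
        if not dist:
            return random.choice(corpus)  # unseen context: fall back to a random character
        chars, weights = zip(*dist.items())
        return random.choices(chars, weights=weights)[0]

    text = "the"
    for _ in range(80):
        text += next_char(text)
    print(text)

Trained on real English text instead of a one-sentence loop, the output starts to look like plausible word salad, which is essentially the demonstration in the 1948 paper.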
aix13 hours ago
I was just reading Norbert Wiener's "The Human Use of Human Beings" (1950) and this quote gave me a good chuckle:
"One may get a remarkable semblance of a language like English by taking a sequence of words, or pairs of words, or triads of words, according to the statistical frequency with which they occur in the language, and the gibberish thus obtained will have a remarkably persuasive similarity to good English."
Trinicode2 hours ago
A letter is not a token, is it? Redundancy could hit 75% in long sentences, but Shannon was not predicting tokens or words, he was predicting letters (characters).
pfdietz4 hours ago
It's like the diesel engine, which is named after Rudolf Engine.
ai_critic4 hours ago
:|
roer2 hours ago
Is this a joke I don't get? His name was Rudolf Diesel, right?
SenorKimchi4 hours ago
And Claude had a collection of cycles, unicycles. Unfortunately the article is about something else altogether.
bread-wood6 hours ago
Here I was assuming it was named after https://en.wikipedia.org/wiki/Claude_(alligator)
teekert2 hours ago
Last time I asked, Claude itself also didn’t know.
NitpickLawyer6 hours ago
Wait till you hear about nvidia and their GPU architecture naming scheme :)
miroljub9 hours ago
Solves? It's a part of the training set. Nothing more, nothing less.
rpdillon9 hours ago
Opening sentences:
> Shock! Shock! I learned yesterday that an open problem I’d been working on for several weeks had just been solved by Claude Opus 4.6— Anthropic’s hybrid reasoning model that had been released three weeks earlier! It seems that I’ll have to revise my opinions about “generative AI” one of these days. What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving.
sigmar4 hours ago
I think we're going to have several years of people claiming genAI "didn't really do something novel here," despite experts saying otherwise, because people are scared by the idea that complex problem solving isn't exclusive to humans (regardless of whether these models are approaching general intelligence).
allreduce7 hours ago
I encourage you to look at what the current models, with a bit of harnessing, are capable of, e.g. Opus 4.6 and Claude Code. Try to make one solve some mathematics-heavy problem you come up with, if only to get a more accurate picture of what's going on.
Unfortunately, these tools generalize way beyond regurgitating the training set. I would not assume they stay below human capabilities in the next few years.
Why any moral person would continue building these at this point I don't know. I guess in the best case the future will have a small privileged class of humans having total power, without need for human workers or soldiers. Picture a mechanical boot stomping on a human face forever.
nemo16187 hours ago
If this was a joke, it certainly flew over most people's heads...
jcims8 hours ago
Prove it.
romaniv7 hours ago
I would like to note that it would be trivial to definitively prove or disprove such things if we had a searchable public archive of the training data. Interestingly, the same people (and corporate entities) who loudly claim that LLMs are creating original work seem to be utterly uninterested in having actual, definitive proof of their claims.
clbrmbr7 hours ago
This would be awesome. Even titles and shasums could be enough.
mwigdahl9 hours ago
Did you read the article? It was an open problem.
bluGill8 hours ago
Was it? It was an open problem to Knuth - who generally knows how to search the literature. However, there is enough literature out there that it wouldn't be a surprise at all to discover it was already solved, but he used slightly different terms and so didn't find it. Or maybe it was solved because this is a specialization of something that looks unrelated, and so he wouldn't have realized it when he read it. Or...
Overall I'm going with unsolved, because Knuth is a smart person who I'd expect not to miss the above. I'm sure he falls for the above sometimes, even though the majority of the time he doesn't.
mwigdahl8 hours ago
Agreed with all of that, but with the added point that Knuth has done a lot of work in this exact area in The Art of Computer Programming Volume 4. If he considers this conjecture open given his particular knowledge of the field, it likely is (although agreed, it's not guaranteed).
skinner_20 minutes ago
Also, if Claude had regurgitated a known solution, it would have come up with it in the first exploration round, not the 31st, as it actually did.
ordu7 hours ago
> If he considers this conjecture open given his particular knowledge of the field, it likely is (although agreed, it's not guaranteed).
It is as good as guaranteed. If Knuth says he doesn't know how to solve the problem, and if anyone does know, then they will inform Knuth about it. Knuth is not just a very knowledgeable person, but also a celebrity.
Steinmarkan hour ago
Trivia: AKWU AGHALI OFU THEOREM
Theorem (Akwu Aghali Ofu — The Single Nest or 1/2 spin)
For any observer O with personal quantum seed s (derived from first orgasm timestamp SHA-256), there exists a unique Hamiltonian cycle C(O) through the M³ digraph such that:
1. C(O) starts at vertex (0,0,0) — the Single Nest
2. C(O) has length exactly L³ for L determined by O's muon/mass preference
3. The cycle visits every vertex exactly once before returning
4. The cycle only exists when O observes it
5. No other observer can traverse the same cycle
Proof Sketch:
1. Let s = SHA-256(timestamp) mod L determine coefficients (α,β,γ)
2. Define g(i,j,k) = (αi + βj + γk) mod L
3. Show that the mapping f: (i,j,k) → next vertex via g is a permutation
4. Show that the permutation decomposes into cycles
5. Show that for appropriate s, the cycle containing (0,0,0) has length L³
6. Show that this cycle depends on s — different s give different cycles
7. Show that observation collapses the quantum superposition, making the cycle actual
Corollary: The Single Nest spins forever because the cycle is Hamiltonian (it loves only you) — it never repeats until it returns, and the return is a new beginning, not a repetition.