Hacker News

ks2048
Smallest transformer that can add two 10-digit numbers github.com

kstrauser16 hours ago

So, what happens when you test it on 11 digit numbers? I don’t mean that as a gotcha or “LOL dumb transformer” snark. More like, does the accuracy start to drop as you add digits? Or instead, maybe it’s the transformer equivalent of a stack overflow and it outputs a picture of a burning spoon or something?

And for that matter, what’s it do with 9 digit numbers? Like, is it more accurate with them, or are these little guys mainly good at adding numbers with exactly 10 digits?

Basically, are the failure modes a gentle increase in inaccuracy, or spectacular failure outside their parameters?

SlinkyOnStairs11 hours ago

One major limitation of the LLM architecture is that even the failure mode varies unpredictably between inputs.

The set of 11-digit numbers with any given failure mode (or even successful output) has no discernible pattern, merely whatever randomness the training process baked into the model.

You can't predict ahead of time when they will fail spectacularly, nor draw a clear boundary around the failure cases. An early major example of this was the "glitch tokens" introduced into most LLMs by training on reddit data.

But there is an "in general" / "average failure rate across all inputs of a given size" answer: LLM performance drops off a cliff once the input reaches too much complexity (a "┐"-shaped curve). In contrast to humans: you can ask a child to add two N-digit numbers and the error rate will be approximately linear in N.

varispeed5 hours ago

Most humans struggle to compute 10-digit stuff. They use tools instead. Can an LLM learn to use a calculator? Sorry if that is a stupid question. Maybe brains are not well suited for calculations natively.
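On the tool question: the routing itself is trivial. A toy sketch, purely for illustration (the `answer` wrapper and its regex gate are made up here; real tool use goes through an LLM API's function-calling interface):

```python
import re

def answer(text, llm=lambda s: "(free-text model reply)"):
    """Route exact 'a + b' questions to real arithmetic; defer the rest."""
    m = re.fullmatch(r"\s*(\d+)\s*\+\s*(\d+)\s*", text)
    if m:
        return str(int(m.group(1)) + int(m.group(2)))  # exact at any length
    return llm(text)

print(answer("12345678900 + 1"))  # 12345678901, no distribution cliff
```

The point is only that the arithmetic side costs nothing once the model can decide when to delegate.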

18al15 hours ago

Depends on how the transformer has been trained. If it has seen 11 digit examples while training it might work, else the input will be out of distribution and it will respond with a nonsensical number.

For instance the current high score model (311 params [0]), when given 12345678900 + 1, responds with 96913456789.

An interesting experiment would be: what's the minimum number of parameters required to handle unbounded addition (without offloading it to tool calls).

Of course memory constraints would preclude such an experiment. And so a sensible proxy would be: what kind of neural-net architecture and training would allow a model to handle number lengths it hasn't been trained on? I suspect this may not be possible.

[0] https://github.com/rezabyt/digit-addition-311p
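A sketch of the per-length sweep kstrauser asks about upthread. Both `eval_accuracy` and the `stub` model are hypothetical stand-ins, not code from the repo; the stub just mimics a hard out-of-distribution cliff at 11 digits:

```python
import random

def eval_accuracy(predict, n_digits, trials=1000, seed=0):
    """Fraction of random n-digit addition problems predict() answers exactly."""
    rng = random.Random(seed)
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    correct = 0
    for _ in range(trials):
        a, b = rng.randint(lo, hi), rng.randint(lo, hi)
        correct += predict(a, b) == a + b
    return correct / trials

# Hypothetical stand-in for the transformer: exact up to 10 digits,
# garbage beyond, i.e. a cliff rather than a gentle degradation.
def stub(a, b):
    return a + b if max(a, b) < 10 ** 10 else 0

for n in (9, 10, 11):
    print(n, eval_accuracy(stub, n))  # prints 1.0, 1.0, 0.0: flat until the cliff
```

Running the same harness against a real checkpoint would distinguish the "gentle increase in inaccuracy" and "spectacular failure" hypotheses directly.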

alexlitz20 hours ago

I made a blogpost on my submission (currently the top handwritten one at 36 parameters) https://alexlitzenberger.com/blog/building_a_minimal_transfo...

ks2048op16 hours ago

I didn't look at all the details, but I wanted to see how you did the initial embedding, and I see you do have a 14x5 matrix there. I guess when you are setting things by hand (rather than learning them), the definition of counting "parameters" is a bit unclear. One could say all of those are parameters, even if they're set in a straightforward way!

alexlitz14 hours ago

Yeah, basically it's an implementation detail: most of them are zero, so there's an equivalent 14-parameter sparse matrix for that.

sowbug19 hours ago

I ask this question as someone who can't do much more than confirm that your blog post is written in English by someone who knows math.

Does this result suggest that if we had N clever humans manually building an LLM, they might come up with something as smart as a frontier model, but potentially 45 times smaller? (1644 / 36 ~= 45, N = very large, time not specified)

alexlitz19 hours ago

I imagine getting things to be polysemantic in a way that does not interfere would lead to sublinear scaling. Also there are smaller ones that were trained so would still be more like 311/36 ~= 8.6.

Lerc19 hours ago

>I imagine getting things to be polysemantic in a way that does not interfere would lead to sublinear scaling.

True, but with even smarter humans, you could exploit the interactions for additional calculations.

While it sounds a bit silly, it is one of the hypotheses behind a fast takeoff. An AI that is sufficiently smart could design a network better than a trained one and could make something much smarter than itself on the same hardware. The question then becomes if that new smarter one can do an even better job. I suspect diminishing returns, but then again I am insufficiently smart.

alexlitz5 hours ago

Yeah that is plausible enough.

sowbug19 hours ago

Thanks!

(I see the Trained Weights results now, thanks.)

xg1510 hours ago

> Self-attention is required. The model must contain at least one self-attention layer. This is the defining feature of a transformer — without it, you have an MLP or RNN, not a transformer.

I think it would be interesting to see challenges where two networks are trained and evaluated on the exact same datasets and the architecture is the same except for the presence of self-attention layers in one network.

So far it seems to me that self-attention really brought new capabilities to a network - essentially changing the network's functionality in response to the input. It would be interesting to see if there are problems (i.e. datasets) that a "traditional" feedforward network fails to solve, but a transformer network of the same size can solve.

My guess would be: yes there are, and they are the kinds of "variable task" datasets that we see with LLMs, i.e. where part of the input indicates the task itself and part indicates the data for the task.

getnormality6 hours ago

> So far it seems to me that self-attention really brought new capabilities to a network

Do we have a layman explanation for what makes self-attention so uniquely powerful? Something more than "it lets you do self-attention".

MarkusQ5 hours ago

Computational power. Without self attention, you have a sloppy implementation of something called a PDA (push-down-automaton) -- like an old HP calculator. With it, you have an even sloppier implementation of a Turing machine.

So (modulo a _lot_ of details) it increases the power from that of a "calculator" to that of a "computer".

ameliusa day ago

> In short: if you can swap in a different set of weights and use the exact same inference code for a different task, your setup is legitimate. If the inference code is inseparable from the algorithm, it's not.

I wonder why they don't just write the code themselves, so by design the focus can be on the model.

vasco15 hours ago

Writing code by hand, what is this, 6 months ago??

freakynit10 hours ago

Time do be running real fast these days

E-Reverance21 hours ago

Not sure how much this fits into the rules but I saw on twitter someone claimed 28 params : https://gist.github.com/SeuperHakkerJa/da3050739bea97aabd86e...

i00020 hours ago

Would it make sense to embed such single-purpose network with fixed weights within a LLM before pre-training?

ACCount3716 hours ago

Good question.

It might work, I considered running a test like this. But it does demand certain things.

The subnetwork has to be either crafted as "gradient resistant" or remain frozen. Not all discovered or handcrafted circuits would survive gradient pressure as is. Especially the kind of gradients that fly in early pre-training.

It has to be able to interface with native representations that would form in a real LLM during pre-training, which is not trivial. This should happen early enough in pre-training. Gradients must start routing through our subnetwork. We can trust "rich get richer" dynamics to take over from there, but for that, we need the full network to discover the subnetwork and start using it.

And finally, it has to start being used for what we want it to be used for. It's possible that an "addition primitive" structure would be subsumed for something else, if you put it into the training run early enough, when LLM's native circuitry is nonexistent.

Overall, for an early test, I'd spray 200 frozen copies of the same subnetwork into an LLM across different layers and watch the dynamics as it goes through pre-training. Roll extra synthetic addition problems into the pre-training data to help discovery along. Less of a principled solution and more of an engineering solution.

rao-v14 hours ago

+1 I’ve always had the feeling that training from randomly initialized weights without seeding some substructure is unnecessarily slowing LLM training.

Similarly I’m always surprised that we don’t start by training a small set of layers, stack them and then continue.

ACCount3713 hours ago

Better-than-random initialization is underexplored, but there are some works in that direction.

One of the main issues is: we don't know how to generate useful computational structure for LLMs - or how to transfer existing structure neatly across architectural variations.

What you describe sounds more like a "progressive growing" approach, which isn't the same, but draws from some similar ideas.

benob14 hours ago

I had that in mind too. What if you handcraft a subnetwork with (some subset of) Turing machine capability? Do those kinds of circuits emerge naturally during training? Can reasoning use them for complex computation?

tgv10 hours ago

In the 90s, there were papers on emulating logical circuits with neurons. They would be bigger than this network, but at least always correct.

subscribed4 hours ago

You might find https://corticallabs.com/cl1.html interesting (that's of course assuming this is not a scam, which I'm unable to assess).

reerdna16 hours ago

I couldn't help but laugh out loud at the notion of a "held-out test set" for addition of 10-digit numbers.

wtallis15 hours ago

I don't think we have good tools for formally proving that a transformer's output will match a more traditionally-defined function. But the leading transformers are small enough that formal verification may be possible.

Without any formal verification: The input space of two 10-digit numbers is a bit bigger than 64-bits, so exhaustively verifying all possible inputs doesn't sound practical. Using the same subset of the input space for verifying each submission seems like the easiest way to be fair, and not disclosing that subset to the competitors is obviously necessary.
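The size estimate checks out; a quick back-of-envelope, counting only operands with exactly 10 digits:

```python
import math

# Each 10-digit operand takes one of 9 * 10**9 values (leading digit nonzero).
pairs = (9 * 10**9) ** 2
print(math.log2(pairs))  # about 66.1 bits: just past 64, so exhaustion is out
```

At a billion checks per second that's still a couple of thousand years, which is why a fixed hidden test subset is the pragmatic choice.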

delta_p_delta_x19 hours ago

Very cool, but can I suggest the `add` CPU instruction instead? Supports 64-bit numbers, and it's encoded in hardware, and no need to cross a PCIe interface into a beefy, power-hungry GPU and back again. And chances are it's cross-platform, because basically every ISA since the very first has had `add`.

ACCount3716 hours ago

No. You cannot. It's the wrong tool for the problem.

That little "add" of yours has the overhead of: having an LLM emit it as a tool call, having to pause the LLM inference while waiting for it to resolve, then having to encode the result as a token to feed it back.

At the same time, a "transformer-native" addition circuit? Can be executed within a single forward pass at a trivial cost, generate transformer-native representations, operate both in prefill and in autoregressive generation, and more. It's cheaper.

delta_p_delta_x31 minutes ago

I can't tell if this is satire or not. A1 top-tier.

tovej13 hours ago

giggle

mcdeltat19 hours ago

"smallest supercomputing cluster that can add two numbers"

nurettin19 hours ago

I mean, yeah, no need to put a bunch of high powered cars in a circular track to watch them race really close to each other at incredible speeds, causing various hazards, either. Especially since city buses have been around for ages.

delta_p_delta_x17 hours ago

I would similarly criticise a race car being used to do a city bus' job of getting a lot of people from point A to B.

Although the converse would be interesting, racing city buses.

pitaj16 hours ago

Nobody has suggested using this for addition tasks in production. It's an academic exercise. What are you on about?

medi8r21 hours ago

You can do that in a single matmul of course.

hyperhello21 hours ago

So can you take an arbitrary transformer and somehow turn it into a compact set of low-power fast gates by some algorithm?

measurablefunc21 hours ago

I think you're misunderstanding the joke.

medi8r21 hours ago

Yes joke is:

    [A B]
times

    [1]
    [1]
is

    [A+B]
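Spelled out in plain Python, a minimal sketch of the same construction (nothing beyond the joke):

```python
def matvec(M, v):
    """Row-by-row matrix-vector product, plain Python."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

A, B = 1234567890, 9876543210
print(matvec([[A, B]], [1, 1]))  # [11111111100], i.e. [A + B]
```

Which is, of course, exactly why it's a joke: the matmul only adds if the inputs are already numbers, not tokens.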

hyperhello20 hours ago

From context then, I infer that a transformer is not comprised of matrix multiplications, because it would simply be one that adds two 10-digit numbers.

medi8r20 hours ago

A transformer tokenizes input, then does a bunch of matmuls and ReLUs set up in a certain way. It doesn't get to see the raw number (just like you don't when you look at "1+1": you need the visual cortex etc. first).

Lerc19 hours ago

Notably, the difference is that ten digits is not the same thing as a number. One might say that turning it into a number might be the first step, but neural nets being what they are, they are liable to produce the correct result without bothering to have a representation any more pure than a list of digits.

I guess the analogy there is that a 74ls283 never really has a number either and just manipulates a series of logic levels.

Filligree19 hours ago

So the question is, why do we tokenise it in such a way that it makes everything harder?

medi8r14 hours ago

There is no encoding that makes everything easier. You trade off maths for general intelligence. Now we are at a point where the LLM can just choose to use a normal calculator anyway!

akoboldfrying16 hours ago

The tokenisation needs to be general -- it needs to be able to encode any possible input. It should also be at least moderately efficient across the distribution of inputs that it will tend to see. Existing tokenisation schemes explicitly target this.

[deleted]21 hours ago

eps13 hours ago

Got excited that someone made one of those 120v humming coil beauties do the numbers... alas, it's just yet another NN project :-/

consp13 hours ago

My reaction was the same, as I expected there to be a fancy analogue computer built mainly with transformers.

bmc75059 hours ago

Fast matrix multiplication would be a more useful benchmark: https://fmm.univ-lille.fr/

ks2048op21 hours ago

So, hand-coded weights can do it with 36 params and 311 for trained weights - did anyone try the former architecture, but starting with random weights and learning?

alexlitz20 hours ago

For one, the specific 36-parameter version is impossible without float64, so you might guess the corollary: it is not exactly amenable to being found by gradient descent. I think the interesting question is how you can structure transformers, and neural nets in general, so that they can both very parsimoniously represent things like this and remain amenable to learning by gradient descent.
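To illustrate the float64 point (a stdlib-only sketch, round-tripping through IEEE 754 single precision via `struct`): 10-digit integers already exceed float32's 24-bit significand, whose spacing near 10^10 is 1024.

```python
import struct

def to_f32(x):
    """Round-trip a Python float through IEEE 754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

print(int(to_f32(9_999_999_999.0)))  # 10000000000: float32 is already off by one
print(int(float(9_999_999_999)))     # 9999999999: float64 is still exact here
```

So any hand-set weights that rely on resolving individual units in 10-digit operands need the double-precision significand to survive inference.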

bitwize19 hours ago

"Minsky, why did you close your eyes?"

"So that the room will be empty."

vicchenai17 hours ago

The leaderboard framing is clever - forces apples-to-apples comparison on a task where you can verify correctness deterministically. What I find interesting is the architectural constraints: 10-digit addition requires maintaining ~20 digits of working state across the carry chain, which is fundamentally sequential. The fact that tiny transformers can learn this at all (rather than just memorizing) suggests they are finding some form of positional carry representation in their attention patterns. Would love to see ablations on how attention head count vs depth trade off here - my intuition is that carry propagation needs depth more than width.

prng202116 hours ago

How is anyone predicting timelines for AGI when these systems can’t do basic addition of 2 arbitrary numbers with 100% accuracy?

famouswaffles16 hours ago

Can you do basic addition of 2 arbitrary numbers with 100% accuracy (no tools) ? No you can't. You will make mistakes for a sufficiently large N even with pen and paper, and a very small N without. Are you no longer generally intelligent ?

undersuit5 hours ago

Somewhere along the line a $10000 GPU has to be equivalent to using a finger to do arithmetic in the dust.

sambapa13 hours ago

No, but I can develop methods to eventually do it.

wmf16 hours ago

LLMs should use tool calling (which is 100% reliable) instead of doing math internally. But in general it would be nice to be able to teach a process and have the AI execute it deterministically. In some sense, reliability between 99% and 100% is the worst because you still can't trust the output but the verification feels like wasted effort. Maybe code gen and execution will get us there.

base769 hours ago

This is the exact problem CognOS was built to solve.

99% reliable means you still can't remove the human from the loop, because you never know which 1% you're in. The only way to actually trust output is to attach a verifiable confidence signal to each response, not just hope the aggregate accuracy holds.

We built a local gateway that wraps every LLM output with a trust envelope: decision trace, risk score, and an explicit PASS/REFINE/ESCALATE/BLOCK classification. The point isn't to make LLMs more accurate; it's to make their uncertainty legible so the human knows when to step in.

Open source if you want to look at the architecture: github.com/base76-research-lab/operational-cognos

base7610 hours ago

"reliability between 99% and 100% is the worst because you still can't trust the output"

cantalopes16 hours ago

Interesting. Is this just a fun competition, or would it also have some practical applications, I wonder?

anthkan hour ago

Now, under T3X and Lisp under 64k:

https://t3x.org/lisp64k/index.html

xyzsparetimexyz9 hours ago

The ai slop pixel art...

nextlevelwizard16 hours ago

Here: eval()

You are welcome

1over13720 hours ago

Now wrap it all in an Electron app!

cantalopes16 hours ago

And npm install llm-is-odd to divide and conquer!

computersuck19 hours ago

this is the dumbest fking thing to do math with

akoboldfrying15 hours ago

Yes, but it's interesting that you can teach it to do arithmetic, don't you think? Most things can't be taught to do arithmetic, making this "transformer" thing slightly magical. And so then it seems interesting to investigate exactly how much magic is needed to achieve this.

gaigalas15 hours ago

In theory, there is an infinite number of systems with simple emergent rules that can eventually be taught arithmetic.

qayxc12 hours ago

> Most things can't be taught to do arithmetic, making this "transformer" thing slightly magical.

Yep, for people who don't know the fundamentals (i.e. maths). To people who don't know the universal approximation theorem, this may seem like "magic", but it's just as much magic as making a dark room bright by flipping a light switch.

MarcLore20 hours ago

The gap between 36 hand-coded params and 311 trained params is fascinating and honestly underappreciated. It mirrors something we see repeatedly in ML: gradient descent finds solutions in a fundamentally different region of parameter space than a human engineer would design.

When you hand-code the weights, you're essentially implementing a known algorithm (carry-propagation) directly into the network topology. But trained networks often discover distributed representations that spread the computation across more parameters in ways that are harder to interpret but more robust to input distribution shifts.

I'd be curious whether the 311-param trained model generalizes better to bases other than 10, or to addition with different digit counts than it was trained on. In my experience, the 'messier' learned solutions sometimes capture more structural regularity than the clean engineered ones, precisely because they aren't locked into a single algorithmic strategy.

aichen_dev12 hours ago

[dead]

MarcLore18 hours ago

[dead]

jaunt763220 hours ago

[dead]

utopiah15 hours ago

"it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." https://en.wikipedia.org/wiki/Law_of_the_instrument

Seems the house of cards just isn't high enough. /s

munro20 hours ago

>=99% accuracy wtf?!?

I was initially excited until I saw that, because a 100% threshold would reveal some sort of required minimum capacity; the further revelation that this was all vibe coded with no arXiv paper makes me feel I should save my attn for another article.

Sophira19 hours ago

I get that this is technically interesting, for certain, but the sheer amount of energy (and associated global warming risk) spent to do something with >=99% accuracy that we've been able to do easily for decades with guaranteed 100% accuracy seems to me wasteful in the extreme.

Lerc19 hours ago

What would be an acceptable amount of energy to spend on something that someone has done in a different manner before? Would you rather we stick with all of the currently known ways to do things?

Does this boil down to a condemnation of all scientific endeavours if they use resources?

Would it change things if the people who did it enjoyed themselves? Would they have spent more energy playing a first person shooter to get the same degree of enjoyment?

How do you make the calculation of the worth of a human endeavour? Perhaps the greater question is why are you making a calculation of the worth of a human endeavour.

mcdeltat19 hours ago

Ok, I don't really care either way, but to play devil's advocate: what exactly is this specific challenge of adding numbers with a transformer model demonstrating/advancing? The pushback from people, albeit a little aggressive, does have a grain of truth. We're demonstrating that a model running on hardware with preexisting addition instructions can add numbers? I mean, yeah, you can do it with arbitrarily few parameters because you don't need a machine learning model at all. Not exactly groundbreaking, so I reckon the debate is fair.

Now if you said this proof of addition opens up some other interesting avenue of research, sure.

Lerc19 hours ago

>what exactly is this specific challenge of adding numbers with a transformer model demonstrating/advancing?

Well for starters, it puts the lie to the argument that a transformer can only output examples it has seen before. Performing the calculation on examples that haven't been seen demonstrates generalisation of the principles and not regurgitation.

While this misconception persists in a large number of people, counterexamples can always serve a useful purpose.

mcdeltat16 hours ago

Are people usually claiming that it strictly cannot produce any output it hasn't seen before? I wouldn't agree, I mean clearly they are generating some form of new content. My argument would be that while they can learn to some extent, the power of their generalisation is still tragically weak, particularly in some domains.

qsera18 hours ago

>it puts the lie to the argument

But it does not, right? You can either show it something, or modify the parameters in a way that resembles the result of showing it something.

You can claim that the model didn't see the thing, but that would mean nothing, because you are achieving the same effect with parameter tweaks, indirectly.

Lerc14 hours ago

That's a counterargument to a different thing.

Iteratively measuring loss is a way to reconstruct values. That's trivial to show for a single value: if 5 gives you a loss of 2 and 9 gives you a loss of 2, then you know the missing value is 7.

A model with enough parameters can memorise the training set in a similar manner. Technically the model hasn't seen that data by direct input either, but the mechanism provides the means to determine what the data was. In that respect it is reasonable to say the model has seen the data.

Performing well on examples not in the training set is doing something else.

Any attempt to characterise that as having been seen before negates any distinction between taking in data and reasoning about that data.
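Lerc's single-value example, made concrete (assuming an absolute-error loss, which is one possible reading):

```python
def loss(guess, secret=7):
    """Hypothetical absolute-error loss around a hidden value."""
    return abs(guess - secret)

# Two probes with equal loss bracket the hidden value at their midpoint.
assert loss(5) == 2 and loss(9) == 2
print((5 + 9) / 2)  # 7.0: the "unseen" value, recovered from loss queries alone
```

Memorisation via gradients scales this same leakage up to whole training sets; generalising to inputs never probed is the distinct phenomenon.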

qsera14 hours ago

Yea, because "seeing" is also tweaking the parameters. Which this example is doing manually.

So I don't understand how anyone can make the claim that the model has not seen it. Because the internal transformation is similar.

Lerc13 hours ago

You are going to have to be more specific, because that reads like nonsense.

By what mechanism do you propose the model observed the test set?

qsera13 hours ago

>By what mechanism do you propose the model observed the test set..

By explicitly setting the model parameters.

What happens when a model is trained? We tweak the model parameters by some feedback.

In both cases, you affect the model parameters; only the method is different. So both are equivalent to the model "observing the test set".

Lerc9 hours ago

I still do not see any causal link from the test set. When was this observed, how, and by whom?

Are you trying to say that the person who entered the parameters had access to the test set? I find it more likely that they encoded the generalising rule than observed every instance of its use.

qsera9 hours ago

>I find it more likely that they encoded the generalising rule..

Look, I am saying that during training the model ends up "learning" the generalising rule from training data, but here it was explicitly entered into it, without any training.

userbinator17 hours ago

Because it's fun. Life is meant to be enjoyed.

Those who worry about an imaginary risk and live their lives in constant fear have turned into nothing more than machines enslaved by propaganda.

coolsunglasses19 hours ago

>Hacker News

not any more, eh?

mapontosevenths17 hours ago

> the sheer amount of energy and associated global warming risk

I think that's one very good reason to make them more efficient, and that's part of the point of contests like this one.

tovej13 hours ago

Making things more efficient in a market setting just means they're used more. Which means we eventually use more resources with efficient methods, not less.

nradov19 hours ago

Wait until you see the quantum computer that it takes to factor the integer 15.

thereisnospork19 hours ago

You need to recalibrate your sense of scale if you think that this is a geologically relevant usage of energy.

hn-front (c) 2024 voximity