Built a ~9M param LLM from scratch to understand how they actually work. Vanilla transformer, 60K synthetic conversations, ~130 lines of PyTorch. Trains in 5 min on a free Colab T4. The fish thinks the meaning of life is food.
Fork it and swap the personality for your own character.
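For the curious, a back-of-envelope parameter budget (illustrative sizes; the real config in the repo may differ):
```
# rough param count for a tiny decoder-only transformer
# (illustrative sizes -- the real GuppyLM config may differ)
V, d, L, ctx = 8192, 256, 6, 256
embed = V * d + ctx * d                   # token + learned positional embeddings
per_block = 4 * d * d + 2 * d * (4 * d)   # attention q/k/v/o + ReLU FFN
lm_head = V * d                           # untied output projection
total = embed + L * per_block + lm_head
print(f"{total / 1e6:.1f}M")              # ~9.0M with these sizes
```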
thomasfl5 hours ago
Is there some documentation for this? The code is probably the simplest (Not So) Large Language Model implementation possible, but it is not straightforward to understand for developers unfamiliar with multi-head attention, ReLU FFNs, LayerNorm, and learned positional embeddings.
This project shares similarities with Minix. Minix is still used at universities as an educational tool for teaching operating system design, and it is the operating system that taught Linus Torvalds how to design (monolithic) operating systems. Similarly, having students add capabilities to GuppyLM is a good way to learn LLM design.
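For readers who want a map of those pieces, here's a minimal sketch of one such decoder block with learned positional embeddings (illustrative names and sizes, not GuppyLM's actual code):
```
import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    """One pre-norm decoder block: LayerNorm -> multi-head attention -> LayerNorm -> ReLU FFN."""
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model),
                                 nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, x):
        T = x.size(1)
        # causal mask: True entries are blocked, so tokens can't attend to the future
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + a                      # residual around attention
        x = x + self.ffn(self.ln2(x))  # residual around FFN
        return x

# learned positional embeddings: positions are looked up exactly like tokens
tok_emb = nn.Embedding(8192, 128)  # vocab_size x d_model
pos_emb = nn.Embedding(256, 128)   # max_context x d_model
ids = torch.randint(0, 8192, (1, 16))
x = tok_emb(ids) + pos_emb(torch.arange(16))
print(TinyBlock()(x).shape)  # torch.Size([1, 16, 128])
```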
achenatx5 hours ago
give the code to an LLM and have a discussion about it.
dominotw3 hours ago
does this work? there is no more need for writing high level docs?
arcanemachiner3 hours ago
> does this work?
Absolutely. If you loaded this into an agentic coding harness with a decent model, I can practically guarantee it would be able to help you figure out what's going on.
> there is no more need for writing high level docs?
Absolutely not. That would be like exploring a cave without a flashlight, knowing that you could just feel your way around in the dark instead.
Code is not always self-documenting, and can often tell you how it was written, but not why.
stronglikedanan hour ago
> If you loaded this into an agentic coding harness with a decent model, I can practically guarantee it would be able to help you figure out what's going on.
My non-coder but technically savvy boss has been doing this lately to great success. It's nice because I spend less time on it since the model has taken my place for the most part.
sigmoid103 hours ago
There are so many blogs and tutorials about this stuff in particular, I wouldn't worry about it being outside the training data distribution for modern LLMs. If you have a scarce topic in some obscure language I'd be more careful when learning from LLMs.
bigmadshoe3 hours ago
LLMs can tell you what the code does but not why the developer chose to do it that way.
Also, large codebases are harder to understand. But projects like these are simple to discuss with an LLM.
stronglikedanan hour ago
> LLMs can tell you what the code does but not why the developer chose to do it that way.
Do LLMs not take comments into consideration? (Serious question - I'm just getting into this stuff)
dr_hooo4 minutes ago
They do (it's just text), if they are there...
fg1379 hours ago
How does this compare to Andrej Karpathy's microgpt (https://karpathy.github.io/2026/02/12/microgpt/) or minGPT (https://github.com/karpathy/minGPT)?
armanifiedop7 hours ago
I haven't compared it with anything yet. Thanks for the suggestion; I'll look into these.
BrokenCogs6 hours ago
Who cares how it compares, it's not a product it's a cool project
tantalor6 hours ago
Even cool projects can learn from others. Maybe they missed something that could benefit the project, or made some interesting technical choice that gives a different result.
For the readers/learners, it's useful to understand the differences so we know what details matter, and which are just stylistic choices.
This isn't art; it's science & engineering.
BrokenCogs5 hours ago
But it isn't the OP's responsibility to compare their project to all other projects. The GP could themselves perform the comparison and post their thoughts instead of asking an open ended question.
philipallstar5 hours ago
> it isn't the OP's responsibility to compare their project to all other projects
No one, including the GP, said it was.
fg1374 hours ago
It isn't, but such information will be immensely helpful to anyone who wants to learn from such projects. Some tutorials are objectively better than others, and learners can benefit from such information.
tantalor5 hours ago
100% agree, I didn't mean to imply that OP is responsible for that, or that the (lack of) comparison detracts in any way from the work.
stronglikedanan hour ago
> Who cares how it compares
Well, the person who asked the question, for one. I'm sure they're not the only one. Best not to assume why people are asking though, so you can save time by not writing irrelevant comments.
layer83 hours ago
Microgpt isn’t a product either. Are you saying that differences between cool projects aren’t worth thinking and conversing about?
totetsu10 hours ago
https://bbycroft.net/llm has 3d Visualization of tiny example LLM layers that do a very good job at showing what is going on (https://news.ycombinator.com/item?id=38505211)
armanifiedop7 hours ago
Pretty neat! I'll definitely take a deeper look into this.
maverickxone8 hours ago
This has little to do with the post, but I have to say your project is indeed pretty cool! Consider adding some more UI?
skramzy6 hours ago
Neat!
ordinarily16 hours ago
It's genuinely a great introduction to LLMs. I built my own a while ago based on Milton's Paradise Lost: https://www.wvrk.org/works/milton
algoth17 hours ago
This really makes me wonder whether it would be feasible to make an LLM trained exclusively on toki pona (https://en.wikipedia.org/wiki/Toki_Pona)
MarkusQ4 hours ago
There isn't enough training data though, is there? The "secret sauce" of LLMs is the vast amount of training data available + the compute to process it all.
algoth12 hours ago
I think you could probably feed a copy of a toki pona grammar book to a big model, and have it produce ‘infinite’ training data
eden-u426 minutes ago
There are not enough samples in that book to generate new "infinite" data.
neurworlds6 hours ago
Cool project. I'm working on something where multiple LLM agents share a world and interact with each other autonomously. One thing that surprised me is how much the "world" matters — same model, same prompt, but put it in a system with resource constraints, other agents, and persistent memory, the behavior changes dramatically. Made me realize we spend too much time optimizing the model and not enough thinking about the environment it operates in.
mudkipdev13 hours ago
This is probably a consequence of the training data being fully lowercase:
You> hello
Guppy> hi. did you bring micro pellets.
You> HELLO
Guppy> i don't know what it means but it's mine.
functional_dev12 hours ago
Great find! It appears uppercase tokens are completely unknown to the tokenizer.
But the character still comes through in the responses :)
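Easy to verify with the tokenizer file the repo ships, assuming it's the HuggingFace tokenizers JSON that inference.py loads:
```
from tokenizers import Tokenizer

tok = Tokenizer.from_file("data/tokenizer.json")
print(tok.encode("hello").tokens)  # familiar lowercase pieces
print(tok.encode("HELLO").tokens)  # likely shatters into unknown/fallback pieces
```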
hackerman7000011 hours ago
Finally an LLM that's honest about its world model. "The meaning of life is food" is arguably less wrong than what you get from models 10,000x larger
amelius8 hours ago
It's arguably even better than the most famous answer to that question.
siva77 hours ago
which is?
amelius6 hours ago
42.
zkmon7 hours ago
Meaning/goal of life is to reproduce. Food (and everything else) is only a means to it. Reproduction is the only root goal given by nature to any life form. All resources and qualities are provided only to help mating.
tantalor5 hours ago
Reproduction is the goal of genes.
Food (not dying) is the goal of organisms.
philote5 hours ago
I'd argue neither genes nor life have a "goal". They are what they are because they've been successful at continuing their existence. Would you say a rock's goal is not to get broken?
tantalor4 hours ago
Only because genes/organisms can make choices (changes to their programming, or decisions) to optimize their path towards their goal.
A rock is maybe not a good counterexample, but a crystal is, because it can grow over time. So in some sense, it tries not to break. However, a crystal cannot make any choices; its behavior is locked into the chemistry it starts with.
amelius7 hours ago
Then why are reproductive rates so low in western countries?
https://en.wikipedia.org/wiki/List_of_countries_by_total_fer...
darepublic6 hours ago
The western lifestyle is an evolutionary dead end?
vixen996 hours ago
It seems that some in the West want it to be and are working hard to make it so.
hca4 hours ago
No, evolution has encoded lust. It has not yet allowed for condoms. But it's a process.
BiraIgnacioan hour ago
Nice work and thanks for sharing it!
Now, I ask, have LLMs been demystified for you? :D
I am still impressed by how much (for the most part) trivial statistics and a lot of compute can do.
bharat1010an hour ago
This is such a smart way to demystify LLMs. I really like that GuppyLM makes the whole pipeline feel approachable. Great work!
rpdaiml4 hours ago
This is a nice idea. A tiny implementation can be way more useful for learning than yet another wrapper around a big model, especially if it keeps the training loop and inference path small enough to read end to end.
zwaps13 hours ago
I like the idea; it's just that the examples are reproduced from the training data set.
How does it handle unknown queries?
armanifiedop7 hours ago
It mostly doesn't, at 9M it has very limited capacity. The whole idea of this project is to demonstrate how Language Models work.
bblb10 hours ago
Would it be possible to train an LLM only through chat messages, without any other data or input?
If Guppy doesn't know regular expressions yet, could I teach them to it just by conversation? It's a fish, so it probably wouldn't understand much of my blabbing, but it would be interesting to give it a try.
Or is there some hard architectural limit in current LLMs, such that training needs to be done offline and with a fairly large training set?
tatrions5 hours ago
What happens during chat is just inference. The weights are frozen, and it generates tokens conditioned on the conversation so far. No learning happens. The "learning during conversation" effect you see in bigger models is in-context learning: the model uses the full chat history in its attention window, but nothing persists after the session ends.
At 9M params you won't get meaningful in-context learning either. That capability seems to emerge around 1B+ params, and it has more of a phase-transition quality than a smooth ramp. So unfortunately no, you can't teach Guppy regex by talking to it.
There is some research on "test-time training" where weights actually get updated during inference, but it's expensive and niche. Backprop costs roughly 3x the compute of a forward pass, so doing it live in a conversation is impractical for anything but tiny models.
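To make "the weights are frozen" concrete, here is a toy contrast between the two modes (a stand-in model, not GuppyLM's actual code):
```
import torch
import torch.nn as nn
import torch.nn.functional as F

# toy stand-ins; any causal LM slots in here
model = nn.Sequential(nn.Embedding(100, 32), nn.Linear(32, 100))
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
input_ids = torch.randint(0, 100, (1, 8))

# inference (what "chat" is): forward pass only, nothing is learned
model.eval()
with torch.no_grad():
    logits = model(input_ids)
    next_id = logits[:, -1].argmax(-1)

# training: forward + backward + update (backward is ~2x a forward, hence ~3x total)
model.train()
logits = model(input_ids)
loss = F.cross_entropy(logits[:, :-1].reshape(-1, 100),  # predictions at positions 0..T-2
                       input_ids[:, 1:].reshape(-1))     # targets: same sequence shifted by one
loss.backward()        # the step that never runs during a chat session
optimizer.step()
optimizer.zero_grad()
```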
roetlich8 hours ago
What does "done offline" mean? Otherwise you are limited by context window.
Leomuck4 hours ago
Wow, that is such a cool idea! And honestly very much needed. LLMs seem to be this black box nobody understands, so I love every effort to make the whole thing less mysterious. I will definitely have a look at dabbling with this, though mine may not be a goldfish LLM :)
CaseFlatline4 hours ago
I was trying to find how the synthetic data was created (looking through the repo) but didn't find it. Maybe I'm missing it. Would love to see the prompts and process for that aspect of the training data generation!
vunderba3 hours ago
It's here:
https://github.com/arman-bd/guppylm/blob/main/guppylm/genera...
Uses a sort of mad-libs templatized style to generate all the permutations.
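A toy sketch of that template idea (my own illustration, not the repo's actual generator):
```
import itertools

greetings = ["hello", "hi", "hey"]
foods = ["micro pellets", "flakes", "bloodworms"]
template = "you> {g}\nguppy> hi. did you bring {f}."

# every slot value crossed with every other; a few lists multiply out quickly
dataset = [template.format(g=g, f=f) for g, f in itertools.product(greetings, foods)]
print(len(dataset))   # 9 here; more templates and slots scale this toward 60K
print(dataset[0])
```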
EmilioOldenziel2 hours ago
Building it yourself is always the best test of whether you really understand how something works.
jzer0cool4 hours ago
Does this work by just training once with next-token prediction? I want to understand better how it creates fluent sentences, if anyone can provide insights.
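For what it's worth, yes: the standard recipe is a single next-token objective, and fluency comes from autoregressive sampling, feeding each predicted token back in. A toy sketch of that loop (random weights here, so the output is gibberish, but a trained checkpoint uses the same loop):
```
import torch

vocab_size = 100
# stand-in model: a real checkpoint would replace this; the loop is identical
model = torch.nn.Sequential(torch.nn.Embedding(vocab_size, 32),
                            torch.nn.Linear(32, vocab_size))
ids = [1, 2, 3]  # pretend-encoded prompt

with torch.no_grad():
    for _ in range(20):
        logits = model(torch.tensor([ids]))[0, -1]      # scores for the next token
        probs = torch.softmax(logits / 0.8, dim=-1)     # temperature-scaled sampling
        ids.append(torch.multinomial(probs, 1).item())  # feed the pick back in
print(ids)
```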
cbdevidal16 hours ago
> you're my favorite big shape. my mouth are happy when you're here.
Laughed loudly :-D
vunderba15 hours ago
This is a direct output from the synthetic training data though - wonder if there is a bit of overfitting going on or it’s just a natural limitation of a much smaller model.
jbethune3 hours ago
Forked. Very cool. I appreciate the simplicity and documentation.
Duplicake8 hours ago
I love this! Seems like it can't understand uppercase letters though
armanifiedop7 hours ago
Uppercase letters were intentionally ignored.
kaipereira14 hours ago
This is so cool! I'd love to see a write-up on how you made it and what you referenced, because designing neural networks always feels like a maze ;)
brcmthrowaway13 hours ago
Why are there so many dead comments from new accounts?
59nadir10 hours ago
Because despite what HN users seem to think, HN is a LLM-infested hellscape to the same degree as Reddit, if not more.
wiseowise9 hours ago
You’re absolutely right! HN isn’t just an LLM-infested hellscape, it’s a completely new paradigm of machine-assisted, chocolate-infused information generation.
toyg8 hours ago
Just let me know which type of information goo you'd like me to generate, and I'll tailor the perfect one for you.
siva77 hours ago
But what should we do? The parent company isn't transparent about communicating the seriousness of this problem.
loveparade12 hours ago
It really seems it's mostly AI comments on this one. Maybe this topic is attractive to all the bots.
AlecSchueler13 hours ago
They all seem to be slop comments.
nobodyandproud5 hours ago
Thanks. Tinkering is how I learn and this is what I’ve been looking for.
ankitsanghi14 hours ago
Love it! I think it's important to understand how the tools we use (and will only increasingly use) work under the hood.
drincanngao8 hours ago
I was going to suggest implementing RoPE to fix the context limit, but realized that would make it anatomically incorrect.
armanifiedop7 hours ago
I intentionally removed all optimizations to keep it vanilla.
winter_blue3 hours ago
This is amazing work. Thank you.
fawabc8 hours ago
how did you generate the synthetic data?
amelius8 hours ago
> A 9M model can't conditionally follow instructions
How many parameters would you need for that?
armanifiedop7 hours ago
My initial idea was to train a navigation decision model with 25M parameters for a Raspberry Pi, which, in testing, was getting about 60% of tool calls correct. IMO, it seems like around 20M parameters would be a good size for following some narrow & basic language instructions.
amelius6 hours ago
Ok. This makes me wonder about a broader question. Is there a scientific approach showing a pyramid of cognitive functions, and how many parameters are (minimally) required for each layer in this pyramid?
SilentM6817 hours ago
Would have been funny if it were called "DORY", given the fish's memory-recall issues vs LLMs' similar recall issues :)
armanifiedop7 hours ago
OMG! Why didn't I think of this first :P
kubrador13 hours ago
how does it handle longer context, or does it start hallucinating after like 2 sentences? curious what the ceiling is with only 9M params
ben8bit10 hours ago
This is really great! I've been wanting to do something similar for a while.
gnarlouse15 hours ago
I... wow, you made an LLM that can actually tell jokes?
murkt12 hours ago
With 9M params it just repeats the joke from a training dataset.
NyxVox15 hours ago
Hm, I can actually try the training on my GPU. One of the things I want to try next. Maybe a bit more complex than a fish :)
rahen5 hours ago
I don't mean to be 'that guy', but after a quick review, this really feels like low-effort AI slop to me.
There is nothing wrong using AI tools to write code, but nothing here seems to have taken more than a generic 'write me a small LLM in PyTorch' prompt, or any specific human understanding.
The bar for what constitutes an engineering feat on HN seems to have shifted significantly.
rclkrtrzckr12 hours ago
I could fork it and create TrumpLM. Not a big leap, I suppose.
search_facility10 hours ago
probably even 8M params would be too many :)
danparsonson9 hours ago
As long as you use the best parameters then it doesn't matter
wiseowise9 hours ago
Grab her by the pointer.
ananandreas9 hours ago
Great and simple way to bridge the gap between LLMs and users coming into the field!
cpldcpu11 hours ago
Love it! Great idea for the dataset.
monksy12 hours ago
Is this a reference from the Bobiverse?
Vektorceraptor5 hours ago
Haha, funny name :)
nullbyte80817 hours ago
Adorable! Maybe a personality that speaks in emojis?
armanifiedop7 hours ago
OMG! You just gave me the next idea..
AndrewKemendo17 hours ago
I love these kinds of educational implementations.
I want to really praise the (unintentional?) nod to Nagel: by limiting capabilities to the representation of a fish, the user is immediately able to understand the constraints. It can only talk like a fish because it's very simple.
Especially compared to public models, that's a really simple correspondence to grok intuitively (small LLM → only as verbose as a fish, larger LLM → more verbose), so kudos to the author for making it simple and fun.
dvt17 hours ago
> the user is immediately able to understand the constraints
Nagel's point was quite literally the opposite[1] of this, though. We can't understand what it must "be like to be a bat" because their mental model is so fundamentally different than ours. So using all the human language tokens in the world can't get us to truly understand what it's like to be a bat, or a guppy, or whatever. In fact, Nagel's point is arguably even stronger: there's no possible mental mapping between the experience of a bat and the experience of a human.
[1] https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf
Terr_13 hours ago
IMO we're a step before that: We don't even have a real fish involved, we have a character that is fictionally a fish.
In LLM-discussions, obviously-fictional characters can be useful for this, like if someone builds a "Chat with Count Dracula" app. To truly believe that a typical "AI" is some entity that "wants to be helpful" is just as mistaken as believing the same architecture creates an entity that "feels the dark thirst for the blood of the living."
Or, in this case, that it really enjoys food-pellets.
andoando12 hours ago
I'd highly disagree with that. We're all living in the same shared universe, and underlying every intelligence must be precisely an understanding of events happening in this space-time.
vixen996 hours ago
What does 'precisely' mean? Everyone has the same understanding of events - a precise one?
andoando3 hours ago
No, I am saying the basis of intelligence must be shared, not that we have the exact same mental model.
I might, for example, say a human entered a building; a bat might instead think "some big block with two sticks moved through a hole". But both are experiencing a shared physical observation, and there is some mapping between the two.
It's like when people say that if there are aliens, they would find the same mathematical constants that we do.
AndrewKemendo16 hours ago
Different argument
I’m not going to argue other than to say that you need to view the point from a third party perspective evaluating “fish” vs “more verbose thing,” such that the composition is the determinant of the complexity of interaction (which has a unique qualia per nagel)
Hence why it’s a “unintentional nod” not an instantiation
hughw6 hours ago
Tiny LLM is an oxymoron, just sayin.
uxcolumbo6 hours ago
How about: LLMs are on a spectrum and this one is on the tiny side?
armanifiedop6 hours ago
True, but most people would ignore an "LM" if it weren't an "LLM".
gdzie-jest-sol10 hours ago
* How creating dataset? I download it but it is compressed in binary format.
* How training. In cloud or in my own dev
* How creating a gguf
freetonik10 hours ago
You sound like Guppy. Nice touch.
gdzie-jest-sol10 hours ago
```
uv run python -m guppylm chat
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/user/gupik/guppylm/guppylm/__main__.py", line 48, in <module>
    main()
  File "/home/user/gupik/guppylm/guppylm/__main__.py", line 29, in main
    engine = GuppyInference("checkpoints/best_model.pt", "data/tokenizer.json")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/user/gupik/guppylm/guppylm/inference.py", line 17, in __init__
    self.tokenizer = Tokenizer.from_file(tokenizer_path)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception: No such file or directory (os error 2)
```
gdzie-jest-sol9 hours ago
Maybe resume training (load the best or final checkpoint) and train again:
```
# after the config/device setup
checkpoint_path = "checkpoints/best_model.pt"
ckpt = torch.load(checkpoint_path, map_location=device, weights_only=False)

model = GuppyLM(mc).to(device)
if "model_state_dict" in ckpt:
    model.load_state_dict(ckpt["model_state_dict"])
else:
    model.load_state_dict(ckpt)

start_step = ckpt.get("step", 0)
print(f"Encore {start_step}")
```
oyebenny14 hours ago
Neat!
Elengal9 hours ago
Cool
aditya730301114 hours ago
Did something similar last year https://github.com/aditya699/EduMOE
secabeen15 hours ago
Training data is here:
https://huggingface.co/datasets/arman-bd/guppylm-60k-generic
areys7 hours ago
[flagged]
moonu6 hours ago
This comment seems ai-written
jiusanzhou12 hours ago
[flagged]
ngruhn12 hours ago
comment smells AI written
3m12 hours ago
AI account
dinkumthinkum14 hours ago
I think this is a nice project because it is end-to-end and serves its goal well. Good job! It's a good example of how someone might do something similar for a specific purpose. There are other visualizers that explain different aspects of LLMs, but this is a good applied example.
martmulx15 hours ago
How much training data did you end up needing for the fish personality to feel coherent? Curious what the minimum viable dataset looks like for something like this.
Propelloni10 hours ago
Great work! I still think that [1] does a better job of helping us understand how GPT and LLM work, but yours is funnier.
Then, some criticism. I probably don't get it, but I think the HN headline does your project a disservice; the project does not demystify anything (see below), and it diverges from your project's claim, too. Furthermore, I think you claim too much on your GitHub: "This project exists to show that training your own language model is not magic.", and then you just post a few command-line statements to execute. Yeah, running a mail server is not magic either, just apt-get install exim4. So, code. Looking at train_guppylm.ipynb and, oh, it's PyTorch again. I'm better off reading [2] if I'm looking into that (I know, it is a published book, but I maintain my point).
So, in short, it does not help the initiated or the uninitiated: for the initiated it needs more detail to be useful, for the uninitiated more context to be understood. Still a fun project, even if oversold.
[1] https://spreadsheets-are-all-you-need.ai/ [2] https://github.com/rasbt/LLMs-from-scratch
jadengeller8 hours ago
this comment seems to be astroturfing to sell a course