Hacker News

teamchong
Show HN: Prompt-to-Excalidraw demo with Gemma 4 E2B in the browser (3.1GB) teamchong.github.io

locusofself11 days ago

I've had excellent luck using Claude Code to generate "mermaid diagrams" for me, and convert them to .png format headlessly using mmdc/puppeteer. Really helped me out with an engineering proposal I just finished. In past years I would have fumbled around with Visio forever and the result would have been worse.

ascorbic10 days ago

I find Mermaid diagram rendering is quite ugly by default. I've gotten much better-looking results by asking it to just generate SVGs. As a bonus it can do animations too. e.g. see slide 3 here, which I first tried with Mermaid and then switched to SVG when I couldn't get the rendering to look good: https://talks.mk.gg/2026/atmosphereconf/

d4rkp4ttern10 days ago

I now often have CC make technical/architecture diagrams with tikz. The results look much better than mermaid, but it still requires multiple iterations to fix bad arrows, bad layouts, etc.

Diagrams are still far from solved. We need a good non-gameable diagrams benchmark.

0xchamin10 days ago

I use Claude Code and Gemini, with LLM-as-a-judge between the two: each reviews the other's result, and then a final mermaid diagram is generated.

system_operator11 days ago

I do the same.

I just ask Claude code for mermaid to visualize any topic I'm discussing.

lxgr10 days ago

In a pinch, Claude is also quite good at ASCII art in my experience!

alfiedotwtf10 days ago

Same here!

thawab10 days ago

You can copy-paste the mermaid into excalidraw to visualize it.

halJordan11 days ago

And yet people here insist that the height of an LLM is not being able to draw a pelican or count letters in a word.

wongarsu11 days ago

Well, mermaid diagrams are "just" a list of nodes and their relations. You'd expect any llm capable of generating code to be able to generate them
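[For illustration: a Mermaid flowchart source really is just a list of nodes and relations, e.g.]

```mermaid
flowchart LR
    user[User] --> app[App Server]
    app --> auth[Auth Server]
    auth --> api[API]
```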

Writing an SVG of a pelican riding a bicycle without being able to see the result and iterate based on that is incredibly difficult by comparison. I'm sure some humans could do it, but I sure can't. That's part of the beauty of it: it's very difficult to do but a toddler could judge the results

Writing an SVG of a diagram by hand would be somewhere in the middle ground, or, depending on the number of nodes, might even be harder than the pelican. Laying out diagrams can get tricky very quickly. It's also one of Mermaid's biggest weaknesses.

christkv11 days ago

Just wait until they go public. Claude 5.4 fails the pelican test, stock sheds 20% of its value on the news. Wall Street wonders if the lack of a front wheel means there is something seriously wrong with the stock's underlying value.

chermi9 days ago

Could you share your setup + workflow? How do you get it to not give shit layouts?

userbinator11 days ago

Am I correct in interpreting the title to mean that visiting the page will result in a 3.1GB download?

avrionov11 days ago

Yes. I tested it. It downloaded 3.1GB

userbinator10 days ago

Given what the page does, that's not a surprising amount, but consider it would take over 5 days to download on a 56k dialup connection, and even at 100Mbps that's over 4 minutes.
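[The arithmetic checks out; a rough sketch, ignoring protocol overhead:]

```python
# Back-of-envelope check of the download times above for a 3.1 GB payload.
SIZE_BITS = 3.1e9 * 8  # 3.1 GB expressed in bits

def download_time(bits_per_second: float) -> float:
    """Transfer time in seconds, ignoring protocol overhead."""
    return SIZE_BITS / bits_per_second

dialup = download_time(56_000)    # 56k modem
broadband = download_time(100e6)  # 100 Mbps link

print(f"56k dialup: {dialup / 86_400:.1f} days")    # ~5.1 days
print(f"100 Mbps:   {broadband / 60:.1f} minutes")  # ~4.1 minutes
```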

logicallee11 days ago

I love this idea. Unfortunately, it says "Unsupported browser/GPU" for me. This is Desktop Chrome version 147 (page says it requires 134+) and I have a 1060 card with 6 GB of RAM on this specific device, so it should fit. I have more than 4 GB of free RAM as well.

teamchongop11 days ago

sorry it’s not working for you. I built this as a personal project for self-learning, but I plan to take a look at this issue next weekend. you can check out a video demo of it here: https://github.com/user-attachments/assets/71ae6e5c-a5ec-4d0...

logicallee11 days ago

That's amazing. Very good result. Thanks for sharing.

OsamaJaber11 days ago

Small models in the browser are a different optimization problem than small models on a server. On server you chase throughput so you batch. In browser you're stuck at batch size 1, which means kernel launch overhead and memory bandwidth dominate, not FLOPs
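[A rough sense of why memory bandwidth dominates at batch size 1: each generated token has to stream essentially all of the weights from memory once, so decode speed is bounded by bandwidth divided by model size. The numbers below are illustrative assumptions, not measurements of this demo:]

```python
# Roofline-style upper bound on decode speed at batch size 1, where every
# token requires reading all model weights from memory once.

def tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Upper bound on decode speed when memory bandwidth is the bottleneck."""
    return bandwidth_bytes_per_s / model_bytes

# Assumed: ~3.1 GB of quantized weights, ~100 GB/s effective GPU bandwidth.
est = tokens_per_second(3.1e9, 100e9)
print(f"~{est:.0f} tokens/s upper bound")  # ~32 tokens/s
```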

walthamstow11 days ago

The Gemma models really are amazing. I was on a flight a few days ago and used E2B to do some basic research on the place I was going to, running the model locally on my Pixel 10 Pro. It gave me basically the same as Gemini or ChatGPT would do when I landed

tredre311 days ago

> It gave me basically the same as Gemini or ChatGPT would do

I'm very surprised by this, in my tests E2B has very limited general knowledge.

walthamstow11 days ago

It was genuinely outstanding for a tiny local model. I'm in Marrakech and it gave 3 separate one-day itineraries that contained most of the same stuff I got from Gemini when I landed. I followed up to ask specifically about souks (markets/bazaars) and it listed the main ones and what types of products you can get from each.

avadodin11 days ago

I haven't tried E2B, but E4B isn't particularly better than the old Gemma 3 4B model (which was already very good at multilingual tasks and decent at others). The voice recognition is a nice addition, though.

maxlegav10 days ago

Do you see a real difference with AI agents working together?

alwyn11 days ago

May I ask what setup you used to run it on your phone and if it's satisfactory (it sounds like it)?

walthamstow11 days ago

Edge Gallery, which is a bloody terrible name for an app by the way

billyp-rva11 days ago

> "OAuth 2.0 authorization code flow with PKCE as a sequence diagram — user, browser, app server, auth server, API"

If you do a Google image search for "OAuth 2.0 PKCE sequence diagram" you get good results also. Maybe if you ask for something more esoteric this becomes valuable? Of course, that also makes hallucinations more likely.

Sathwickp11 days ago

Just tried it out. Must say the speed at which it generates these diagrams is amazing. Is this open source by any chance? Would love to take a look at the code and understand how it works.

Mystery-Machine10 days ago

Please never do this again. It's insane that the 3.1GB download kicks off as soon as you open the page.

rahimnathwani11 days ago

How does this part work?

"The LLM outputs compact code (~50 tokens) instead of raw Excalidraw JSON (~5,000 tokens)."

I see on the left that the LLM is outputting some instructions to add nodes and edges to the diagram. But what is interpreting those commands and turning them into an Excalidraw file?

evrydayhustling11 days ago

had the same question! looks like it's another project called Drawmode[1] from the same group...

[1] https://github.com/teamchong/drawmode
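[A toy sketch of how such an interpreter could work, and why ~50 tokens of commands can expand into far larger Excalidraw-style JSON. The command syntax here is invented for illustration and is not Drawmode's actual format:]

```python
import json

# Toy interpreter for a compact node/edge command language, expanded into
# Excalidraw-style elements. Layout is a naive vertical stack.

def interpret(commands: str) -> list[dict]:
    elements, positions = [], {}
    for line in commands.strip().splitlines():
        if not line.strip():
            continue
        op, *args = line.split()
        if op == "node":  # e.g. "node user User"
            nid, label = args[0], " ".join(args[1:])
            x, y = 100, 100 + 120 * len(positions)
            positions[nid] = (x, y)
            elements.append({"type": "rectangle", "id": nid,
                             "x": x, "y": y, "width": 160, "height": 60,
                             "label": label})
        elif op == "edge":  # e.g. "edge user app"
            a, b = args
            (x1, y1), (x2, y2) = positions[a], positions[b]
            elements.append({"type": "arrow", "x": x1, "y": y1,
                             "points": [[0, 0], [x2 - x1, y2 - y1]]})
    return elements

dsl = """
node user User
node app App Server
edge user app
"""
print(json.dumps(interpret(dsl), indent=2))
```

Three short command lines expand into three full element objects; a real interpreter would add proper layout, ids, and styling on top.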

0xchamin10 days ago

Very interesting. Prompted it to "generate System Design like ChatGPT end to end". Took about 3-5 mins for the model to download. It generated a reasonably good system design.

classified10 days ago

Bloat will expand to fill all available space.

wesleynepo11 days ago

Really interesting. I wish I could understand what's going on under the hood better, but I guess I don't have all the background needed.

xnx11 days ago

It seems like Gemma should replace Gemini Nano as the AI built into Chrome.

agent3711 days ago

Very cool. Did you happen to try other models like Qwen, and was there a difference compared to Gemma?

hhthrowaway123011 days ago

so multiple of these browser wasm demos make me re-download the models. can someone make a cdn for it or some sort of uberfast downloader? just throw some claude credits against it, ty!

logicallee11 days ago

>can someone make a cdn for it or some sort of uberfast downloader? just throw some claude credits against it ty!

Okay, I did so. I realize that in your later followup comment you might want something different (like for Chrome itself to cache these downloads or something) but for now I made what you asked for, here you go:

https://stateofutopia.com/experiments/ephemeralcdn/

It's an ultrafast temporary CDN for one-off experiments like this. Should be lightning fast. By including the script, you can include any file this CDN serves.

hhthrowaway123010 days ago

haha this is awesome! this is fantastic.

wereHamster11 days ago

CDN wouldn't help much. These days browsers partition caches by origin, so if two different tools (running on different domains) fetch the same model from the CDN, the browser would download it twice.

cjbgkagh11 days ago

Did not know that. That sounds extraordinarily wasteful; there must be a file-hash-based method that would allow sharing such files between domains.

faangguyindia11 days ago

It offers security.

Just like you wouldn't use the same table for all users in a multi-tenant application.

cjbgkagh11 days ago

If the file is hashed strongly enough, then it can be no other file. I can see how information on previously visited sites could be leaked, and how that could be bad, but I think whitelisting by end users could still allow some files to be shared, e.g. the code for React.
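[Browsers already have a content-hash scheme like this: Subresource Integrity (SRI), where a `sha384-…` digest identifies the exact bytes regardless of which domain serves them. Note, though, that SRI only verifies content; it does not currently change cache partitioning. A sketch of computing such an integrity value:]

```python
import base64
import hashlib

# Compute an SRI-style integrity string for a payload: the SHA-384 digest,
# base64-encoded, with the algorithm name prefixed. Identical bytes produce
# an identical value no matter which origin serves them.

def sri_sha384(data: bytes) -> str:
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode()

weights = b"...model bytes..."  # stand-in for the real model file
print(sri_sha384(weights))
```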

stavros10 days ago

The fact that you don't see it doesn't mean it doesn't exist. I make up a unique file, put it on site X and ask your browser to cache it. I try to load the same file on site Y and time how long it takes. If it's instant, site Y knows you visited site X.

Tadaaa! Tracking.

cjbgkagh10 days ago

I said I ‘can see’; I already understand that. Hence the whitelisting of files that are not unique / not created for this purpose.

stavros10 days ago

Ah, my bad, sorry.

thornewolf11 days ago

it's a security feature. otherwise my malicious site could check for cdn.sensitivephotoswebsite.com and blackmail you if it was cached already

cjbgkagh11 days ago

It would be nice if there was a whitelist option for non-sensitive content. I stopped using cdn links due to the overhead of the extra domain lookups but I did think that my self hosted content would be cached across domains.

onion2k10 days ago

> It would be nice if there was a whitelist option for non-sensitive content.

There's no such thing as non-sensitive content from a CDN though. Scripts are obviously sensitive, styles can be used to exfiltrate data through background-url directives, and anything like images has no benefit being cached across sites.

Fonts might be one exception, but I bet those are exploitable somehow.

pyrolistical11 days ago

Seems like a solvable problem: per-origin cache control. But actually, just load the data locally.

embedding-shape11 days ago

Adding a file input where users can load model files into the frontend directly from their file manager would probably work as a stop-gap for the ones who want something quick that lets them manage their own "cache" of model files.

logicallee11 days ago

Would you be okay with it using your upload bandwidth at the same time? Then a p2p model would work. (This is potentially a good match for p2p because edge connections are very fast; they don't have to go across the whole Internet.) You could be downloading from uploaders in your region. Let me know if you would be okay with uploading at the same time; if so, this model works and I can build it for people to use this way.

Rekindle809011 days ago

What? It downloaded for me at 2Gbps.

hhthrowaway123011 days ago

Ah, let me clarify: many of these in-browser demos make me download certain models even if I already have them. It would be great if there was a way to avoid re-downloading them across demos, so that I just have a cache, or an in-browser model manager. Hope this makes sense.

Or indeed use some sort of Hugging Face model downloader (if that exists with XET).

varun_ch11 days ago

I think this would sit best at the browser level. I’m not sure there’s a nice way for multiple websites to share a cache like that.

hhthrowaway123011 days ago

also maybe a good usecase to finally have P2P web torrents :)

hhthrowaway123011 days ago

Yeah that's great but I'm in a cafe outside burning my phone data. ty!

busssard11 days ago

no firefox? sad :(

COOLmanYT11 days ago

no firefox support?

teamchongop11 days ago

firefox has webgpu already, but the subgroups extension isn't in yet. every matmul / softmax kernel here leans on subgroupShuffleXor for reductions, that's the blocker. same reason mlc webllm and friends don't run on firefox either. once mozilla ships it this should work
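[For the curious, the butterfly reduction that subgroupShuffleXor enables can be emulated on the CPU. In each round, lane i reads the partial sum from lane i ^ mask; after log2(n) rounds every lane holds the full sum. A sketch, assuming a power-of-two lane count (real WGSL kernels do this per subgroup lane in parallel):]

```python
# CPU emulation of an XOR-shuffle butterfly reduction across "subgroup" lanes.

def butterfly_sum(lanes: list[float]) -> list[float]:
    n = len(lanes)          # must be a power of two
    vals = list(lanes)
    mask = 1
    while mask < n:
        # Each lane adds the value held by its XOR partner this round.
        vals = [vals[i] + vals[i ^ mask] for i in range(n)]
        mask <<= 1
    return vals

print(butterfly_sum([1, 2, 3, 4]))  # every lane ends with the total, 10
```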

moralestapia10 days ago

I, for one, welcome this new trend of Gigabyte-sized websites.

hn-front (c) 2024 voximity