Hacker News

GavinAnderegg
Nvidia DGX Spark: great hardware, early days for the ecosystem simonwillison.net

simonw7 hours ago

It's notable how much easier it is to get things working now that the embargo has lifted and other projects have shared their integrations.

I'm running VLLM on it now and it was as simple as:

  docker run --gpus all -it --rm \
    --ipc=host --ulimit memlock=-1 \
    --ulimit stack=67108864 \
    nvcr.io/nvidia/vllm:25.09-py3
(That recipe is from https://catalog.ngc.nvidia.com/orgs/nvidia/containers/vllm?v... )

And then in the Docker container:

  vllm serve &
  vllm chat
The default model it loads is Qwen/Qwen3-0.6B, which is tiny and fast to load.
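
To try something bigger than the default, you can pass a Hugging Face model ID to vllm serve and then attach the chat client the same way (a sketch; the model name is just an example, and a 120B model will take most of the 128GB plus a long download):

  # illustrative: serve a specific model instead of the default Qwen3-0.6B
  vllm serve openai/gpt-oss-120b &
  # vllm chat talks to the local OpenAI-compatible endpoint by default
  vllm chat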

3abiton2 hours ago

As someone who got in early on the Ryzen AI 395+, is there any added value for the DGX Spark besides having CUDA (compared to ROCm/Vulkan)? I feel Nvidia fumbled the marketing, either making it sound like an inference miracle or a dev toolkit (then again, not enough to differentiate it from the superior AGX Thor).

I am curious where you find its main value, how it would fit within your tooling, and its use cases compared to other hardware.

From the inference benchmarks I've seen, an M3 Ultra always comes out on top.

justinclift2 hours ago

It's very likely worth trying ComfyUI on it too: https://github.com/comfyanonymous/ComfyUI

Installation instructions: https://github.com/comfyanonymous/ComfyUI#nvidia

It's a webUI that'll let you try a bunch of different, super powerful things, including easily doing image and video generation in lots of different ways.

It was really useful to me when benchmarking stuff at work on various gear, i.e. L4 vs A40 vs H100 vs 5th-gen EPYC CPUs, etc.
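
For reference, the install from the linked instructions boils down to roughly this (a sketch; on the Spark's aarch64/CUDA stack the PyTorch wheel index may differ from the x86 default shown here):

  git clone https://github.com/comfyanonymous/ComfyUI
  cd ComfyUI
  pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu128
  pip install -r requirements.txt
  python main.py   # serves the web UI on http://127.0.0.1:8188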

behnamoh5 hours ago

I'm curious, does its architecture support all CUDA features out of the box or is it limited compared to 5090/6000 Blackwell?

rcarmo5 hours ago

About what I expected. The Jetson series had the same issues, mostly, at a smaller scale: Deviate from the anointed versions of YOLO, and nothing runs without a lot of hacking. Being beholden to CUDA is both a blessing and a curse, but what I really fear is how long it will take for this to become an unsupported golden brick.

Also, the other reviews I’ve seen point out that inference speed is slower than a 5090 (or on par with a 4090 with some tailwind), so the big difference here (other than core counts) is the large chunk of “unified” memory. Still seems like a tricky investment in an age where a Mac will outlive everything else you care to put on a desk and AMD has semi-viable APUs with equivalent memory architectures (even if ROCm is… well… not all there yet).

Curious to compare this with cloud-based GPU costs, or (if you really want on-prem and fully private) the returns from a more conventional rig.

3abiton2 hours ago

> Also, the other reviews I’ve seen point out that inference speed is slower than a 5090 (or on par with a 4090 with some tailwind), so the big difference here (other than core counts) is the large chunk of “unified” memory.

It's not comparable to 4090 inference speed. It's significantly slower, because of the lack of MXFP4 models out there. Even compared to the Ryzen AI 395 (ROCm/Vulkan) on gpt-oss-120B MXFP4, the DGX somehow manages to lose on token generation (prompt processing is faster, though).

> Still seems like a tricky investment in an age where a Mac will outlive everything else you care to put on a desk and AMD has semi-viable APUs with equivalent memory architectures (even if ROCm is… well… not all there yet).

ROCm (v7) for APUs has come a long way actually, mostly thanks to community effort; it's quite competitive and more mature. It's still not totally user-friendly, but it doesn't break between updates (I know the bar is low, but that was the status a year ago). So in comparison, the Strix Halo offers lots of value for your money if you need a cheap compact inference box.

Haven't tested fine-tuning/training yet, but in theory it's supported. Not to forget that the APU is extremely performant for "normal" tasks (Threadripper level) compared to the CPU of the DGX Spark.

rcarmo17 minutes ago

Yeah, good point on the FP4. I'm seeing people complain about INT8 as well, which ought to "just work", but everyone who has one (not many) is wary of wandering off the happy path.

EnPissant4 hours ago

This thing is dramatically slower than a 4090 both in prefill and decode. And I do mean DRAMATICALLY.

I have no immediate numbers for prefill, but the memory bandwidth is ~4x greater on a 4090 which will lead to ~4x faster decode.

KeplerBoy4 hours ago

This is kind of an embedded 5070 with a massive amount of relatively slow memory, don't expect miracles.

TiredOfLife2 hours ago

No need to put unified in scare quotes.

physicsguy3 hours ago

A few years ago I worked on an ARM supercomputer, as well as a POWER9 one. x86 is so assumed for anything other than trivial things that it is painful.

What I found to be a good solution was using Spack: https://spack.io/ That allows you to download/build the full toolchain of stuff you need for whatever architecture you are on - all dependencies, compilers (GCC), CUDA, MPI, compiled Python packages, etc. - and if you need to add a new recipe for something, it is really easy.
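
As a rough illustration of the workflow (a sketch; the package names and versions are just examples):

  git clone https://github.com/spack/spack.git
  . spack/share/spack/setup-env.sh
  spack install gcc@13        # builds a toolchain for whatever arch you're on
  spack install py-numpy      # same for compiled Python packages, MPI, CUDA recipes, ...
  spack load py-numpy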

For the fellow Brits - you can tell this was named by Americans!!!

donw3 hours ago

Who says we don’t have a sense of humor.

physicsguy2 hours ago

It's that it's an offensive term here, not a funny one.

MomsAVoxell2 hours ago

Aussie checking in, smoko's over, get back to work...

smallnamespace4 hours ago

A 14-inch M4 Max MacBook Pro with 128GB of RAM has a list price of $4,700 or so and twice the memory bandwidth.

For inference decode, the bandwidth is the main limitation, so if running LLMs is your use case you should probably get a Mac instead.
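
As a back-of-the-envelope illustration (the numbers are assumptions: a dense ~70B model at 8-bit weights, ~273 GB/s for the Spark per Nvidia's spec, ~546 GB/s for the M4 Max), batch-1 decode is roughly bounded by bandwidth divided by the bytes read per token:

  # rough upper bound: tokens/sec ≈ memory bandwidth / model weight size
  python3 -c "print(273e9 / 70e9)"   # ~3.9 tok/s (Spark)
  python3 -c "print(546e9 / 70e9)"   # ~7.8 tok/s (M4 Max)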

dialogbox4 hours ago

Why a MacBook Pro? Isn't the Mac Studio a lot cheaper and the right one to compare with the DGX Spark?

AndroTux3 hours ago

I think the idea is that instead of spending an additional $4000 on external hardware, you can just buy one thing (your main work machine) and call it a day. Also, the Mac Studio isn’t that much cheaper at that price point.

dialogboxan hour ago

> Also, the Mac Studio isn’t that much cheaper at that price point.

At list price, it's 1,000 USD cheaper: 3,699 vs 4,699. I know a lot can be relative, but that's a lot for me for sure.

MomsAVoxell2 hours ago

Being able to leave the thing at home and access it anywhere is a feature, not a bug.

The Mac Studio is a more appropriate comparison. There is not yet a DGX laptop, though.

ChocolateGod3 hours ago

People may prefer running in environments that match their target production environment, so macOS is out of the question.

deviation2 hours ago

It's a hoop to jump through, but I'd recommend checking out Apple's container/containerization services which help accomplish just that.

https://github.com/apple/containerization/

bradfa2 hours ago

The Ubuntu that NVIDIA ship is not stock. They seem to be moving towards using stock Ubuntu but it’s not there yet.

Running some other distro on this device is likely to require quite some effort.

two_handfuls7 hours ago

I wonder how this compares financially with renting something on the cloud.

speedgoose2 hours ago

Depending on the kind of project and data agreements, it’s sometimes much easier to run computations on premise than in the cloud. Even though the cloud is somewhat more secure.

I, for example, have some healthcare research projects with personally identifiable data, and in these times it's simpler for the users to trust my company than my company plus some overseas company and its associated government.

killingtime743 hours ago

For me as an employee in Australia, I could buy this and write it off on my tax as a work expense myself. To rent, it would be much more cumbersome, involving the company. That's 45% off (our top marginal tax rate).

Grimburger3 hours ago

> That's 45% off (our top marginal tax rate)

Can people please not listen to this terrible advice that gets repeated so often, especially in Australian IT circles, somehow by young naive folks.

You really need to talk to your accountant here.

It's probably under 25% in deduction at double the median wage, and a little over that at triple, and that's *only* if you are using the device entirely for work, as in it sits in an office and nowhere else. If you are using it personally, you open yourself up to all sorts of drama if and when the ATO ever decides to audit you for making a $6k AUD claim for a computing device beyond what you normally use to do your job.

killingtime742 hours ago

My work is entirely from home. I happen to also be an ex-lawyer, quite familiar with deduction rules, and not altogether young. Can you explain why you think it's not 45% off? I've deducted thousands in AI-related work expenses over the years.

Even if what you are saying is correct, the discount is just lower. This is compared to no discount on compute/GPU rental unless your company purchases it.

lukeh2 hours ago

Also, you can only deduct it in a single financial year if you are eligible for the Instant asset write-off program.

I'm sure I'll get downvoted for this, but this common misunderstanding about tax deductions does remind me of a certain Seinfeld episode :)

Kramer: It's just a write off for them

Jerry: How is it a write off?

Kramer: They just write it off

Jerry: Write it off what?

Kramer: Jerry all these big companies they write off everything

Jerry: You don't even know what a write off is

Kramer: Do you?

Jerry: No. I don't

Kramer: But they do and they are the ones writing it off

killingtime742 hours ago

Correct. You can deduct over multiple years, so you do get the same amount back.

_joel3 hours ago

How would this fare alongside the new Ryzen chips, out of interest? From memory it seems to be getting the same amount of tok/s, but would the Ryzen box be more useful for other computing, not just AI?

justincormack2 hours ago

From reading reviews (I don't have either yet): the Nvidia actually has unified memory; on the AMD you have to specify the allocation split. Nvidia may have some form of GPU partitioning so you can run multiple smaller models, but no one has got it working yet. The Ryzen is very different from the pro GPUs and the software support won't benefit from work done there, while the Nvidia is the same. You can play games on the Ryzen.

blurbleblurblean hour ago

But on the Ryzen the VRAM allocation can be entirely dynamic. I saw a review showing excellent full GPU usage during inference with the BIOS VRAM allocation set to the minimum level, using a very large model. So it's not as simple as you describe (I used to think this was the case too).

Beyond that, it seems like the 395 in practice smashes the DGX Spark in inference speeds for most models. I haven't seen NVFP4 comparisons yet and would be very interested to.

KeplerBoy3 hours ago

If you need x86 or Windows for anything, it's not even a question.

_joel2 hours ago

Sure, Macs are also ARM-based; my question was about general performance, not architecture.

jhcuii5 hours ago

Despite the large video memory capacity, its video memory bandwidth is very low. I guess the model's decode speed will be very slow. Of course, this design is very well suited for the inference needs of MoE models.

reenorap6 hours ago

Is 128 GB of unified memory enough? I've found that the smaller models are great as a toy but useless for anything realistic. Will 128 GB hold any model that you can do actual work with or query for answers that returns useful information?

simonw5 hours ago

There are several 70B+ models that are genuinely useful these days.

I'm looking forward to GLM 4.6 Air - I expect that one should be pretty excellent, based on experiments with a quantized version of its predecessor on my Mac. https://simonwillison.net/2025/Jul/29/space-invaders/

magicalhippo4 hours ago

Depending on your use case, I've been quite impressed with GPT-OSS 20B with high reasoning effort.

The 120B model is better but too slow since I only have 16GB VRAM. That model runs decently[1] on the Spark.

[1]: https://news.ycombinator.com/item?id=45576737

cocogoatmain4 hours ago

128GB of unified memory is enough for pretty good models, but honestly, for the price of this it is better to just go with a few 3090s or a Mac, due to the memory bandwidth limitations of this card.

behnamoh5 hours ago

The question is: how does the prompt processing time on this compare to the M3 Ultra? That one sucks at RAG even though it can technically handle huge models and long contexts...

zozbot2343 hours ago

Prompt processing time on Apple Silicon might benefit from making use of the NPU/Apple Neural Engine. (Note, the NPU is bad if you're limited by memory bandwidth, but prompt processing is compute limited.) Just needs someone to do the work.

fnordpiglet7 hours ago

This seems to be missing the obligatory pelican on a bicycle.

simonw6 hours ago

Here's one I made with it - I didn't include it in the blog post because I had so many experiments running that I lost track of which model I'd used to create it! https://tools.simonwillison.net/svg-render#%3Csvg%20width%3D...

fnordpiglet6 hours ago

That seat post looks fairly unpleasant.

justincliftan hour ago

Looks like the poor pelican was crucified?!?! ;)

amelius2 hours ago

> x86 architecture for the rest of the machine.

Can anyone explain this? Does this machine have multiple CPU architectures?

catwell2 hours ago

No, he means most NVIDIA-related software assumes an x86 CPU, whereas this one is ARM.

amelius2 hours ago

> most NVIDIA-related software assumes a x86 CPU

Is that true? nvidia Jetson is quite mature now, and runs on ARM.

saagarjha5 hours ago

I’m kind of surprised at the issues everyone is having with the arm64 hardware. PyTorch has been building official wheels for several months already as people get on GH200s. Has the rest of the ecosystem not kept up?

fisian6 hours ago

The reported 119GB vs. the 128GB according to spec is because 128 GB (in decimal units of 10^9 bytes) equals roughly 119 GiB (in binary units of 2^30 bytes).

wmf6 hours ago

That can't be right because RAM has always been reported in binary units. Only storage and networking use lame decimal units.

simonw6 hours ago

Looks like Claude reported it based on this:

  ● Bash(free -h)
    ⎿                 total        used        free      shared  buff/cache   available
       Mem:           119Gi       7.5Gi       100Gi        17Mi        12Gi       112Gi
       Swap:             0B          0B          0B
That 119Gi is indeed gibibytes, and 119 GiB is roughly 128 GB.
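
For reference, the conversion:

  python3 -c "print(128e9 / 2**30)"   # ≈119.2: 128 decimal GB expressed in binary GiB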

simonw6 hours ago

Ugh, that one gets me every time!

matt32107 hours ago

> even in a Docker container

I should be allowed to do stupid things when I want. Give me an override!

simonw6 hours ago

A couple of people have since tipped me off that this works around that:

  IS_SANDBOX=0 claude --dangerously-skip-permissions
You can run that as root and Claude won't complain.

fulafelan hour ago

If you want to run stuff in Docker as root, better enable UID remapping, since otherwise the in-container uid 0 is still the real uid 0, which weakens the security boundary of the containerization.

(Because Docker doesn't do this by default, best practice is to create a non-root user in your Dockerfile and run as that.)
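
A minimal sketch of enabling that remapping on the daemon side (assumes Linux with systemd and no existing /etc/docker/daemon.json to merge into):

  # map in-container root to an unprivileged subordinate UID range on the host
  echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
  sudo systemctl restart docker
  docker run --rm alpine id   # uid 0 inside the container, but a remapped high UID on the host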

B1FF_PSUVM2 hours ago

I went looking for pictures (in the photo the box looked like a tray to me ...) and found an interesting piece by Canonical touting their Ubuntu base for the OS: https://canonical.com/blog/nvidia-dgx-spark-ubuntu-base

P.S. exploded view from the horse's mouth: https://www.nvidia.com/pt-br/products/workstations/dgx-spark...

monster_truck6 hours ago

Whole thing feels like a paper launch being held up by people looking for blog traffic missing the point.

I'd be pissed if I paid this much for hardware and the performance was this lacklustre while also being kneecapped for training

_ache_an hour ago

What do you mean by "kneecapped for training"? Isn't 128GB of "VRAM" enough for small-model training that a current GC can't do?

Obviously, even with ConnectX, it's only 240Gi of VRAM, so no big models can be trained.

rubatuga5 hours ago

When the networking is 25GB/s and the memory bandwidth is 210GB/s you know something is seriously wrong.

TiredOfLife2 hours ago

It has ConnectX at 200Gb/s.

rgovostes6 hours ago

I'm hopeful this makes Nvidia take aarch64 seriously for Jetson development. For the past several years Mac-based developers have had to run the flashing tools in unsupported ways, in virtual machines with strange QEMU options.

ur-whale8 hours ago

As is usual for NVidia: great hardware, an effing nightmare figuring out how to set up the pile of crap they call software.

kanwisher7 hours ago

If you think their software is bad, try using any other vendor; it makes Nvidia look amazing. Apple is the only one close.

enoch20907 hours ago

Although a bit off the GPU topic, I think Apple's Rosetta is the smoothest binary transition I've ever used.

stefan_4 hours ago

Keep in mind this is part of Nvidia's embedded offerings. So you will get one release of software ever, and that's gonna be pretty much it for the lifetime of the product.

triwats3 hours ago

It's fascinating to me, managing some of these systems, just how bad the software is.

Management becomes layers upon layers of bash scripts which end up calling a final batch script written by Mellanox.

They'll catch up soon, but you end up having to stay strictly on their release cycle always.

Lots of effort.

p_l8 hours ago

And yet CUDA has looked way better than ATi/AMD offerings in the same area despite ATi/AMD technically being first to deliver GPGPU (the major difference is that CUDA arrived a year later but supported everything from the G80 up and nicely evolved, while AMD managed to have multiple platforms with patchy support and total rewrites in between).

cylemons5 hours ago

What was the AMD GPGPU called?

p_l4 hours ago

Which one? We first had the flurry of third-party work (Brook, Lib Sh, etc.), then we had AMD "Close to Metal", which was IIRC based on Brook and soon followed by dedicated cards; a year later we got CUDA (also derived partially from Brook!) and the AMD Stream SDK, later renamed APP SDK. Then we got the HIP/HSA stuff, which unfortunately has its biggest legacy (outside of the availability of HIP as a way to target ROCm and CUDA simultaneously) in the low-level details of how GPU game programming evolved on the Xbox 360 / PS4 / Xbox One / PS5. Somewhere in between, AMD seemed to bet on OpenCL, yet today with the latest drivers from both AMD and nVidia I get more OpenCL features on nVidia.

And of course there's the part about totally random and inconsistent support outside of the few dedicated cards, which is honestly why CUDA is the de facto standard everyone measures against - you could run CUDA applications, if slowly, even on the lowest-end Nvidia cards, like the Quadro NVS series (think lowest-end GeForce chip, but often paired with more displays and different support focused on business users that didn't need fast 3D). And you still can, generally, run core CUDA code within the last few generations on everything from the smallest mobile chip to the biggest datacenter behemoth.

pjmlp6 hours ago

Try to use Intel or AMD stuff instead.

jasonjmcghee7 hours ago

Except the performance people are seeing is way below expectations. It seems to be slower than an M4. Which kind of defeats the purpose. It was advertised as 1 Petaflop on your desk.

But maybe this will change? Software issues somehow?

It also runs CUDA, which is useful

airstrike7 hours ago

It fits bigger models and you can stack them.

Plus, apparently some of the early benchmarks were made with Ollama and should be disregarded.

rvzan hour ago

TL;DR: Just buy an RTX 5090.

The DGX Spark is completely overpriced for its performance compared to a single RTX 5090.

_ache_an hour ago

I get the idea. But couldn't 128GB of "VRAM" (unified, actually) train a useful ViT model?

I don't think the 5090 could do that with only 32GB of VRAM, could it?
