Hacker News

indigodaddy
DeepSeek V4: almost on the frontier, a fraction of the price (simonwillison.net)

wg09 hours ago

DeepSeek V4 Pro feels like Claude Opus 4.6 in its personality, but here's what I found out about costs:

I cut DeepSeek V4 loose on a decent-sized TypeScript codebase and asked it to focus on a single endpoint and go through it in depth, layer by layer (API, DTOs, service, database models), form a complete picture of the types involved and introduced, and ensure no ad hoc types were being introduced.

It produced a very brief but to-the-point summary of the types being introduced and which of them were redundant, etc.

Then I asked it to simplify it all.

It obviously went through lots of files in both prompts, but the total cost? Just $0.09 on the Pro version.

On Claude Opus, I think (from past experience, before the price hikes) these two prompts alone would have easily burned somewhere between $9 and $13, without much benefit.

Note - I didn't use OpenRouter; I used the DeepSeek API directly, because OpenRouter itself was being rate limited by DeepSeek.

ithkuil2 hours ago

Even taking into account the fact that they are billing at a 75% discount, it's still considerably cheaper.

ameliusan hour ago

Aren't they all billing at a discount?

stavrosan hour ago

Anthropic's and OpenAI's prices seem to include a fairly decent margin, from the very fourth-hand info I have.

stavrosan hour ago

How did you use it? OpenRouter, or provider directly?

baldai2 hours ago

The only similarity it has to Opus 4.6 is the 4 in the name. I do not understand these dishonest comparisons. OSS models are cool, cheap, and promising for the future -- but why are we pretending they are better than they are?

gmercan hour ago

Speak for yourself. I found switching from Opus 4.7 completely painless and, in fact, given the reliability of Anthropic's API, it was less friction despite the slower response times. Zero issues on a large monorepo.

deaux4 hours ago

I'm surprised that people here don't care at all about these models openly training on your data, especially if you use them straight from the model developer. Whereas things like "GitHub now automatically opts everyone into using their code for model training" get hundreds of justifiably angry comments, I never see this brought up anymore on posts like these talking about using Chinese models through OpenRouter. This might be explained by "well, they're different people", but the difference is too stark for that to be the whole explanation.

gmerc2 hours ago

Because they give it away for free and offer APIs at very acceptable rates. Not that hard to figure out, Robin Hood stealing our data tax back comes to mind.

deaux2 hours ago

GitHub is free.

notrealyme123an hour ago

User publishes to GitHub => Copilot trains on GitHub data => MS sells Copilot => user works for Microsoft (in the sense of giving their labour for MS to make money)

User publishes to GitHub => DeepSeek trains on GitHub data => DeepSeek gives the model away for free => user did not work for DeepSeek (in the sense of giving their labour for DeepSeek to make money)

arikrahmanan hour ago

Exactly, it's intuitively different.

pheggs4 hours ago

I am personally okay with helping them as long as they publish the models and don't keep them closed. And I don't trust the settings where providers say they won't train on it.

dbeleyan hour ago

The cool thing about open-weights models is that you are free to use alternative providers that won't phone home to the original model creators.

I see 6 alternative providers listed on Openrouter for DeepSeek V4 Pro for example.

duskdozeran hour ago

What do you mean specifically? Data passed through OpenRouter? Or that they too indiscriminately ingest data all over the web? If the former, I assume it's just that anyone still using them just doesn't care where the data comes from. If the latter, well, it seems like every day there's some news on some new model from somewhere, and it takes dedication to complain every time. There's also the factor that I believe DeepSeek is more open with the model, while others keep it entirely proprietary, which feels fairer and (personally) is also less offensive.

edg500026 minutes ago

My policy is that I don't allow agents to access all code. Some of it is shielded behind bind mounts. Maybe this is a pathetic, artisanal (or ego-driven) reaction of mine to the inevitable. I allow them to work on about 90% of the code (most codebases fully), with some code being considered too valuable to expose to the vendor. When data is involved, LLMs only get to see anonymized data.

This cute policy of mine won't affect anything, though. The more we use the models, the more the models will replace this kind of work. Centralisation of power is inevitable; in Medieval Europe, state and church ruled. In modern times, but before the internet, it was probably state and banks. Maybe with ongoing digitization (bank offices disappearing) making banks less costly to operate, combined with bank bailouts, governments will fully nationalize banks, or at least banks will consolidate.

Then the AI companies will consolidate with the internet information and communication companies (Google/Meta for the US, and Alibaba/Tencent for China). Maybe we'll end up with a few de-facto governmental megacorps that rule in tandem and close cooperation with the formal government, who might handle mostly infra, utilities and the army. The megacorp would control narrative more and take more of a paternal role (educating and protecting the citizens, normally handled by formal governments).

Does this make sense?

prism563 hours ago

If the data is open source on GitHub, then in my opinion it should be fair game.

notrealyme123an hour ago

Things being public should not be enough. Just because someone leaked your medical information to the public via a data breach shouldn't make it fair game. There should be some rules.

prism56an hour ago

I feel that's a false dichotomy. The code on GitHub is freely available for people to read and learn from; leaked medical data isn't.

ozgrakkurt2 hours ago

IMO this is unfair for GPL or similarly licensed code.

Seems ok for MIT like licensed code though

edg500024 minutes ago

I think AI will create an open source dark age. Gradually, we'll see a lot less good new open source code, and a gradual shift back to the proprietary world, similar to the 1950-1990 period.

ForHackernewsan hour ago

It's totally fair to use GPL code, it just means all the models built by Anthropic, OpenAI, etc. using GPL-licensed source are themselves bound by the GPL. Plus, any works created downstream using those AI tools.

We're on the verge of a golden age of software as soon as someone finds a court with courage.

duskdozeran hour ago

Ah, you have much more faith in the legal system than I do. It's nice to dream, though.

antiloper3 hours ago

AWS Bedrock has DeepSeek models running on their infrastructure. That should be enough to prevent training on user data (there's a markup compared to DeepSeek's pricing though).

And unfortunately AWS doesn't have prepaid billing, so you can't just give the internet access to your API key without getting FinDDoS'd.

ThreatSystems21 minutes ago

If anyone is looking for a solution in this space, fire me an email; I have a partner who's focused closely on that problem set!

deaux3 hours ago

The latest one available for serverless inference looks to be from 8 months ago (DeepSeek V3.1), which is an eternity and far behind.

raincolean hour ago

Two factors. The first is anti-Americanism (or at least anti-American-capitalism).

But the more important one is the social contract. GitHub came long before the LLM era. Its branding is built around being the storage for open source projects, and many users want it to stay away from the AI hype. You wouldn't expect LLM providers to stay away from the AI hype (duh), so it's less of an issue for them.

stavrosan hour ago

If they give me the resulting model in the end, they can train on my data all they want. Hell, I'll send them more of it.

rsanek26 minutes ago

I'm not sure I'd call it "almost on the frontier," but I do think that v4 Pro is the most usable coding model I've seen out of China. I've used it via Ollama Cloud (coding) and OpenRouter (data processing). Feels Sonnet-level to me -- solid at implementation when given a specification, but falls a good bit short of Opus 4.7 max thinking when planning out larger changes or when given open-ended prompts.

zozbot2346 minutes ago

Keep in mind that DeepSeek has a max thinking mode of its own in the API.

edg500019 minutes ago

Has anybody used V4 hard, for the most challenging tasks (agentically, locally)? It's so hard to compare without putting serious time into it, like spending a year with the model daily.

sylware3 minutes ago

If I want to run coding prompts with the biggest DeepSeek model on CPU, what is the order of magnitude of time I will have to wait: hours, days?
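A rough way to bound that, assuming CPU token generation is memory-bandwidth bound (each generated token streams the model's active parameters from RAM once). The bandwidth, active-parameter count, and quantization below are illustrative assumptions, not measured numbers:

```python
# Back-of-envelope decode speed for a large MoE model on CPU, assuming
# generation is memory-bandwidth bound: every generated token streams the
# active parameters from RAM once. All concrete numbers are assumptions.
def tokens_per_second(bandwidth_gb_s: float,
                      active_params_billion: float,
                      bytes_per_param: float) -> float:
    bytes_per_token = active_params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# e.g. dual-channel DDR5 (~80 GB/s), ~37B active params, 8-bit quantization
print(f"{tokens_per_second(80, 37, 1):.1f} tokens/s")  # roughly 2 tokens/s
```

On an estimate like that, a single response is minutes rather than hours, but a long agentic session with many turns adds up fast, and prefill of a large context is compute-bound and can take much longer on CPU.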

holysantamaria4 hours ago

From the pricing page of deepseek:

(3) The deepseek-v4-pro model is currently offered at a 75% discount, extended until 2026/05/31 15:59 UTC.

Was this taken into account when reviewing the model?

gmerc2 hours ago

Obviously everyone subsidizes for user acquisition - after all, people need to be coaxed into testing your model; Claude Code subscriptions come to mind.

DeepSeek Pro is 65/86% cheaper (input/output tokens) at unsubsidized prices vs Pro, and 91/97% cheaper with the current subsidies.

Flash vs Sonnet 4.6 is 95/98%

cyber_kinetist3 hours ago

Yeah, even the Chinese open models have the problem that their inference costs aren't that cheap. The only way out of the AI bubble collapse is simply more efficient hardware at lower costs and less infrastructure setup downtime.

gmerc2 hours ago

It's just an introductory price to speed up adoption for the rest of the month; hardly worth mentioning compared to the subsidized coding plans.

We know DS runs profitably; they also indicate in their paper that they expect prices to drop as they get access to the next-gen Huawei cards.

KronisLV9 hours ago

I'm currently paying for Anthropic's Max subscription (the 100 USD one) and I quite often hit or approach the 5 hour limits, but usually get to around 60-80% of the weekly limits before they reset (Opus 4.7 with high thinking for everything, unless CC decides to spawn sub-agents with Haiku or something).

Those tokens are heavily subsidized, but DeepSeek's API pricing is looking really good. For example, with an agentic coding setup (roughly 85% input, 15% output and around 90% cache reads) I'd get around 150M tokens per month for the same 100 USD. Even at more output tokens and worse cache performance, it'd still most likely be upwards of 100M.
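That per-budget figure can be sketched as a blended-price calculation over the traffic mix. The per-million-token prices below are illustrative placeholders, not DeepSeek's actual rate card:

```python
# Blended-price token budget estimate for an agentic coding workload.
# The three prices are ILLUSTRATIVE assumptions, not real rates.
PRICE_CACHE_HIT = 0.07   # $ per M input tokens served from cache (assumed)
PRICE_CACHE_MISS = 0.27  # $ per M input tokens on a cache miss (assumed)
PRICE_OUTPUT = 1.10      # $ per M output tokens (assumed)

def tokens_per_budget(budget_usd: float,
                      input_share: float = 0.85,
                      cache_hit_rate: float = 0.90) -> float:
    """Return millions of tokens the budget buys for the given traffic mix."""
    input_price = (cache_hit_rate * PRICE_CACHE_HIT
                   + (1 - cache_hit_rate) * PRICE_CACHE_MISS)
    blended = input_share * input_price + (1 - input_share) * PRICE_OUTPUT
    return budget_usd / blended

print(f"{tokens_per_budget(100):.0f}M tokens for $100")
```

Plugging in the real rate card (and the actual cache-hit rate from your usage stats) gives the figure for your own workload; the result is very sensitive to the cache-hit rate, since cached input is far cheaper than a miss.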

aitchnyu2 hours ago

What would be the non-subsidized price for a V4 API? Can it be priced 3x cheaper than bigger models? On OpenRouter, this 1600B-param model costs $0.40, whereas Kimi 2.6 (1000B params) is $0.70 and GLM 5.1 (754B params) is $1.00.

KronisLVan hour ago

Here’s their pricing docs, they’re running a discount for now https://api-docs.deepseek.com/quick_start/pricing/

That 150M assumption of mine is for 100 USD at the regular prices (though even that needs sufficient cache hits). I think Anthropic subsidizes way more per token, though.

try-working8 hours ago

Someone on Twitter got >200M tokens for around $10 at the current pricing level

rvz6 hours ago

So it begins.

myaccountonhn4 hours ago

I recently switched from Claude to Opencode Go + pi.dev. It has DeepSeek V4 Pro along with Kimi K2.6, and it's performing quite well for basic coding, without hitting any limits.

raincolean hour ago

The V3/R1 era and now are in such contrast. V3/R1 were hyped hard but barely usable for coding. V4 is much less hyped, but (anecdotally) it has completely demolished all the Flash/Lite/Spark models.

zozbot234an hour ago

Huh? R1 was one of the earliest openly available MoE and reasoning models; that's definitely not "hype". People tried to do reasoning before by asking the model to "think it through step by step", but that was a hack. The later V3.1 and V3.2 releases, AIUI, unified reasoning and non-reasoning use under a single model.

jdasdf11 hours ago

I've been using V4 Pro for the past few days, and honestly, in terms of quality it seems more or less on par with OpenAI's 5.4 or Opus 4.6 (I haven't tried 4.7).

To be clear, I'm not doing state-of-the-art stuff. I mostly used it for frontend development, since I'm not great at that and just need a decent-looking prototype.

But for my purposes it's a perfectly good model, and the price is decent.

I can't wait for an open model small enough for me to run locally to come out, though. I hate having to rely on someone else's machines (and getting all my data exfiltrated that way).

enochthered9 hours ago

Thanks for sharing your experience, I’m looking to try it out.

Which provider are you using for inference? Opencode or the DeepSeek API?

teruakohatu11 hours ago

The pelican is really getting old as a standalone evaluation metric. By now it is certainly going to be in the training set, if not explicitly tuned for, given the press on HN alone.

Keep the pelican, but isn't it time to add something else more novel that all current and past models struggle with?

caseyf75 hours ago

It also seems like all of the models have converged on very similar images.

taffydavid3 hours ago

I tried DeepSeek V4 through opencode at the weekend. I'm a daily Claude/Claude Code user.

I tried to build something simple, and while it got the job done, the thinking it displayed did not fill me with confidence. It was pages and pages of "actually no", "hang on", "wait, that makes no sense". It was like the model was having a breakdown.

Bear in mind opencode was also new to me, so I could just be seeing thinking where I usually don't.

edg500016 minutes ago

I feel the reasoning might be tuned for hard questions rather than agentic work. It seems to overthink: good for a very hard question, not for small incremental agentic steps. In theory, disabling thinking and using really well-formed instructions, while forcing it to still emit a few tokens each step prior to taking action, could help. Only one way to find out, though.
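One way to try that is a system prompt demanding a short visible plan before each action, with extended thinking off. A minimal sketch for an OpenAI-compatible endpoint; the model name and the "thinking" flag are assumptions for illustration, so check the provider's docs for the real parameter names:

```python
# Sketch: force a short per-step plan instead of long reasoning traces.
# The model name and the "thinking" flag are ASSUMPTIONS, not a real API spec.
SYSTEM_PROMPT = (
    "Before every tool call, emit one line starting with 'PLAN:' that states "
    "the next step in at most two sentences, then act. "
    "Do not produce extended reasoning."
)

def build_request(user_msg: str, model: str = "deepseek-chat") -> dict:
    """Build a payload for an OpenAI-compatible /chat/completions endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_msg},
        ],
        "thinking": False,  # hypothetical knob to disable chain-of-thought
    }

req = build_request("Rename the config loader and update all call sites.")
print(req["model"])  # deepseek-chat
```

The idea is to keep a little deliberate token emission per step (the PLAN line) while cutting the pages of self-doubting traces.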

kay_o9 minutes ago

Before CC and Codex removed verbose thinking output and hid most of it, both did that.

Jtarii2 hours ago

I see similar things using GLM 5.1 in pi.

I had to turn off thinking traces because looking at them was just giving me anxiety.

atoav3 hours ago

> Bear in mind opencode was also new to me, so I could just be seeing thinking where I usually don't

Well there's your problem.

Edit: I remember seeing similar things with ChatGPT or Codex, although I can't remember in which context.

alasano9 hours ago

I tweeted about some implementation and review runs that used V4 Pro.

Even without the currently discounted pricing, the value is incredible.

It takes about twice as long to finish code reviews given identical context compared to Opus 4.7/GPT 5.5, but at 1/10 the cost or less, there's just no comparison.

https://twitter.com/aljosa/status/2049176528638902555

swingboy29 minutes ago

Did you do this test through OpenRouter?

chaosprintan hour ago

I suspect those models already knew about this pelican test...

Tarcroi17 hours ago

[dead]

hn-front (c) 2024 voximity