Hacker News

whycombinetor
Ask HN: Former grok-code-fast-1 users, what coding model are you using now?

I get good, cheap, fast results coding features with grok-4.1-fast for planning and grok-code-fast-1 for execution. But according to the OpenRouter usage stats, grok-code-fast-1 is now old hat - usage dropped off a cliff in mid-Feb.

What are you using now that's around the same price per token? And is it also blazing fast (100+ tps)? Is that even possible?

(I'm finding Minimax M2.5 usable, but I think I still prefer grok-code-fast-1. Opus rocks but is expensive - good to have in the quiver for planning a big refactor.)


sgm_ro · 6 days ago

Those Groks are quite old by now, and not impressive. Minimax M2.7 is a big upgrade over 2.5 at a similar price and should easily beat the Grok. GLM 5.1 is also a great value (with coding plans), at around Claude Sonnet capability.

But if you're doing a lot of coding, your best value will be a coding plan, not pay-per-token. GPT Plus is a really good value, and its 5.4-high model is super capable; some even consider it better than Opus.

I don't think it's worth using less than state-of-the-art models for actual coding, if the code will actually be used. It costs a bit more but will save you a lot of time in the end and produce a better-quality product.

whycombinetor (op) · 6 days ago

For real work I'm already on Opus full time (plan+execute). I think there's a different space for cheap-but-good-enough options for small-to-medium repo hobby projects. (Not enough usage to warrant a plan.) And Opus is only ~30 tok/s while grok-code-fast-1 is ~120. So planning with Opus and executing with grok-code-fast-1 is still on the table for me: it's good at following explicitly written plans, and lightning fast.

faangguyindia · 6 days ago

Pay for codex and copilot.

It will be much cheaper, and if you want to experiment, then try the Minimax model.
