Hacker News

todsacerdoti
RISC-V Is Sloooow marcin.juszkiewicz.com.pl

kashyapc9 hours ago

A couple of corrections (the blog-post is by a colleague, but I'm not speaking for Marcin! :))

First, we do have a recent 'binutils' build[1] with test-suites in 67 minutes (it was on Milk-V "Megrez") in the Fedora RISC-V build system. This is a non-trivial improvement over the 143-minute build time reported in the blog.

Second, the current fastest development machine is not the Banana Pi BPI-F3. If we consider what is reasonably accessible today, it is the SiFive "HiFive P550" (P550 for short), plus the upcoming UltraRISC "DP1000", for which we have access to an eval board. And as noted elsewhere in this thread, some RVA23-based machines should be available in "several months". (RVA23 == the latest ISA profile.)

FWIW, our FOSDEM talk from earlier this year, "Fedora on RISC-V: state of the arch"[2], gives an overview of the hardware situation. It also has a couple of related poor man's benchmarks (an 'xz' compression test and a 'binutils' build without the test-suite on the above two boards -- that's what I could manage with the time I had).

Edit: Marcin's RISC-V test was done on the StarFive "VisionFive 2". This small board has its strengths (upstreamed drivers), but it is not known for its speed!

[1] https://riscv-koji.fedoraproject.org/koji/taskinfo?taskID=91...

[2] Slides: https://fosdem.org/2026/events/attachments/SQGLW7-fedora-on-...

rbanffya day ago

Don't blame the ISA - blame the silicon implementations AND the software with no architecture-specific optimisations.

RISC-V will get there, eventually.

I remember that ARM started as a speed demon with modest power consumption, then was surpassed by x86s and PPCs on desktops and moved to embedded, where it shone by being very frugal with power, only to now be leaving the embedded space with implementations optimised for speed more than power.

newpavlova day ago

In some cases RISC-V ISA spec is definitely the one to blame:

1) https://github.com/llvm/llvm-project/issues/150263

2) https://github.com/llvm/llvm-project/issues/141488

Another example is the hard-coded 4 KiB page size, which effectively kneecaps the ISA when compared against ARM.

weebull21 hours ago

All of those things are solved with modern extensions. It's like comparing pre-MMX x86 code with modern x86. Misaligned loads and stores are Zicclsm, bit manipulation is Zb[abcs], atomic memory operations are made mandatory in Ziccamoa.

All of these extensions are mandatory in the RVA22 and RVA23 profiles and so will be implemented on any up to date RISC-V core. It's definitely worth setting your compiler target appropriately before making comparisons.

LeFantome20 hours ago

Ubuntu being RVA23 is looking smarter and smarter.

The RISC-V ecosystem being handicapped by backwards compatibility does not make sense at this point.

Every new RISC-V board is going to be RVA23 capable. Now is the time to draw a line in the sand.

saagarjha11 hours ago

I’d be kind of depressed if every new RISC-V board was not RVA23 capable.

cmovq17 hours ago

But RISC-V is a _new_ ISA. Why did we start out with the wrong design, one that now needs a bunch of extensions? RISC-V should have learned the lessons of x86 and ARM, but instead it seems to be repeating the same mistakes.

kldg13 hours ago

I was a bit shocked by the headline, given how poorly ARM and x86 compare to RISC-V in speed, cost, and efficiency ... in the MCU space, where I near-exclusively live and where RISC-V has near-exclusively lived until quite recently. RISC-V has been great for RTOS systems, and Espressif in particular has pushed MCUs up to a new level, where it's become viable to run a designed-from-scratch web server (you better believe we're using vector graphics) on a $5 board that sits on your thumb. But using RISC-V in SBCs and beyond as the primary CPU is a very different ballgame.

galangalalgol7 hours ago

I have a couple of C3s I was playing with. Are you talking about the P4 or the C6? Aren't their Xtensa offerings still faster?

sehugg7 hours ago

It's not the wrong design; RISC-V is designed around extensions, and they left room in the instruction encoding for them. There's no 800-lb gorilla like Intel shoving the ISA down customers' throats (Canonical is the closest thing), so there is some debate over which combination of extensions is needed for desktop apps.

rwmj6 hours ago

FWIW I wrote this article a while back all about RISC-V extensions and how they work at a low level: https://research.redhat.com/blog/article/risc-v-extensions-w... page 22 in this PDF: https://research.redhat.com/wp-content/uploads/2023/12/RHRQ_...

Joker_vD6 hours ago

> They don't have a 800-lb gorilla like Intel shoving the ISA down customers' throats

Nobody really forces you to use x64 if you don't like it, just as nobody forced you to use Itanium — which Intel famously failed to "shove down the customers' throats" btw.

wolvoleo17 hours ago

It is a reduced instruction set computing ISA, of course. It shouldn't really have instructions for every edge case.

I only use it for microcontrollers, and it's really nice there. But yeah, I can imagine it doesn't perform well on bigger stuff. The idea of RISC was to put the intelligence in the compiler, though, not the silicon.

Joker_vD6 hours ago

> It shouldn't really have instructions for every edge case.

Depends on what the instruction does. If it goes through a four-loads-four-stores chain like the VAXen could famously do (with pre- and post-increments), then sure, that makes it impossible to implement such an ISA in a superscalar, OoO manner (DEC tried really, really hard and couldn't do it). But anything that essentially bit-fiddles in funny ways with the 2 sets of 64 bits already available from the source registers, plus the immediate? Shove it in, why not?

ARM has had shifted operands available for almost every instruction since ARMv1. And RISC-V also finally gets the shNadd instructions, which are essentially x86/x64's SIB byte, except available as a separate instruction. It got "andn", which is arguably more useful than a pure NOT anyway (most uses of ~ in C are in expressions of the "var &= ~expr..." variety) and costs almost nothing to implement. Bit rotations, too, including rev8 and brev8.

Heck, we even got max/min instructions in RISC-V because, again, why not? The usage is incredibly widespread, the implementation is trivial, and it makes life easier both for HW implementers (no need to try to macro-fuse common instruction sequences) and for SW writers (no need to either invent those instruction sequences and hope they'll get accelerated, or read manufacturers' datasheets for the "officially" blessed ones).

pjmlp13 hours ago

As proven by the evolution of x86/x64 and ARM, going all-in on pure RISC doesn't pay off, because there is only so much compilers can do in an AOT deployment scenario.

blacklion7 hours ago

> The idea of risc was to put the intelligence in the compiler though, not the silicon.

Itanium made this mistake. Sure, compilers are much better now, but dynamic scheduling still beats static scheduling for real-world tasks. You can (almost perfectly) statically schedule a matrix multiplication, but not a UI or a 3D game.

Even GPUs have some amount of dynamic scheduling now.

hun317 hours ago

It was kind of an experiment from start. Some ideas turned out to be good, so we keep them. Some ideas turned out not to be good, so we fix them with extensions.

pjmlp13 hours ago

The problem with hardware experiments is that the people owning the hardware are stuck with the experiments.

nsvd29 hours ago

Sure, but if you bought a dev board with an experimental ISA, I think you knew what you were getting into.

rbanffy11 hours ago

If your hardware is new, you get the nicest extensions though. You just don’t use the bad parts in your code.

pjmlp11 hours ago

Sure, if you are developing software for the computer you own, instead of supporting everyone.

ahartmetz8 hours ago

I mean, that is often what you do in embedded computing: you (re)sell hardware with one particular application.

Symmetry5 hours ago

It's hard to imagine a student putting together a RVA23 core in a single semester. And you don't really want that in the embedded roles RISC-V has found a lot of success in either.

veltas11 hours ago

Relatively new, we're about 16 years down the road.

pajko11 hours ago

Intentionally. Back then the guys were saying that everything could be solved by raw power.


sidewndr4618 hours ago

You're correct, but I guess my thought is: if we're going to wind up with a mess of extensions, why not just use x86-64?

LeFantome15 hours ago

First, x86-64 also has "extensions" such as AVX, AVX2, and AVX-512. Not all "x86-64" CPUs support the same ones. And you get things like SVM on AMD and AVX on Intel. Remember 3DNow?

x86-64 also has "profiles" which tell you what extensions should be available: x86-64-v1 through x86-64-v4, with v2 and v3 in the middle.

RVA23 offers a feature set very similar to x86-64-v4.

You do not end up with a mess of extensions; you get RVA23. Yes, RVA23 represents a set of mandatory extensions, but the important thing is that two RVA23-compliant chips will implement the same ones.

But the most important point is that you cannot “just use x86-64”. Only Intel and AMD can do that. Anybody can build a RISC-V chip. You do not need permission.

sidewndr467 hours ago

It's actually worse, because Intel is introducing APX now as well.

BoredomIsFun12 hours ago

1. Yes, but most of the code would run on anything from 2007 onwards. 20 years of a stable ISA.

2. Also, fundamentally, all modern CPUs are still a 64-bit version of the 80386. The MMU, protection, and low-level details are all the same.

sidewndr467 hours ago

This isn't really accurate; lots of commercial software is now compiled for newer x86-64 extensions.

If you're using OSS it doesn't really matter, as you can compile it for whatever you want.

NetMageSCW5 hours ago

No, you really can’t. For some OSS, on hardware that has an OS supported by that software, with a compiler that supports that target and the options you want, and in some cases where the OSS has been written to support those options, you can compile it. Otherwise you are just out of luck.

sidewndr463 hours ago

I don't really understand your position here. Compiler availability isn't really that big of a deal, even on obscure or proprietary platforms. Why would there be "some cases where the OSS has been written to support those options"?

NetMageSCW5 hours ago

>Anybody can build a RISC-V chip. You do not need permission.

No, not just anybody can build a RISC-V chip. That's the same mistake OSS proponents make: just because something is open source doesn't mean bugs will be found, and just because bugs are found doesn't mean they will be fixed. The vast majority of people can't do either.

The number of people who can design a chip implementation of the RISC-V ISA is much, much smaller, and the number who can get or own a FAB to manufacture the chips smaller still. You don’t need permission to use the ISA, but that is not the only gate.

craftkiller5 hours ago

I think it was clear that they were saying anybody is permitted to build a RISC-V chip, not that anybody has the skills.

> The number of people who can design a chip implementation

Thankfully you don't have to start from scratch. There are loads of open source RISC-V chip implementations you can start from.

> get or own a FAB to manufacture the chips

There are always FPGAs, and there's also this:

https://fossi-foundation.org/blog/2020-06-30-skywater-pdk

whaleofatw202218 hours ago

Because the ISA is not legally encumbered the way other ISAs are, and there are use cases where the minimal profile is fine for the sake of embedded whatever, versus the cost of implementing the extensions.

computably17 hours ago

> why not just use x86-64?

Uh, because you can't? It's not open in any meaningful sense.

userbinator16 hours ago

The original amd64 came out in 2003. Any patents on the original instruction set have long expired, and even more so for 32-bit x86.

panick21_12 hours ago

It's not about patents. Believe what you want, but there is a reason nobody else is doing x86 or ARM chips unless they are allowed to by the owner.

dbdr10 hours ago

You're probably right. It would be helpful to say what the reason is, if it's not patents.

panick21_9 hours ago

I'm not a lawyer, but I would assume it's copyright. Kind of like APIs in software. In software, somehow, this does not apply most of the time; but it seems in hardware this is very real. I would appreciate a lawyer jumping in, though.

I know, for example, that Berkeley, when thinking pre-RISC-V, had a deal with Intel about using x86-64 for research. But they were not able to share the designs.

MarsIronPI7 hours ago

I don't know why there aren't independent x86-64 manufacturers. Patents on the extensions, maybe? But as I understand copyright, APIs can't be copyrighted, so it's not that.

panick21_6 hours ago

The original ARM 32-bit stuff is clearly out of patent and is not being copied. And it doesn't require new extensions to be commercially viable.

newpavlov20 hours ago

>Misaligned loads and stores are Zicclsm

Nope. See https://github.com/llvm/llvm-project/issues/110454 which was linked in the first issue. The spec authors have managed to make a mess even here.

Now they want to introduce yet another (sic!) extension Oilsm... It maaaaaay become part of RVA30, so in the best case scenario it will be decades before we will be able to rely on it widely (especially considering that RVA23 is likely to become heavily entrenched as "the default").

IMO the spec authors should've mandated that the base load/store instructions work only with aligned pointers and introduced misaligned instructions in a separate early extension. (After all, passing a misaligned pointer where your code does not expect it is a correctness issue.) But I would've been fine as well if they had mandated that misaligned pointers always be accepted. Instead we have to deal with the terrible middle ground.

>atomic memory operations are made mandatory in Ziccamoa

In other words, forget about potential performance advantages of load-link/store-conditional instructions. `compare_exchange` and `compare_exchange_weak` will always compile into the same instructions.

And I guess you are fine with the page size part. I know there are huge-page-like proposals, but they do not resolve the fundamental issue.

I have other minor performance-related nits, such as the `seed` CSR being allowed to produce poor-quality entropy, which means we have to bring a whole CSPRNG if we want to generate a cryptographic key or nonce on a low-powered microcontroller.

By no means do I consider myself a RISC-V expert; if anything, my familiarity with the ISA as a systems language programmer is quite shallow. But the number of disappointments accumulated even from such shallow familiarity has cooled my enthusiasm for RISC-V quite significantly.

pseudohadamard10 hours ago

RISC-V truly is the Ryanair of processors: Oh, you want FP maths? That's an optional extra, did you check that when you booked? And was that single or double precision? All optional extras at an extra charge. Atomic instructions? That's an extra too, have your credit card details handy. Multiply and divide? Yeah, extras. Now, let me tell you about our high-end customer options, packed SIMD and user-level interrupts, only for business class users. And then there's our first-class benefits, hypervisor extensions for big spenders, and even more, all optional extras.

craftkiller4 hours ago

Then x86_64 is the cable television service of processors. "Oh, you want channel 5? Then you have to buy this bundle with 40 other channels you will never watch, including 7 channels in languages you do not speak."

fancyfredbot7 hours ago

So it's modular. This is normally considered a good thing. It means you don't have to pay for features you don't need.

The ISA is open so there's no greedy corporation trying to upsell you. I mean there's an implementation and die area cost for each extension but it's not being set at an artificial level by a monopolist.

Symmetry5 hours ago

It's a good thing in many cases but not if you're going to be running applications distributed as binaries. Maybe if we go the Gentoo route of everybody always recompiling everything for their own system?

snvzz4 hours ago

Then you stick to RVA23, which is comparable to ARMv9 and x86-64v4.

NetMageSCW5 hours ago

But that means a port of Linux can’t be to RISC-V, it has to be to a specific implementation of RISC-V, or if sufficient (which seems still debatable) to a specific common RISC-V profile.

fancyfredbot3 hours ago

You can target the minimum instruction set and it'll run everywhere. Albeit very slowly. Perhaps you use a fat binary to get reasonable performance in most cases.

This isn't easy but it can be done (and it is being done on x86, despite constantly evolving variations of AVX).

newpavlov9 hours ago

>Multiply and divide

And where it actually mattered, they did not introduce a separate extension. Integer division is significantly more complex than multiplication, so it may make sense for low-end microcontrollers to implement only the latter in hardware.

dzaima7 hours ago

There is Zmmul for multiplication-but-not-divide.

prompt_artisan4 hours ago

Yes, adding instructions to your ISA has a cost

IshKebab12 hours ago

I think having separate unaligned load/store instructions would be a much worse design, not least because they would use a lot of the opcode space. I don't understand why you wouldn't just have an option to not generate misaligned loads, for the people who happen to be running on CPUs where they're really slow. You don't need to wait for a profile for that.

As for `seed`: if you're running on a microcontroller, you can just look up the data sheet to see if its seed entropy is sufficient. By the time you get to CPUs where portable code is important, a CSPRNG is probably fine.

I agree about page size though. Svnapot seems overly complicated and gives only a fraction of the advantages of actually bigger pages.

newpavlov9 hours ago

>As for `seed`, if you're running on a microcontroller you can just look up the data sheet to see if it's seed entropy is sufficient.

It's a terrible attitude to have towards programmers, but looking at the misaligned-ops situation, I guess we can see a pattern from the RISC-V authors here.

Most programmers do not target a concrete microcontroller and develop every line of code from scratch. They either develop portable libraries (e.g. https://docs.rs/getrandom) or build their projects on top of those libraries.

The whole raison d'être of an ISA is to provide a portable contract between hardware vendors and programmers. The RISC-V authors shirk this responsibility with a "just look at your micro's specs, lol" attitude.

dzaima11 hours ago

The option to generate or not generate misaligned loads/stores does exist (-mno-strict-align / -mstrict-align). But of course that's a compile-time option, and of course the preferred state would be to have them on by default; RVA23, however, doesn't sufficiently guarantee or encourage them not being unreasonably slow, leaving native misaligned loads/stores still effectively unusable (and off by default in clang/gcc even with -march=rva23u64).

aka, Zicclsm / RVA23 are entirely useless as far as actually getting to make use of native misaligned loads/stores goes.

camel-cdr8 hours ago

The cursed thing is that RVA23 does basically guarantee that `vle8.v` + `vmv.x.s` on misaligned addresses is fast.

dzaima7 hours ago

Yeah, that is quite funky; and indeed gcc does that. Relatedly, super-annoying is that `vle64.v` & co could then also make use of that same hardware, but that's not guaranteed. (I suppose there could be awful hardware that does vle8.v via single-byte loads, which wouldn't translate to vle64.v?)

IshKebab10 hours ago

> RVA23 doesn't guarantee them not being unreasonably-slow

Right, but it doesn't guarantee that anything isn't unreasonably slow, does it? I am free to make an RVA23-compliant CPU with a div instruction that takes 10k cycles. Does that mean LLVM won't output div? At some point you're left with either -mcpu=<specific cpu> or falling back to reasonable assumptions about the actual hardware landscape.

Do ARM or x86 make any guarantees about the performance of misaligned loads/stores? I couldn't find anything.

dzaima10 hours ago

I don't think x86/ARM particularly guarantee fastness, but at least they effectively encourage making use of them via their contributions to compilers that do. They also don't really need to given that they mostly control who can make hardware anyway. (at the very least, if general-purpose HW with horribly-slow misaligned loads/stores came out from them, people would laugh at it, and assume/hope that that's because of some silicon defect requiring chicken-bit-ing it off, instead of just not bothering to implement it)

Indeed one can make any instruction take basically-forever, but I think it's a fairly reasonable expectation that all supported hardware instructions/behaviors (at least non-deprecated ones) are not slower than a software implementation (on at least some inputs), else having said instruction is strictly-redundant.

And if any significant general-purpose hardware actually did a 10k-cycle div around the time the respective compiler defaults were decided, I think there's a good chance that software would have defaulted to calling division through a function such that an implementation can be picked depending on the running hardware. (let's ignore whether 10k-cycle-division and general-purpose-hardware would ever go together... but misaligned-mem-ops+general-purpose-hardware definitely do)

IshKebab9 hours ago

> if general-purpose HW with horribly-slow misaligned loads/stores came out from them

How is that different for RISC-V?

> I think it's a fairly reasonable expectation that all supported hardware instructions/behaviors (at least non-deprecated ones) are not slower than a software implementation

I agree! So just use misaligned loads if Zicclsm is supported. As you observed there's a feedback loop between what compilers output and what gets optimised in hardware. Since RVA23 hardware is basically non-existent at the moment you kind of have the opportunity to dictate to hardware "LLVM will use misaligned accesses on RVA23; if you make an RVA23 chip where this is horribly slow then people will laugh at you and assume it's some sort of silicon defect".

dzaima9 hours ago

> How is that different for RISC-V?

RISC-V hardware with slow misaligned mem ops exists to a non-insignificant extent, and it seems not enough people have laughed at it; instead, compilers just surrendered and defaulted to not using them.

> As you observed there's a feedback loop between what compilers output and what gets optimised in hardware.

Well, that loop needs to start somewhere, and it has already started, and started wrong. I suppose we'll see what happens with real RVA23 hardware; at the very least, even if it takes a decade for most hardware to support misaligned well, software could retroactively change its defaults while still remaining technically-RVA23-compatible, so I suppose that's good.

newpavlov9 hours ago

>So just use misaligned loads if Zicclsm is supported.

LLVM and GCC developers clearly disagree with you. In other words, re-iterating the previously raised point: Zicclsm is effectively useless and we have to wait decades for hypothetical Oilsm.

Most programmers will not know that the misaligned issue even exists, even less about options like -mno-strict-align. They will just compile their project with default settings and blame RISC-V for being slow.

RISC-V could've easily avoided all this mess by properly mandating misaligned pointer handling as part of the I extension.

dzaima8 hours ago

Well, we don't necessarily have to wait for Oilsm; software that wants to could just choose to be opinionated and run massively-worse on suboptimal hardware. And, of course, once Oilsm hardware becomes the standard, it'd be fine to recompile RVA23-targeting software to it too.

> RISC-V could've easily avoided all this mess by properly mandating misaligned pointer handling as part of the I extension.

Rather hard to mandate performance by an open ISA. Especially considering that there could actually be scenarios where it may be necessary to chicken-bit it off; and of course the fact that there's already some questionability on ops crossing pages, where even ARM/x86 are very slow.

newpavlov6 hours ago

I am not saying that RISC-V should mandate performance. If anything, we wouldn't have had the problem with Zicclsm if they had not bothered with the stupid performance note.

I would be fine with any of the following 3 approaches:

1) Mandate that stores/loads do not support misaligned pointers and introduce separate misaligned instructions (good for correctness, so it's my personal preference).

2) Mandate that store/loads always support misaligned pointers.

3) Mandate that store/loads do not support misaligned pointers unless Zicclsm/Oilsm/whatever is available.

If hardware wants to implement slow handling of misaligned pointers for some reason, that's squarely the responsibility of the hardware vendor. And everyone would know whom to blame for poor performance on some workloads.

We are effectively going to end up with 3, but many years later and with a lot of additional unnecessary mess associated with it. Arguably, this issue should've been long sorted out in the age of ratification of the I extension.

dzaima5 hours ago

2 is basically infeasible with RISC-V being intended for a wide range of use-cases. 1 might be ok but introduces a bunch of opcode space waste.

Indeed extremely sad that Zicclsm wasn't a thing in the spec, from the very start (never mind that even now it only lives in the profiles spec); going through the git history, seems that the text around misaligned handling optionality goes all the way back to the very start of the riscv/riscv-isa-manual repo, before `Z*` extensions existed at all.

More broadly, it's rather sad that there aren't similar extensions for other forms of optional behavior (thing that was recently brought up is RVV vsetvli with e.g. `e64,mf2`, useful for massive-VLEN>DLEN hardware).

newpavlov4 hours ago

>1 might be ok but introduces a bunch of opcode space waste.

I wouldn't call it "waste". Moreover, it's fine for misaligned instructions to use a wider encoding or to be less rich than their aligned counterparts; for example, they might have a shorter immediate offset or none at all. One fun possibility is to encode the misaligned variant into the aligned instructions using an immediate offset with all bits set to one; as a side effect, that would also make the offset range fully symmetric.

dzaima2 hours ago

Of course, that'd result in an entirely avoidable slowdown for the potentially-misaligned ops. Perhaps fine for a program that doesn't use them frequently, but quite bad for ones that need misaligned ops everywhere.

In terms of correctness, there's also the possibility of partially misaligned ops (e.g. an 8-byte load with 4-byte alignment, loading two adjacent int32_t fields), so you're not handling everything with correct faults anyway.

camel-cdr8 hours ago

Exactly, I 100% agree, and IMO toolchains should default to assuming fast misaligned load/store for RISC-V.

However, the spec has the explicit note:

> Even though mandated, misaligned loads and stores might execute extremely slowly. Standard software distributions should assume their existence only for correctness, not for performance.

Which was a mistake. As you said any instruction could be arbitrarily slow, and in other aspects where performance recommendations could actually be useful RVI usually says "we can't mandate implementation".

saagarjha11 hours ago

RISC-V is not particularly good at using opcode space, unfortunately.

IshKebab10 hours ago

I don't think it's too bad. The compressed extension was arguably a mistake (and shouldn't be in RVA23 IMO), but apart from that there aren't any major blunders. You're probably thinking about how JAL(R) basically always uses x1/x5 (or whatever it is), but I don't think that's a huge deal.

About 1/3 of the opcode space is used currently so there's a decent amount of space left.

edflsafoiewq20 hours ago

What about page size?

rwmj11 hours ago

RISC-V has the Svnapot extension for large page sizes https://riscv.github.io/riscv-unified-db/manual/html/isa/isa...

ori_b19 hours ago

It's 4k on x86 as well. That doesn't seem to hurt so badly -- at least, not enough to explain the RISC-V performance gap.

twoodfin18 hours ago

Hmm? x86 has supported much larger “huge” page sizes for ages.

ori_b15 hours ago

Yes, and Linux, at least historically, has not used them without explicit program opt-in. The advice is often to disable transparent huge pages for performance reasons. Not sure about other operating systems.

See, for example, https://www.pingcap.com/blog/transparent-huge-pages-why-we-d...

jorvi13 hours ago

Huh, no? The usual advice is to enable THPs for performance, you only disable them in specific scenarios.

jabl9 hours ago

x86 has decades of know-how and a zillion transistors spent on making the memory pipeline, TLB caching & prefetching etc. really, really good. They work as well as they do despite the 4k base page size, not because of it.

If you started from a clean sheet today, you'd probably end up with a somewhat bigger base page size. Not hugely larger, though, as that wastes a lot of memory for most applications. Maybe 16k, like some ARM chips use?


tosti15 hours ago

Regarding misaligned reads, IIRC only x86 hides non-aligned memory access, and even there it's still slower than aligned reads. Other processors just fault, so it would make sense to do the same on RISC-V.

The problem is decades of software being written on a chip that from the outside appears not to care.

fredoralive11 hours ago

ARM Cortex-A cores also allow unaligned access (MCU cores don't, though, and older ARM is weird). There's perhaps a hint here: the two most popular CPU architectures have both ended up taking the forgiving approach to unaligned access rather than the penalising approach of raising an exception.

torginus10 hours ago

Yes, unaligned loads/stores are a niche feature with huge implications for processor design: loads crossing cache lines with different residency, pages that fault, etc.

This is the classic conundrum of legacy-system redesign: if customers keep demanding that every feature of the old system be present and work exactly the same, then the new system takes on the baggage it was designed to get rid of.

Held to this standard, the new implementation will be slow and buggy, and nobody will use it.

0x000xca0xfe9 hours ago

Unaligned load/store is crucial for zero-copy handling of mmapped data, network streams, and all other kinds of space-optimized data structures.

If the CPU doesn't do it, software must make many tiny conditional copies, which is bad for branch prediction.

This sucks doubly when you have variable-length vector operations... IMO fast unaligned memory accesses should have been mandatory, without exceptions, for all application-level profiles and everything with the vector extension.

torginus7 hours ago

I think you can do this fairly efficiently with SSE for x86 - SSE/AVX has shift and shuffle. Encoding/Decoding packed data might even be faster this way.

I'm not familiar with RISC-V but from what I've seen here, they're also trying to solve this similarly with vector or bit extraction instructions.

0x000xca0xfe7 hours ago

Yes because unaligned load is no problem with SSE/AVX. On my RISC-V OrangePi unaligned vector loads beyond byte-granularity fault so you have to take extra care.

AVX shift and shuffle is mostly limited to 128 bits unfortunately for historical reasons (even for 256-bit instructions) and hardware support for AVX512/AVX10 where they fixed that is a complete mess so it's hard to rely on when you care about backwards compatibility for consumer devices, e.g. in game development.

RISC-V vector has excellent mask/shuffle/permute but the performance in real silicon can be... questionable. See the timings for vrgather here for example: https://camel-cdr.github.io/rvv-bench-results/spacemit_a100/...

For working with packed data structures where fields are irregular/non-predictable/dependent on previous fields etc. unaligned load/store is a godsend. Last time I worked on a custom DB engine that used these patterns the generated x86 code was so much nicer than the one for our embedded ARM cores.

pjmlp13 hours ago

On modern CPUs it isn't something to care about; nor was it in the past, across the 8-, 16- and 32-bit generations, outside RISC.

inkyoto13 hours ago

PDP-11 and m68k – to name a couple – did not allow misaligned access to anything that was not a byte.

Neither is RISC, nor modern.

pjmlp11 hours ago

In regards to 68000 I don't remember, only used it during demoscene coding parties when allowed to touch Amiga from my friends.

I have only seen PDP-11 Assembly snippets in UNIX related books, wasn't aware of its alignment requirements.

inkyoto10 hours ago

PDP-11 was a major source of inspiration for m68k architecture designers. The influence can be seen in multiple places, starting from the orthogonal ISA design down to instruction mnemonics.

It is quite likely that not allowing the misaligned access was also influenced by PDP-11.

GoblinSlayer30 minutes ago

> 1) https://github.com/llvm/llvm-project/issues/150263

Huh? They have no idea what they are doing. If data is unaligned, the solution is memcpy, not compiler optimizations; also, their hack of 17 loads is a buffer overflow. And it is not an ISA spec problem.
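
For what it's worth, the memcpy idiom referred to here looks roughly like this in practice (the function name is mine). On targets with fast unaligned loads, optimizing compilers fold the memcpy into a single plain load; on strict-alignment targets it degrades to byte copies, but never to undefined behaviour:

```c
#include <stdint.h>
#include <string.h>

/* Read a 32-bit value from a possibly unaligned pointer.
 * memcpy avoids the undefined behaviour of a misaligned
 * dereference; compilers lower it to one unaligned load
 * where the target allows it. */
static uint32_t load_u32(const void *p) {
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}
```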

adastra22a day ago

Also, the bit manipulation extension wasn't part of the core. So things like bit rotation are slow for no good reason, if you want portable code. Why? Who knows.
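
For context, the usual portable rotate idiom in C looks like the sketch below (function name mine). Compilers recognize the pattern and emit a single rotate instruction where the ISA has one; on base RV32I/RV64I, without the Zbb extension's rol/ror, it lowers to two shifts plus an OR:

```c
#include <stdint.h>

/* Portable rotate-left by n (n masked to 0..31 to avoid the
 * undefined shift-by-32 case). GCC/Clang turn this pattern
 * into one rotate instruction when the target provides one. */
static uint32_t rotl32(uint32_t x, unsigned n) {
    n &= 31;
    return (x << n) | (x >> ((32 - n) & 31));
}
```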

adgjlsfhk1a day ago

> Also the bit manipulation extension wasn't part of the core.

This is largely because the core is primarily a teaching ISA. One of the best parts about RiscV is that you can teach a freshman level architecture class or a senior level chip building project with an ISA that is actually used. Anything powerful enough to run Linux (not built manually from source) will support a profile that bundles all the commonly needed instructions to be fast.

jacquesma day ago

Bit manipulation instructions are part and parcel of any curriculum that teaches CPU architecture. They are the basic building blocks for many more complex instructions.

https://five-embeddev.com/riscv-bitmanip/1.0.0/bitmanip.html

I can see quite a few items on that list that imnsho should have been included in the core and for the life of me I can't see the rationale behind leaving them out. Even the most basic 8 bit CPU had various shifts and rolls baked in.

rwmja day ago

This is the reason behind the profiles like RVA23 which include bitmanip, vector and a large number of other extensions. Real chips coming very soon will all be RVA23.

jacquesma day ago

Neat. I can't wait to get my hands on a devboard.

NekkoDroid21 hours ago

The earliest I know of coming is the SpaceMit K3, which Sipeed will have dev boards for.

statusfailed20 hours ago

The Milk-V Jupiter 2 (coming out in April) is RV23 too

jacquesm20 hours ago

Nice board but very low on max RAM.

rwmj9 hours ago

The Milk-V Titan (https://milkv.io/titan) can take up to 64GB which is fine considering the number of cores and the cost of RAM. If you needed and could afford more RAM you'd be better off distributing the work across more than one board.

jacquesm6 hours ago

I simply want to replace my desktop with open hardware. That board would be fine, thank you for the pointer.

rwmj6 hours ago

Unfortunately they found a bug and had to redesign the boards. I've had one of these on pre-order since last year. Latest is I think they're intending to ship them next month (April).

The SpacemiT K3 (https://www.spacemit.com/products/keystone/k3 https://www.cnx-software.com/2026/01/23/spacemit-k3-16-core-...) is the one everyone is waiting for. We have one in house (as usual, cannot discuss benchmarks, but it's good). Unfortunately I don't think there is anyone reputable offering pre-orders yet.

jacquesm6 hours ago

Ok! I will keep an eye out. It is one of the most interesting hardware developments for me in the last decade, and I definitely want to show my support by buying one or more of the boards. A respin is always really annoying this late in; the post-mortem on that must make for interesting reading.

You're super lucky to have your hands on one!

kevin_thibedeau21 hours ago

32-bit barrel shifters consume significant area, and RISC-V was developed to support resource-constrained, low-cost embedded hardware with a minimal ISA implementation.

pezezin19 hours ago

The 32-bit ARM architecture included a barrel shifter as part of its basic design, as in every instruction had a shift field.

If a CPU built in 1985 with a grand total of 26 000 transistors could afford it, I am pretty sure that anything built in this century could afford it too.

snvzz18 hours ago

26k is a lot of transistors for an embedded MCU.

You'd be excluding many small CPUs which exist within other chips running very specialized code.

As profiles mandate these instructions anyway, there's no good reason to complicate the most basic RISC-V possible.

RISC-V is the ISA for everything, from the smallest such CPUs to supercomputers.

wk_end17 hours ago

What MCUs are you thinking of?

To the best of my knowledge (and Google-fu), 26K really isn't a lot of transistors for an embedded MCU - at least not a fully-featured 32-bit one comparable to a minimal RISC-V core. An ARM Cortex M0, which is pretty much the smallest thing out there, is around 10K gates => around 40K transistors. This is also around the same size as a minimal RISC-V core AFAICT.

The ARM core has a shifter, though.

snvzz17 hours ago

There's a reason RV32E and RV64E, with half the registers, are a thing. RV32I/RV64I isn't small enough.

There are many chips in the market that do embed 8051s for janitorial tasks, because it is small and not legally encumbered. Some chips have several non-exposed tiny embedded CPUs within.

RISC-V is replacing many of these, bringing modern tooling. There's even open source designs like SERV that fit in a corner of an already small FPGA, leaving room for other purposes.

wk_end17 hours ago

Per https://en.wikipedia.org/wiki/Transistor_count, even an 8051 has 50K transistors, which reinforces my claim that 26K really doesn't seem like a big ask for an MCU core. Whether that means a barrel shifter is worth it or not is a totally orthogonal question, of course.

(Although I do have to eat my words here - I didn't check that Wikipedia page, and it does actually list a ~6K RISC-V core! It's an experimental academic prototype "made from a two-dimensional material [...] crafted from molybdenum disulfide"; I don't know if that construction might allow for a more efficient transistor count and it's totally impractical - 1KHz clock speed, 1-bit ALU, etc. - for almost any purpose, but it is technically a RISC-V implementation significantly smaller than 26K)

userbinator16 hours ago

I don't know if that construction might allow for a more efficient transistor count and it's totally impractical - 1KHz clock speed, 1-bit ALU, etc. - for almost any purpose, but it is technically a RISC-V implementation significantly smaller than 26K

That sounds like a microcoded RISC-V implementation, which can really be done for any ISA at the extreme expense of speed.

inkyoto14 hours ago

If I'm not mistaken, microcode is a thing at least on Intel CPUs, and that is how they patched Spectre, Meltdown and other vulnerabilities – Intel released a microcode update that the BIOS applies at cold start, hot-patching the CPU.

Maybe other CPUs have it as well, though I do not have enough information on that.

[deleted] 7 hours ago

adgjlsfhk117 hours ago

> There's reason RV32E and RV64E, with half the registers, are a thing. RV32I/RV64I isn't small enough.

This is actually kind of counter to your point. The really tiny microcontrollers from the 80s only had 224 bits of registers. RV32E is at least twice that (16 registers × 32 bits), and modern MCUs generally have 2–4 KB of SRAM, so the overhead of a 32-bit barrel shifter is pretty minimal.

adgjlsfhk120 hours ago

IIUC this is a lot less true in the modern era. Even with 24nm transistors (the cheapest transistors last time I checked), modern microcontrollers have a fairly big transistor budget for the core (since 80+% of the transistors are going to SRAM anyway).

torginus10 hours ago

It was the case even 15 years ago when Cortex M0/M3 really started to get traction, that the processor area of ARM cores was small enough to not make a difference in practice.

jacquesm21 hours ago

You can save a lot of silicon by doing 8 or 16 bit shifters and then doing the rest at the code generation level. Not having any seems really anemic to me.

bmenrigh16 hours ago

Yeah I don’t get it. Shifts and rolls are among the simplest of all instructions to implement because they can be done with just wires, zero gates. Hard to imagine a justification for leaving them out.
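
For a sense of scale: shifts by a constant amount really are just wiring, but a variable-amount shift needs a stack of mux layers, one per bit of the shift amount. A behavioural sketch of a 32-bit left barrel shifter (function name mine), mirroring the five 2:1-mux stages the hardware would use:

```c
#include <stdint.h>

/* 32-bit left barrel shifter modelled as five mux stages:
 * each stage conditionally shifts by 1, 2, 4, 8, or 16
 * depending on one bit of the shift amount. */
static uint32_t barrel_shl(uint32_t x, unsigned amount) {
    amount &= 31;  /* only the low 5 bits of the amount matter */
    for (unsigned stage = 0; stage < 5; stage++) {
        if (amount & (1u << stage))
            x <<= (1u << stage);
    }
    return x;
}
```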

hackyhackya day ago

> One of the best parts about RiscV is that you can teach a freshman level architecture class or a senior level chip building project with an ISA that is actually used.

Same could be said of MIPS.

My understanding is that RISC-V's raison d'être is rather the avoidance of patented/copyrighted designs.

musicale15 hours ago

As you indicate, MIPS was widely used in computer architecture courses and textbooks, including pre-RISC-V editions of Patterson & Hennessy (Computer Organization & Design) and Harris & Harris (Digital Design and Computer Architecture).

In spite of the currently mediocre RISC-V implementations, RISC-V seems to have more of a future and isn't clouded by ISA IP issues, as you note.

adgjlsfhk1a day ago

the avoidance of patent/copyright is critical for (legally) having students design their own chips. MIPS was pretty good (and widely used) for teaching assembly, but pretty bad for teaching a class where students design chips

musicale15 hours ago

This is largely contradicted by the (pre RISC-V) MIPS editions of Patterson & Hennessy, Harris & Harris, etc., which teach you how to design a MIPS datapath (at the gate level.)

Regarding silicon implementations, consider that 1) you can synthesize it from HDL/RTL designs using modern CAD tools, and 2) MIPS was originally designed to be simple enough for grad students to implement with the primitive CAD tools of the 1980s (basically semi-manual layout).

userbinator16 hours ago

MIPS patents have long expired too (and incidentally for any other CPU released prior to 2006), so that's a moot point.

Joker_vD6 hours ago

> This is primarily because core is primarily a teaching ISA.

That doesn't necessarily make it all that great for industrial use, does it?

> One of the best parts about RiscV is that you can teach a freshman level architecture class or a senior level chip building project with an ISA that is actually used.

You can also do that with Intel MCS-51 (aka 8051) or even i960. And again, having an ISA easily implementable "on a knee" by a fresh graduate doesn't say anything about its other technical merits, other than being "easily implementable (when done in the most primitive way possible)".

fidotrona day ago

The fact the Hazard3 designer ended up creating an extension to resolve related oddities was kind of astonishing.

Why did it fall to them to do it? Impressive that he did, but it shouldn't have been necessary.

rllja day ago

Which extension is that?

mjmasa day ago

An extension he calls Xh3bextm. For extracting multiple bits from bitfields.

https://wren.wtf/hazard3/doc/#extension-xh3bextm-section

There are also four other custom extensions implemented.

mort9612 hours ago

Do you typically care about portability to the degree that you want the same machine code to execute on both a Linux box and a microcontroller? Why?

torginus10 hours ago

Unaligned load/store is a horrible feature to implement.

Page size can be easily extended down the line without breaking changes.

direwolf2015 hours ago

The first one is common across many architectures, including ARM, and the second is just LLVM developers not understanding how cmpxchg works

fidotrona day ago

> RISC-V will get there, eventually.

Not trolling: I legitimately don't see why this is assumed to be true. It is one of those things that is true only once it has been achieved. Otherwise we would have super-high-performance Sparc or SuperH processors by now, and we don't.

As you note, Arm once was fast, then slow, then fast. RISC-V has never actually been fast. It has enabled surprisingly good implementations by small numbers of people, but competing at the high end (mobile, desktop or server) it is not.

lizknope21 hours ago

I think the bigger question is does RISC-V need to be fast? Who wants to make it fast?

I'm a chip designer and I see people using RISC-V as small processor cores for things like PCIE link training or various bookkeeping tasks. These don't need to be fast, they need to be small and low power which means they will be relatively slow.

Most people on tech review sites only care about desktop / laptop / server performance. They may know about some of the ARM Cortex A series CPUs that have MMUs and can run desktop or smartphone Linux versions.

They generally don't care about the ARM Cortex M or R versions for embedded and real time use. Those are the areas where you don't need high performance and where RISC-V is already replacing ARM.

EDIT:

I'll add that there are companies that COULD make a fast RISC-V implementation.

Intel, AMD, Apple, Qualcomm, or Nvidia could redirect their existing teams to design a high performance RISC-V CPU. But why should they? They are heavily invested in their existing x86 and ARM CPU lines. Amazon and Google are using licensed ARM cores in their server CPUs.

What is the incentive for any of them to make a high performance RISC-V CPU? The only reason I can think of is that Softbank keeps raising ARM licensing costs and it gets high enough that it is more profitable to hire a team and design your own RISC-V CPU.

adgjlsfhk120 hours ago

Of your list, Qualcomm and Nvidia are fairly likely to make high perf Riscv cpus. Qualcomm because Arm sued them to try and stop them from designing their own arm chips without paying a lot more money, and Nvidia because they already have a lot of teams making riscv chips, so it seems likely that they will try to unify on the one that doesn't require licensing.

lizknope19 hours ago

Yeah, they could but then what is the market? Qualcomm wants to sell smartphone chips and Android can run on RISC-V and most Android Java apps could in theory run.

But if you look at the Intel x86 smartphone chips from about 10 years ago they had to make an ARM to x86 emulator because even the Java apps contained native ARM instructions for performance reasons.

Qualcomm is trying to push their ARM Snapdragon chips in Windows laptops but I don't think they are selling well.

Nvidia could also make RISC-V based chips but where would they go? Nvidia is moving further away from the consumer space to the data center space. So even if Nvidia made a really fast RISC-V CPU it would probably be for the server / data center market and they may not even sell it to ordinary consumers.

Or if they did it could be like the Ampere ARM chips for servers. Yeah you can buy one as an ordinary consumer but they were in the $4,000 range last time I looked. How many people are going to buy that?

adgjlsfhk117 hours ago

> Qualcomm is trying to push their ARM Snapdragon chips in Windows laptops but I don't think they are selling well.

That definitely seems to be the case. I think they likely would have more luck with Riscv phones (much less app brand loyalty), or servers (ARM in the server space has done a lot better than on Windows).

For Nvidia, if they made a consumer riscv cpu it would be a gaming handheld/console (Switch 3 or similar) once the AI bubble pops. Before that, likely would be server cpus that cost $10k for big AI systems. Before that, I could see them expanding the role of Riscv in their GPUs (likely not visible to users).

lizknope17 hours ago

Many PC hardware enthusiasts say they want a RISC-V or ARM CPU but then when these system exist they don't actually want them.

Why? Because they want something like a $300 CPU and $150 motherboard using standard DDR4/5 DIMMs that is RISC-V or ARM or something not x86, but is faster than x86. The sub-$1000 systems that hardware companies make with RISC-V or ARM chips are low-end embedded single-board systems that are too slow for these people. The really fast systems are $4000 server-level chips that they can't afford. The only company really bringing fast non-x86 CPUs at consumer-level pricing is Apple. We could also include Qualcomm, but I'm skeptical of the software infrastructure and compatibility since they are relying on x86 emulation for Windows.

benced15 hours ago

China is likely where it would come from - ARM and x86 are owned by Western companies.

fidotron8 hours ago

> I think the bigger question is does RISC-V need to be fast? Who wants to make it fast?

Honestly, the initial reaction is it sounds like cope, and I know this because I've been saying it for ages to angry reactions. RISC-V looks for all the world like it is designed for competing with the 32 bit Arm ecosystem but that the designers didn't, and still don't, understand what 64 bit Arm is about.

Secondly, it's been necessary to claim such things are forever on the way in order to maintain hype and get software support. Without it you wouldn't see nearly so much Linux buildchain work. (See the open source SuperH implementations for what happens if you admit you don't go for high performance).

Finally though, as process nodes get smaller you can afford to put much more complex blocks in the same area, which can then burst through a series of operations and power off again, many times a second. (Edit to add: of course you know that, but it's still counter intuitive the extent to which it changes things over time. People have things like floating point support in places that not too long ago would have been completely minimalist, and there are some really extreme examples around).

> I'll add that there are companies that COULD make a fast RISC-V implementation.

Again, there is no proof of this until it actually happens. When Qualcomm were trying they wanted to change the spec of RISC-V, and I strongly suspect that is actually necessary.

rwmja day ago

RISC-V doesn't have the pitfalls of Sparc (register windows, branch delay slots), largely because we learned from that. It's in fact a very "boring" architecture. There's no one that expects it'll be hard to optimize for. There are at least 2 designs that have taped out in small runs and have high end performance.

adrian_ba day ago

RISC-V does not have the pitfalls of experimental ISAs from 45 years ago, but it has other pitfalls that have not existed in almost any ISA since the first vacuum-tube computers, like the lack of means for integer overflow detection and the lack of indexed addressing.

Especially the lack of integer overflow detection is a choice of great stupidity, for which there exists no excuse.

Detecting integer overflow in hardware is extremely cheap, its cost is absolutely negligible. On the other hand, detecting integer overflow in software is extremely expensive, increasing both the program size and the execution time considerably, because each arithmetic operation must be replaced by multiple operations.

Because of the unacceptable cost, normal RISC-V programs choose to ignore the risk of overflows, which makes them unreliable.

The highest performance implementations of RISC-V from previous years were forced to introduce custom extensions for indexed addressing, but those used inefficient encodings, because something like indexed addressing must be in the base ISA, not in an extension.

hackyhackya day ago

> On the other hand, detecting integer overflow in software is extremely expensive, increasing both the program size and the execution time considerably,

Most languages don't care about integer overflow. Your typical C program will happily wrap around.

If I really want to detect overflow, I can do this:

    add t0, a0, a1
    bltu t0, a0, overflow
Which is one more instruction, which is not great, not terrible.

sitharus20 hours ago

Because the other commenter wasn’t posting the actual answer, I went to find the documentation about checking for integer overflow and it’s right here https://docs.riscv.org/reference/isa/unpriv/rv32.html#2-1-4-...

And what did I find? Yep that code is right from the manual for unsigned integer overflow.

For signed addition if you know one of the signs (eg it’s a compile time constant) the manual says

  addi t0, t1, +imm
  blt t0, t1, overflow
But the general case for signed addition if you need to check for overflow and don’t have knowledge of the signs

  add t0, t1, t2
  slti t3, t2, 0
  slt t4, t0, t1
  bne t3, t4, overflow
From what I’ve read most native compiled code doesn’t really check for overflows in optimised builds, but this is more of an issue for JavaScript et al where they may detect the overflow and switch the underlying type? I’m definitely no expert on this.

sitharus18 hours ago

A bit more reading shows there's a three instruction general case version for 32-bit additions on the 64-bit RISC-V ISA. I'm not familiar with RISC-V assembly and they didn't provide an example, but I _think_ it's as easy as this since 64-bit add wouldn't match the 32-bit overflowed add.

  add t0, t1, t2
  addw t3, t1, t2
  bne t0, t3, overflow

userbinator15 hours ago

Contrast with x86:

    add eax, ecx
    jo overflow

rwmj9 hours ago

Neither x86-64 nor RISC-V is implemented by executing each instruction one at a time. They both recognize patterns in the code and translate those into micro-ops. On high-performance chips like Rivos's (now Meta's) I doubt there'd be any difference in the amount of work done.

Code size is a benefit for x86-64 however - no one is arguing that - but you have to trade that against the difficulty of instruction decoding.

adrian_b21 hours ago

That is not the correct way to test for integer overflow.

The correct sequence of instructions is given in the RISC-V documentation and it needs more instructions.

"Integer overflow" means "overflow in operations with signed integers". It does not mean "overflow in operations with non-negative integers". The latter is normally referred as "carry".

The 2 instructions given above detect carry, not overflow.

Carry is needed for multi-word operations, and these are also painful on RISC-V, but overflow detection is required much more frequently, i.e. it is needed at any arithmetic operation, unless it can be proven by static program analysis that overflow is impossible at that operation.

brohee7 hours ago

It's one more instruction only if you don't fuse those instructions in the decoder stage, but as the pattern is the one expected to be generated by compilers, implementations that care about performance are expected to fuse them.

refulgentis21 hours ago

I have no idea or practical experience with anything this low-level, so idk how much following matters, it's just someone from the crowd offering unvarnished impressions:

It's easy to believe you're replying to something that has an element of hyperbole.

It's hard to believe "just do 2x as many instructions" and "ehhh who cares [i.e. your typical C program doesn't check for overflow]", coupled to a seemingly self-conscious repetition of a quip from the television series Chernobyl that is meant to reference sticking your head in the sand, retire the issue from discussion.

adrian_b21 hours ago

There was no hyperbole in what I have said.

The sequence of instructions given above is incorrect, it does not detect integer overflow (i.e. signed integer overflow). It detects carry, which is something else.

The correct sequence, which can be found in the official RISC-V documentation, requires more instructions.

Not checking for overflow in C programs is a serious mistake. All decent C compilers have compilation options for enabling checking for overflow. Such options should always be used, with the exception of the functions that have been analyzed carefully by the programmer and the conclusion has been that integer overflow cannot happen.

For example with operations involving counters or indices, overflow cannot normally happen, so in such places overflow checking may be disabled.

adgjlsfhk121 hours ago

> On the other hand, detecting integer overflow in software is extremely expensive

This just isn't true. Both addition and multiplication can check for overflow in <2 instructions.

nine_k20 hours ago

Fewer than two is exactly one instruction. Which?

adgjlsfhk119 hours ago

dammmit I meant <=2. https://godbolt.org/z/4WxeW58Pc sltu or snez for add/multiply respectively.
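
The godbolt link relies on GCC/Clang's checked-arithmetic builtins; a self-contained version of the same pattern (the wrapper names are mine) looks like this:

```c
#include <stdbool.h>
#include <stdint.h>

/* __builtin_add_overflow / __builtin_mul_overflow (GCC/Clang
 * extensions) store the wrapped result and return true on
 * overflow; the compiler lowers them to the shortest sequence
 * the target offers (e.g. add+sltu on RISC-V, add+jo on x86). */
static bool checked_add(uint64_t a, uint64_t b, uint64_t *out) {
    return __builtin_add_overflow(a, b, out);
}

static bool checked_mul(uint64_t a, uint64_t b, uint64_t *out) {
    return __builtin_mul_overflow(a, b, out);
}
```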

kbolino4 hours ago

This result is misleading.

First, the code claims to be returning "unsigned long" from each of these functions, but the value will only ever be 0 or 1 (see [1]). The code is actually throwing away the result and just returning whether overflow occurred. If we take unsigned long *c as another argument to the function, so that we actually keep the result, we end up having to issue an extra instruction for multiplication (see [2]; I'm ignoring the sd instruction since it is simply there to dereference the *c pointer and wouldn't exist if the function got inlined).

Second, this is just unsigned overflow detection. If we do signed overflow detection, now we're up to 5 instructions for add and mul (see [3]). Considering that this is the bigger challenge, it compares quite unfavorably to architectures where this is just 2 instructions: the operation itself and a branch against a condition flag.

[1]: https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins...

[2]: https://godbolt.org/z/7rWWv57nx

[3]: https://godbolt.org/z/PnzKaz4x5

adgjlsfhk13 hours ago

That's fair. The good news is that for signed overflow, you can claw back to the cost of unsigned overflow if you know the sign of either argument (which is fairly common).

kbolino3 hours ago

Yeah, it's not the end of the world, and as others mentioned, a good implementation can recognize the instruction pattern and optimize for it.

It's just a bizarre design choice. I understand wanting to get rid of condition flags, but not replacing them with nothing at all.

EDIT: It seems the same choice was made by MIPS, which is a clear inspiration for RISC-V.

adgjlsfhk13 hours ago

The argument is that there are actually a few distinct forms of replacement:

1. 64 bit signed math is a lot less overflow vulnerable than the 16/32 bit math that was extremely common 20 years ago

2. For the BigInt use-case, the Riscv design is pretty sensible since you want the top bits, not just presence of overflow

3. You can do integer operations on the FPU (using the inexact flag for detecting if rounding occurred).

4. Adding overflow detecting instructions can easily be done in an extension in the future if desired.

kbolino2 hours ago

I think in the case of MIPS, at least, the decision logic was simply: condition flags behave like an implicit register, making the use of that register explicit would complicate the instruction encoding, and that complication would be for little benefit since most compilers ignore flags anyway, except for situations which could be replaced with direct tests on the result(s).

[deleted] 21 hours ago

adrian_b21 hours ago

[flagged]

burntoutgray19 hours ago

+1 -- misinformation is best corrected quickly. If not, AI will propagate it and many will believe the erroneous information. I guess that would be viral hallucinations.

bigstrat20033 hours ago

One can quickly correct misinformation without being rude. It's not hard, and does not lessen the impact of the correction to do so. There's no reason to tolerate the kind of rudeness the parent post exhibits.

classichasclassa day ago

As a counterexample, I point to another relatively boring RISC, PA-RISC. It took off not (just) because the architecture was straightforward, but because HP poured cash into making it quick, and PA-RISC continued to be a very competitive architecture until the mass insanity of Itanic arrived. I don't see RISC-V vendors making that level of investment, either because they won't (selling to cheap markets) or can't (no capacity or funding), and a cynical take would say they hide them behind NDAs so no one can look behind the curtain.

I know this is a very negative take. I don't try to hide my pro-Power ISA bias, but that doesn't mean I wouldn't like another choice. So far, however, I've been repeatedly disappointed by RISC-V. It's always "five or six years" from getting there.

adrian_b20 hours ago

I would not call PA-RISC boring. Already at launch there was no doubt that it was a better ISA than SPARC or MIPS, and later it was improved. At the time when PA-RISC 2.0 was replaced by Itanium it was not at all clear which of the two ISAs was better. The later failures to design high-performance Itanium CPUs make it plausible that, had HP kept PA-RISC 2.0, they might have had more competitive CPUs than with Itanium.

SPARC (formerly called Berkeley RISC) and MIPS were pioneers that experimented with various features or lack of features, but they were inferior from many points of view to the earlier IBM 801.

The RISC ISAs developed later, including ARM, HP PA-RISC and IBM POWER, have avoided some of the mistakes of SPARC and MIPS, while also taking some features from IBM 801 (e.g. its addressing modes), so they were better.

burntoutgray19 hours ago

ISAs fail to gain traction when the sufficiently smart compilers don't eventuate.

The x86-64 is a dog's breakfast of features. But due to its widespread use, compiler writers make the effort to create compilers that optimize for its quirks.

Itanium hardware designers were expecting the compiler writers to cater for its unique design. Intel is a semi company. As good as some of their compilers are, internally they invested more in their biggest seller and the Itanium never got the level of support that was anticipated at the outset.

pjmlp13 hours ago

I am a firm believer that if AMD hadn't been in a position to come up with the AMD64 architecture, those Itanium issues would eventually have been sorted out; Windows XP was already there, and there was no other way forward for 64-bit.

[deleted] 4 hours ago

imtringued10 hours ago

I don't know anything about Itanium in particular, but AMD's NPU uses a VLIW architecture and they had to break backwards compatibility in the ISA for the second generation NPU (XDNA2) to get better performance.

classichasclass19 hours ago

I mean "boring" in the sense that its ISA was relatively straightforward, no performance-entangling kinks like delay slots, a good set of typical non-windowed GPRs, no wild or exotic operations. And POWER/PowerPC and PA-RISC weren't a lot later than SPARC or MIPS, either.

fidotrona day ago

> RISC-V doesn't have the pitfalls of Sparc (register windows, branch delay slots),

You're saying ISA design does have implementation performance implications then? ;)

> There's no one that expects it'll be hard to optimize for

[Raises hand]

> There are at least 2 designs that have taped out in small runs and have high end performance.

Are these public?

Edit: I should add, I'm well aware of the cultural mismatch between HN and the semi industry, and have been caught in it more than a few times, but I also know the semi industry well enough to not trust anything they say. (Everything from well meaning but optimistic through to outright malicious depending on the company).

rwmja day ago

The 2 designs I'm thinking of are (tiresomely) under NDA, although I'm sure others will be able to say what they are. Last November I had a sample of one of them in my hand and played with the silicon at their labs, running a bunch of AI workloads. They didn't let me take notes or photographs.

> There's no one that expects it'll be hard to optimize for

No one who is an expert in the field, and we (at Red Hat) talk to them routinely.

saagarjha10 hours ago

Expert here, are these made for general purpose workloads or do you expect them to be fast for AI only?

mastax16 hours ago

I assume the TensTorrent TT-Ascalon is one of the CPU designs.

gt0a day ago

I don't think anybody suggests Oracle couldn't make faster SPARC processors, it's just that development of SPARC ended almost 10 years ago. At the time SPARC was abandoned, it was very competitive.

twoodfin20 hours ago

In single-threaded performance? That’s not how I remember it: Sun was pushing parallel throughput over everything else, with designs like the T-Series & Rock.

gt019 hours ago

Perhaps not single thread, but Rock was a dead end a while before Oracle pulled the plug, and Sun/Oracle's core market of course was always servers not workstations. We used Niagara machines at my work around the T2 era, a long time ago, but they were very competitive if you could saturate the cores and had the RAM to back it up.

twoodfin18 hours ago

Sure, my work got a few of the Niagaras too and they were tremendous build machines for Solaris software.

But if you’re judging an ISA by performance scalability, you generally want to look at single-threaded performance.

icedchai18 hours ago

Sparc stopped being competitive in the early 2000’s.

Findecanor20 hours ago

Because today, getting a fast CPU out isn't as much an engineering issue as it is about getting the investment for hiring a world-class fab.

The most promising RISC-V companies today have not set out to compete directly with Intel, AMD, Apple or Samsung, but are targeting a niche such as AI, HPC and/or high-end embedded such as automotive.

And you can bet that Qualcomm has RISC-V designs in-house, but only making ARM chips right now because ARM is where the market for smartphone and desktop SoCs is. Once Google starts allowing RVA23 on Android / ChromeOS, the flood gates will open.

adgjlsfhk120 hours ago

It's very much both. You need millions of dollars for the fab, but you also need ~5 years to get 3 generations of cpus out (to fix all the performance bugs you find in the first two)

snvzz18 hours ago

Fast, RVA23-compatible microarchitectures already exist. Everything high performance seems to be based on RVA23, which is the current application profile and comparable to ARMv9 and x86-64v4.

However, it takes time from microarchitecture to chips, and from chips to products on shelves.

The very first RVA23-compatible chip to show up will likely be the SpacemiT K3 SoC, due in development boards in April (i.e. next month).

More, and more performant, boards are coming this summer, such as a development board with the Tenstorrent Ascalon CPU in the form of the Atlantis SoC, which was taped out recently.

It is even possible such designs will show up in products aimed at the general public within the present year.

Dwedita day ago

There's the ARM video from LowSpecGamer, where they talk about how they forgot to connect power to the chip, and it was still executing code anyway. According to Steve Furber, the chip was accidentally being powered from the protection diodes alone. So ARM was incredibly power efficient from the very beginning.

rwmja day ago

Marcin is working with us on RISC-V enablement for Fedora and RHEL, he's well aware of the problem with current implementations. We're hopeful that this'll be pretty much resolved by the end of the year.

LeFantome20 hours ago

If he expects it to be resolved by the end of the year (and I agree it likely will be), why is he writing a post like this?

Is this because Fedora 44 is going to beta?

haerwu8 hours ago

Because I can.

Is that a good enough answer?

cogman10a day ago

> AND the software with no architecture-specific optimisations

The optimizations that'd be applied to ARM and MIPS would be equally applicable to RISC-V. I do not believe this is a lack of software optimization issue.

We are well past the days where hand-written assembly gives much benefit, and modern compilers like gcc and llvm do nearly identical work right up until it comes to instruction emission (including determining where SIMD instructions could be placed).

Unless these chips have very very weird performance characteristics (like the weirdness around x86's lea instruction being used for arithmetic) there's just not going to be a lot of missed heuristics.

hrmtst93837a day ago

One thing compilers still struggle with is exploiting weird microarchitectural quirks or timing behaviors that aren't obvious from the ISA spec, especially with memory, cache and pipeline tuning. If a new RISC-V core doesn't expose the same prefetching tricks or has odd branch prediction you won't get parity just by porting the same backend. If you want peak numbers sometimes you do still need to tune libraries or even sprinkle in a bit of inline asm despite all the "let the compiler handle it" dogma.

cogman10a day ago

While true, it's typically not going to be impactful on system performance.

There's a reason, for example, why the linux distros all target a generic x86 architecture rather than a specific architecture.

spockza day ago

Not all. CachyOS has specific builds for v3, v4, and AMD Zen4/5: https://wiki.cachyos.org/features/optimized_repos/

thesuperbigfrog20 hours ago

Ubuntu recently added a more specific target for AMD64v3:

https://discourse.ubuntu.com/t/introducing-architecture-vari...

adrian_ba day ago

Some applications may target a generic x86 architecture without any impact on performance.

However, other applications which must do cryptographic operations, audio/video processing, scientific/technical/engineering computing, etc. may have wildly different performances when compiled for different x86-64 ISA versions, for which dedicated assembly-language functions exist.

cogman1021 hours ago

Granted, these applications do exist. They are simply becoming more and more rare. I'd also say that there's been a pretty steady, dedicated effort to abstract away the assembly. It's still pretty low level, as in you care about the specific instructions being used, but in both C++ and Rust it's also not quite assembly.

Java, interestingly enough, is somewhat leading the way here with their Vector API. I think they actually have one of the better setups for allowing someone to write fast code that is platform independent.

C++ is also diving into this realm: C++26 just merged in new SIMD facilities.

That is the bulk of the benefit of diving down into assembly.

https://en.cppreference.com/w/cpp/numeric/simd.html

adrian_b20 hours ago

I would not say that such applications are becoming more and more rare.

Most of the applications whose performance matters for me, because I must wait a non-negligible time for them to do their job, are dependent on assembly implementation for certain functions invoked inside critical loops. I do not see any sign of replacements for them. On the contrary, Intel, AMD and Arm continue to introduce special instructions that are useful in certain niche applications and taking advantage of them will require additional assembly language functions, not less.

For me, there is only one application that I use and which consumes non-negligible computer time and which does not depend on SIMD optimizations, which is the compilation of software projects.

CyberDildonics18 hours ago

> audio/video processing, scientific/technical/engineering computing, etc. may have wildly different performances when compiled for different x86-64 ISA versions

This is pretty vague and makes it sound like there are big differences in instruction sets.

In actuality it comes down to memory access first, which has nothing to do with instructions.

After that it comes down to simple SIMD/AVX instructions and not some exotic entirely different instruction set.

CyberDildonics18 hours ago

The things you are talking about are taken care of by out-of-order execution and the CPU itself being smart about how it executes. Putting in prefetch instructions rarely beats the actual prefetcher itself. Compilers didn't end up generating perfect Pentium asm either. OOO execution is what changed the game in not needing perfect compiler output any more.

bobmcnamaraa day ago

> The optimizations that'd be applied to ARM and MIPS would be equally applicable to RISC-V.

There's no carry bit, and no widening multiply (or MAC)

Findecanor20 hours ago

RISC-V splits widening multiply out into two instructions: one for the high bits and one for the low. Just like 64-bit ARM does.

Integer MAC doesn't exist, and is also hindered by a design decision not to require more than two source operands, so as to allow simple implementations to stay simple. The same reason also prevents RISC-V from having a true conditional move instruction: there is one but the second operand is hard-coded zero.

FMAC exists, but only because it is in the IEEE 754 spec ... and it requires significant op-code space.

izacus8 hours ago

If you make a spec that the wider industry cannot effectively implement into quality products, it's the spec that's wrong. And that's true for anything - whether it's RISC-V, ipv6, Matter, USB-C and so on.

That's what makes writing specs hard - you need people who understand implementation challenges at the table, not dreaming architects and academics.

bsder21 hours ago

> Don't blame the ISA - blame the silicon implementations

That's true, but tautological.

The issue is that the RISC-V core is the easy part of the problem, and nobody seems to even be able to generate a chip that gets that right without weirdness and quirks.

The more fundamental technical problem is that things like the cache organization and DDR interface and PCI interface and ... cannot just be synthesized. They require analog/RF VLSI designers doing things like clock forwarding and signal integrity analysis. If you get them wrong, your performance tanks, and, so far, everybody has gotten them wrong in various ways.

The business problem is the fact that everybody wants to be the "performance" RISC-V vendor, but nobody wants to be the "embedded" RISC-V vendor. This is a problem because practically anybody who is willing to cough up for a "performance" processor is almost completely insensitive to any cost premium that ARM demands. The embedded space is hugely sensitive to cost, but nobody is willing to step into it because that requires that you do icky ecosystem things like marketing, software, debugging tools, inventory distribution, etc.

This leads to the US business problem which is the fact that everybody wants to be an IP vendor and nobody wants to ship a damn chip. Consequently, if I want actual RISC-V hardware, I'm stuck dealing with Chinese vendors of various levels of dodginess.

apia day ago

A pattern I've noticed for a very long time:

A lot of times the path to the highest performing CPU seems to be to optimize for power first, then speed, then repeat. That's because power and heat are a major design constraint that limits speed.

I first noticed this way back with the Pentium 4 "Netburst" architecture vs. the smaller x86 cores that became the ancestor of the Core architecture. Intel eventually ran into a wall with P4 and then branched high performance cores off those lower-power ones and that's what gave us the venerable Core architecture that made Intel the dominant CPU maker for over a decade.

ARM's history is another example.

cpgxiiia day ago

I think the story is a bit more complicated. Core succeeded precisely because Intel had both the low-power experience with Pentium-M and the high-power experience with Netburst. The P4 architecture told them a lot about what was and wasn't viable and at what complexity. When you look at the successor generations from Core, what you see are a lot of more complex P4-like features being re-added, but with the benefits of improved microarch and fab processes. Obviously we will never know, but I don't think you would get to Haswell or Skylake in the form they were without the learning experience of the P4.

In comparison, I think Arm is actually a very strong cautionary tale that focusing on power will not get you to performance. Arm processors remained pretty poor performance until designers from other CPU families entirely (PowerPC and Intel) took it on at Apple and basically dragged Arm to the performance level they are today.

fidotron7 hours ago

> In comparison, I think Arm is actually a very strong cautionary tale that focusing on power will not get you to performance.

Hugely underappreciated. Someone involved fully understood that "you don't get to the moon by climbing progressively taller trees".

The other two times Arm had great performance were the StrongArm, when it was implemented by DEC people off the Alpha project, and the initial ones, which were quite esoteric and unusually suited to the situation of the late 80s.

maximilianburke19 hours ago

And not just any PowerPC architects either, but the people from PA Semi. Motorola couldn't get the speed up and IBM couldn't get the power down.

userbinator15 hours ago

NetBurst was supposed to be the application of RISC principles to x86 taken to its extreme (ultra-long pipelines to reduce clock-to-clock delay, highest clock speed possible: basically reducing work-per-clock and hoping that reduces complexity enough to increase clock speed to compensate). The ALU was 16 bits, "double pumped" with the carry split between the two halves, which led to 32-bit ALU operations that don't carry between the lower and upper halves actually finishing a clock cycle faster than those with a carry.

https://stackoverflow.com/questions/45066299/was-there-a-p4-...

jnoveka day ago

I don’t have a micro architecture background so I apologize if this is obvious — What do power and speed mean in this context?

McPa day ago

Power - how many Watts does it need? Speed - how quickly can it perform operations?

wmf21 hours ago

You can get low power with a simple design at a low clock. This definitely will not help achieve high performance later.

weebull21 hours ago

Clock rate isn't the only factor. A design can be power hungry at a low clock rate if designed badly, and if it is... you're never getting that thing running fast.

unethical_bana day ago

One could say "Optimize for efficiency first, then performance".

cptskippya day ago

Core evolved from the Banias (Centrino) CPU core, which was based on the P3, not the P4. Banias used the front-side bus from the P4 but not the cores.

Banias was hyper optimized for power, the mantra was to get done quickly and go to sleep to save power. Somewhere along the line someone said "hey what happens if we don't go to sleep?" and Core was born.

jauntywundrkinda day ago

Parallels to code design, where optimizing data or code size can end up having fantastic performance benefits (sometimes).

dmitrygra day ago

If you care to read the article, they indeed do not blame the architecture but the available silicon implementations.

rbanffya day ago

I did read it. A Banana Pi is not the fastest developer platform. The title is misleading.

BTW, it's quite impressive how the s390x is so fast per core compared to the others. I mean, of course it's fast - we all knew that.

And don't let IBM legal see this: it can be considered a published benchmark, and they are very shy about s390x performance numbers.

Aurornisa day ago

> A Banana Pi is not the fastest developer platform.

What is the current fastest platform that isn’t exorbitantly expensive? Not upcoming releases, but something I can actually buy.

I check in every 3-6 months but the situation hasn’t changed significantly yet.

adgjlsfhk121 hours ago

A P550-based board is the best you can get for now (~2-3x faster than the Banana Pi). In 2-3 months there should be a number of SpacemiT K3 chips that are ~4-6x faster than the Banana Pi and somewhat reasonably priced (~$200-300). By the end of the year, however, you should be able to get an Ascalon chip, which should be way, way faster than that (roughly Apple M1/Zen 3 speed).

cestitha day ago

What is the current fastest ppc64le implementation that isn’t exorbitantly expensive? How about the s390x?

gt0a day ago

I was really surprised by the s390x performance, but I also don't really understand why the build times are listed by architecture rather than by the actual processors.

kpila day ago

What's fast on Z platforms is typically IO rather than raw CPU - the platform can push a lot of parallel data. This is typically the bottleneck when compiling.

The cores are, in my experience, moderately fast at most. Note that there are a lot of licencing options and I think some are speed-capped - but I don't think that applies to IFL, a standard CPU licence-restricted to only run Linux.

burntoutgray19 hours ago

I thought I read somewhere that Z CPUs run at 5 GHz?

rbanffya day ago

Probably because that's just the infrastructure they have.

pantalaimona day ago

i686 builds even faster

menaerusa day ago

Which risc-v implementation is considered fast?

patchnulla day ago

Nothing shipping today is really competitive with modern ARM or x86. The SiFive P870 and Tenstorrent Ascalon (Jim Keller's team) are the most anticipated high-performance designs, but neither is widely available. What you can actually buy today tops out around Cortex-A76 class single-thread performance at best, which is roughly where ARM was five or six years ago.

menaerusa day ago

I remember taking down some notes wrt SiFive P870 specs, comparing them to x86_64, and reaching the same conclusion. Narrower core width (4-wide vs 8-wide), lower clock frequency (peaks at 3GHz) and no turbo (?), limited support for vector execution (128-bit vs 512-bit), limited L1 bandwidth (1x 128-bit load/cycle?), limited FP compute (2x 128-bit vs 2x 512-bit), load queue is also inconveniently small with 48 entries (affecting already limited load bandwidth), unclear system memory bandwidth and how it scales wrt the number of cores (L3 contention) although for the latter they seem to use what AMD is doing (exclusive L3 cache per chiplet).

LeFantome21 hours ago

SpacemiT K3 is about the same performance as a Rockchip RK3588. So, 4 years ago?

Except the K3 kills it on AI (60 TOPS).

LeFantome21 hours ago

> Which risc-v implementation is considered fast?

SpacemiT K3 is 2010 Macbook performance single-core, 2019 Macbook Air multi-core, and better than M4 Apple Silicon for AI.

So I guess it depends on what you are going to do with it.

menaerus13 hours ago

M4 is 38 TOPS at INT8 precision whereas SpacemiT K3 is 60 TOPS at INT4 precision, so at best they would be equal in "AI" performance. But they are not, because the rest of the K3 chip is much less capable than the M4 (as I would expect).

E.g. M4 total system memory bandwidth is 120GB/s whereas K3's is 51GB/s, and single-core memory bandwidth is 100-120GB/s vs ~30GB/s. M4 has 10 CPU cores and a neural engine with 16 cores, whereas K3 has 8 CPU cores and 8 "AI" cores; K3's clock frequency is almost half that of the M4, etc.

But anyway thanks for sharing, always good to learn about new hardware.

NooneAtAll3a day ago

DC-ROMA 2 is on the Raspberry Pi 4 level of performance, last I heard

snvzz18 hours ago

>I did read it. A Banana Pi is not the fastest developer platform. The title is misleading.

Ironically, its SoC (spacemiT K1) is slower than the JH7110 used in the first mass-produced RISC-V SBC, VisionFive 2.

But unlike JH7110, it has vector 1.0, making it a very popular target.

Of course, none of these pre-RVA23 boards will be relevant anymore, once the first development boards with RVA23-compatible K3 ship next month.

These are also much faster than anything RISC-V currently purchasable. Developers have been playing with them for months through ssh access.

topspina day ago

I keep checking in on Tenstorrent every few months thinking Keller is going to rock our world... losing hope.

At this point the most likely place for truly competitive RISC-V to appear is China.

Findecanor19 hours ago

Tenstorrent is supposedly taping out 8-wide Ascalon processors as we speak, with devboards projected to be available in Q2/Q3 this year.

BTW. Keller is also on the board of AheadComputing — founded by former Intel engineers behind the fabled "Royal Core".

topspin17 hours ago

I can't know what Ascalon will actually be, but back in April/May 2025 there were actual performance numbers presented by Tenstorrent, and I analyzed what was shown. I concluded that Ascalon would be the x86_64 equivalent of an i5-9600K.

That's useable for many applications, but it's not going to change the world. A lot of "micro PCs" with low power CPUs are well past that now. If that's what Ascalon turns out to be, it will amount to an SBC class device.

imtringued9 hours ago

I don't know what bubble you are living in, but the i5-9600K is many steps up beyond "SBC class".

The Raspberry Pi 5 results on Geekbench 6 are all over the place: between 500 and 900 single-core, and around 2000 multi-core.

Radxa 4 is an SBC based around the N100 and it basically gets the same or slightly higher performance as the Raspberry Pi 5.

Meanwhile the i5-9600K gets a score of 1677 in single core, which is 83% of the performance of the entire Raspberry Pi 5 and gets a score of 6199 when using multiple cores, that's 3x the performance.

I'd call this at least "Laptop class" and you even admitted yourself back in 2025 that you're using a processor on that level.

topspin3 hours ago

"I don't know what bubble you are living in"

My bubble includes a number of SBCs and embedded boards from Advantech, frequently using Ryzen embedded (V1000 class) CPUs.

SBC is too vague I suppose. Past the Raspberry Pi form factor SBC class, there are many* SBC vendors with Core i5-1340P and similar CPUs today. That's a 2023 device, and just past a 2018 i5-9600K, aligning well with what I claimed.

In 2025+, such a CPU is not a desktop class device, and is sufficient only in low cost laptops (but in much lower power form.) A MacBook Neo A18, for example, is considerably better than a i5-9600K.

It would be great if Tenstorrent actually yields such a product, and if, based on later performance projections that appeared in late 2025, Ascalon is actually faster. But, as I said, the world will not change much. RISC-V developers will appreciate compiling like it's 2019, but that's as far as it will go.

* LattePanda Sigma, ASROCK NUC, DFROBOT, Premio and many NAS and industrial devices.

snvzz18 hours ago

>Ascalon tape out

Supposedly happened earlier this year. Tenstorrent says devboards in Q3.

Now we just wait.

rbanffya day ago

> At this point the most likely place for fast RISC-V to appear is China.

Or we just adopt Loongson.

balou23a day ago

TBH I still don't really get how it's different from MIPS. As far as I can tell... Loongson seems to be really just MIPS, while LoongArch is MIPS with some extra instructions.

pantalaimona day ago

They did get rid of the delay slots and some other MIPS oddities

bonzinia day ago

LoongArch is, on a first approximation, an almost RISC-V user space instruction set together with MIPS-like privileged instructions and registers.

mananaysiempre20 hours ago

Wait, this is a modern-ish ISA with a software-managed TLB, I didn’t realize that! The manual seems a bit unhappy about that part though:

> In the current version of this architecture specification, TLB refill and consistent maintenance between TLB and page tables are still [sic] all led by software.

https://loongson.github.io/LoongArch-Documentation/LoongArch...

bonzini12 hours ago

I think they have already added hardware page table walks.

https://lwn.net/Articles/932048/

mananaysiemprea day ago

But legally distinct! I guess calling it M○PS was not enough for plausible deniability.

genxya day ago

ISAs shouldn't be patentable in the first place.

throawayonthea day ago

(purely on vibes) loongson feels to me like an intermediate step/backup strategy rather than a longterm target (though they'll probably power govt equipment for decades of legacy either way :p)

trompa day ago

But they didn't reflect that in a title like "current RISC-V silicon Is Sloooow" ...

spidericea day ago

Then how do you justify the title?

userbinator16 hours ago

ARM was never a "speed demon"; it started out as a low power small-area core and clearly had more complexity and thought put into it than MIPS or RISC-V.

Over a decade ago: https://news.ycombinator.com/item?id=8235120

> RISC-V will get there, eventually.

Strong doubt. Those of us who were around in the 90s might remember how much hype there was with MIPS.

rbanffy11 hours ago

I don’t think you remember, but the first Archimedes smoked the just-launched Compaq 386s with a dedicated 387 coprocessor.

It was not designed to be one, but it ended up being surprisingly fast.

crest21 hours ago

RISC-V lacks a bunch of really useful relatively easy to implement instructions and most extensions are truly optional so you can't rely on them. That's the problem if you let a bunch of academics turn your ISA into a paper mill.

In theory you can spend a lot of effort to make a flawed ISA perform, but it will be neither easy nor pretty e.g. real world Linux distros can't distribute optimised packages for every uarch from dual-issue in-order RV64GC to 8-wide OoO RV64 with all the bells and whistles. Only in (deeply) embedded systems can you retarget the toolchain and optimise for each damn architecture subset you encounter.

kashyapc19 hours ago

Arm had 40 years to be where it is today. RISC-V is 15 years old. Some more patience is warranted.

Assuming they keep their word, later this year Tenstorrent is supposed to ship their RVA23-based server development platform[1]. They announced[2] it at last year's NA RISC-V Summit. Let's see.

The ball is in the court of hardware vendors to cook some high-end silicon.

[1] https://tenstorrent.com/ip/risc-v-cpu

[2] https://static.sched.com/hosted_files/riscvsummit2025/e2/Unl...

userbinator16 hours ago

MIPS, which RISC-V is closely modeled after, is also roughly 4 decades old and was massively hyped in the early 90s as well.

kashyapc8 hours ago

Great point; I only vaguely know about the MIPS legacy. As you imply, don't listen to the "hype-sters" but pay attention to what silicon is actually being produced.

saati4 hours ago

AArch64 is just 15 years old, and shares pretty much nothing with 32-bit ARM apart from the name.

Levitatinga day ago

This is why Felix has been building the RISC-V Arch Linux repositories[1] using the Milk-V Pioneer.

I think the ban of SOPHGO is partly to blame for the slow development.[2] They had the most performant and interesting SoCs. I had a bunch of pre-orders for the Milk-V Oasis before it was cancelled. It was supposed to come out a while ago, using the SG2380, supposedly much more performant than the Milk-V Titan mentioned in the article (which still isn't out).

It was also SOPHGO's SoCs that powered the crazy cheap/performant/versatile Milk-V Duo boards, which have the ability to switch between ARM and RISC-V architectures.

[1]: https://archriscv.felixc.at/

[2]: https://www.tomshardware.com/tech-industry/artificial-intell...

1515521 hours ago

Can you articulate why you think this ban impacted anything and what you think the ban applies to?

Levitating10 hours ago

I won't pretend to understand the geo-politics or rulings.

What I do know is since the ban, all ongoing products featuring SOPHGO SOCs were cancelled, and I haven't seen any products featuring them since. The SOPHGO forums have also closed down.

The Milk-V Oasis would have had 16 cores (SG2380 w/ SiFive P670), it was replaced by the Milk-V Megrez with just 4 cores (SiFive P550) for around the same price. The new Milk-V Titan has only 8. We're slowly catching up, but the performance is now one or two years behind what it could've been.

The SG2380 would've been the first desktop ready RISC-V SOC at an affordable price. I think it's still the only SOC made that used the SiFive P670 core.

echoangle18 hours ago

Is there a simple explanation why RISC-V software has to be built on a RISC-V system? Why is it so hard for compilers to compile for a different architecture? The general structure of the target architecture lives inside the compiler code and isn’t generated by introspecting the current system, right?

haerwu7 hours ago

Cross-compiling an entire distribution requires that distribution to be prepared for it. That is not a problem when you use OpenEmbedded/Yocto or Buildroot to build it, but it gets complicated with distributions which are built natively.

Fedora does not have a way to cross-compile packages. The only cross compiler available in the repositories is a bare-metal one. You can use it to build firmware (EDK2, U-Boot) or the Linux kernel, but nothing more.

Then there is the other problem: testing. What is the point of a successful build if it does not work on target systems? Part of each Fedora build is running the test suite (if the packaged software has any). You should not run it in QEMU, so each cross-build would need to connect to a target system, upload build artifacts and run tests. Overcomplicated.

Native builds allow testing whether the distribution is ready for any kind of use. I have used an AArch64 desktop daily for almost a year now. It is not a "4-core/16GB RAM SBC" but rather the "server-as-a-desktop" kind (80 cores, 128 GB RAM, plenty of PCI-Express lanes). I build software on it, write blog posts, watch movies etc. And I can emulate other Fedora architectures to do test builds.

A hardware architecture that is slow today can be fast in the future. In 2013, building Qt4 for Fedora/AArch64 took days (we used software emulators). Now it takes 18 minutes.

boredatoms18 hours ago

Under-specified build dependencies that use libraries/config from your host OS rather than the target system.

You can solve this on a per language basis, but the C/C++ ecosystem is messy. So people use VMs or real hardware of the target arch to not have to think about it

flowerthoughts12 hours ago

Old compilers tended to make it a compile-time switch which backends were included, probably because backends were "huge", so they were left out. (The insn lookup table in GCC took ages to generate and compile.) And of course all development environments running on Windows assumed x86 was the only architecture.

With LLVM existing, cross-compiling is not a problem anymore, but it means you can't run tests without an emulator. So it might just be easier to do it all on the target machine.

AnssiH9 hours ago

The cross-compiler part itself is easy, but getting all the build scripting of tens of thousands of Fedora packages to work perfectly for cross-compiling would be a lot of work.

There are lots of small issues (libraries or headers not being found, wrong libraries or headers being found, build scripts trying to run the binaries they just built, wrong compiler being used, wrong flags being used, etc.) when trying to cross-compile arbitrary software.

All fixable (cross-compiling entire distributions is a thing), but a lot of work and an extra maintenance burden.

anarazel18 hours ago

Cross building is possible, but it's rather useful to be able to test the software you just built... And often enough, tests take more resources than the build.

aa-jv7 hours ago

Native builds are always a safer/more reliable path to take than cross-compiling, which usually requires solid native builds to be operational before the cross environment can be reliably trusted.

It's a bootstrapping chain of priority. Once a native build regime is set in stone, cross-compiling harnesses can be built to exploit the beachhead.

I have saved many a failing project's budget and deadline by just putting the compiler onboard and obviating the hacky scaffolding usually required for reliable cross compiling at the beginning stages of a new architecture project, and I suspect this is the case here too ..

rivetfasten2 hours ago

Thanks for the post!

Question: While you would want any official arch built natively, maybe an interim stage of emulated VM builds for wip/development/unsupported architectures would still be preferable in this case?

Comparing the tradeoffs:

* Packages disabled and not built because of long build times.

* Packages built and automated tests run on inaccurately emulated VMs (NOT cross-compiled). Users can test. It might be broken.

It's an experimental arch, maybe the build cluster could be experimental too?

lifisa day ago

Or they could fix cross compilation and then compile it on a normal x86_64 server

mort9612 hours ago

Fixing cross compilation is a huge undertaking. So much software needs to be patched to be properly cross-compilable.

0verkilled3 hours ago

Unrelated to the post's point, but: why does x86 build faster than x86_64? Presumably they used the exact same hardware, or at least the same number of cores and amount of memory, yet the build is more than 10% faster on x86. Is there some sort of overhead for x86_64 that I'm not seeing?

AceJohnny219 hours ago

There was a Mastodon post some time back (~1y?) where someone realized that the fastest RISC-V hardware they could get was still slower than running it on QEMU.

That's not how it usually works :\

RISC-V is certainly spreading across niches, but performant computing is not one of them.

Edit: lol the author mentions the same! Perhaps they were the source of the original Mastodon post I'm thinking of.

Levitating10 hours ago

The Milk-V Pioneer breaks that barrier, though it's expensive. And the RISC-V architecture it uses is now old; the company that developed it was sanctioned by the US and is now dead.

leni536a day ago

Is cross compilation out of the question?

STKFLTa day ago

I'd guess that the issue is running the `%install` and `%check` stages of the .spec file. The Python library rpy (to pull a random example from Marcin's PRs) runs rpy's pytest test suite and had to be modified to avoid running vector tests on RISC-V.

Obviously a solvable problem to split build and test but perhaps the time savings aren't worth the complexity.

https://src.fedoraproject.org/rpms/rpy/pull-request/4#reques...

leni536a day ago

Maybe the tests could be run with user-mode qemu instead of the whole thing running under qemu or on RISC-V hardware. Could possibly be more or less seamless with binfmt_misc being set up in the builders.
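As a hedged sketch of what that could look like (the test binary name and sysroot path are made up; assumes qemu-user-static is installed):

```shell
# With binfmt_misc registered (most distros do this when installing
# qemu-user-static), the kernel transparently hands riscv64 ELF
# binaries to qemu, so a cross-built test just runs:
./tests/some-riscv64-test

# Without binfmt_misc, invoke user-mode qemu explicitly, pointing -L
# at a riscv64 sysroot so the target's dynamic linker and libraries
# are found:
qemu-riscv64 -L ./riscv64-sysroot ./tests/some-riscv64-test
```

The appeal is that only the test binaries are emulated; the compile itself still runs at native x86_64 speed.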

kashyapc19 hours ago

Near as I know, Fedora prefers native compilation for the builds.

Your question made me look up Arm's history in Fedora, and I came upon this 2012 LWN thread[1]. There was some discussion against cross-compilation already back then.

[1] https://lwn.net/Articles/487622/

IshKebaba day ago

It's usually an enormous pain to set up. QEMU is probably the best option.

pantalaimona day ago

T2 manages to do it

https://t2linux.com/

VorpalWay12 hours ago

Yocto, which we use at work, manages just fine to build a whole embedded Linux distro. So I don't see why Fedora couldn't make it work if they wanted to. You could even scp the test suites over to run them on native systems if you wanted.

mort9612 hours ago

Yocto manages it thanks to the tireless effort of a community of people maintaining patches and unholy hacks for a ton of software to make it cross compilable. And they have nowhere near the amount of recipes that Fedora has.

VorpalWay8 hours ago

This is true, but the hacks are mostly in the C and C++ recipes as I understand it. Something like Rust or especially Go or Zig is far easier to cross compile.

I personally found cross compiling Rust easy, as long as you don't have C dependencies. If you have C dependencies it becomes way harder.

This suggests that spending time to upstream cross compilation fixes would be worth it for everyone, and probably even in the C world, 20% of the packages need 80% of the effort.
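A sketch of the easy path for a pure-Rust crate (assumes rustup; the linker variable is the usual extra step for glibc targets, and the cross-gcc package name varies by distro):

```shell
# Install the riscv64 Linux standard library for the current toolchain.
rustup target add riscv64gc-unknown-linux-gnu

# Cargo still needs a cross linker for glibc targets; point it at the
# distro's riscv64 gcc.
export CARGO_TARGET_RISCV64GC_UNKNOWN_LINUX_GNU_LINKER=riscv64-linux-gnu-gcc

# Pure-Rust crates usually cross-build with nothing more than this.
cargo build --release --target riscv64gc-unknown-linux-gnu
```

The moment a `-sys` crate with a C dependency enters the tree, you're back to needing a full C cross toolchain and sysroot, which is where the pain starts.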

NetMageSCW4 hours ago

I wonder how much of Fedora is written in Rust?

mort966 hours ago

I wonder if Fedora packages any C and C++ software?

STKFLTa day ago

Maybe there are issues I'm not aware of but using dockcross has made cross-compilation quite easy in my experience.

https://github.com/dockcross/dockcross
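For example, the usual dockcross workflow per its README (the hello.c file is an assumption):

```shell
# Running the image with no arguments prints a helper script; save it
# and use it to run commands inside the cross-toolchain container.
docker run --rm dockcross/linux-riscv64 > ./dockcross-riscv64
chmod +x ./dockcross-riscv64

# $CC inside the container is the riscv64 cross compiler.
./dockcross-riscv64 bash -c '$CC hello.c -o hello-riscv64'

# file should report a RISC-V ELF executable.
file hello-riscv64
```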

mort9611 hours ago

How does it handle .so version differences and glibc version differences between the container and the target system?

sofixaa day ago

Depends on the language, it's pretty trivial with Go.

Zambytea day ago

Unless you use CGO. I've heard people using Zig (which has great cross compilation for the Zig language as well) to cross compile C with CGO though.
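A hedged sketch of that combination (assumes Go and Zig are installed and the module has no other native deps; `riscv64-linux-gnu` is the usual Zig triple for glibc-based riscv64 Linux):

```shell
# Plain Go code cross-compiles with just environment variables:
GOOS=linux GOARCH=riscv64 go build -o app-riscv64 .

# With cgo enabled, hand Go a C cross compiler; zig cc can act as one:
CGO_ENABLED=1 GOOS=linux GOARCH=riscv64 \
  CC="zig cc -target riscv64-linux-gnu" \
  go build -o app-riscv64 .
```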

IshKebaba day ago

Yes, but they're compiling binutils.

titzer7 hours ago

> ... I can build the “llvm15” package in about 4 hours. Compare that to 10.5 hours on a Banana Pi BPI-F3 builder (it may be quicker on a P550 one).

That's....slow. What a huge pile of bloat.

mrbluecoat19 hours ago

> Random mumblings of ARM developer ... RISC-V is sloooow

Old news. See also:

> Random mumblings of x86_64 developer ... ARM is sloooow

throwa3562629 hours ago

What kind of ancient ARM hardware are they using here?

On a related note, SoC companies need to get their act together and start using the latest ARM cores. Even the mid-range cores of 1-2 years ago show a huge leap in performance:

https://sbc.compare/56-raspberry-pi-500-plus-16gb/101-radxa-...

orangeboats8 hours ago

>What kind of ancient ARM hardware are they using here?

I think that's the point being made here. ARM in the 2000s was not known to be fast, now it is.

RISC-V being slow isn't an inherent characteristic of the ISA; it only tells you about the quality of its implementations. And said implementations will only improve if corporations are throwing capital at them (see: Apple, Qualcomm, etc.)

throwa3562626 hours ago

I think standard Arm cores are already plenty fast; the issue is that SoC vendors are still using the Cortex-A57 from 2015 instead of the new designs.

saghm19 hours ago

If I'm reading their chart right, they have barely half as much memory for their RISC-V machine compared to any of the others? I don't know enough to know whether it's actually bottlenecked by memory, but it's a bit odd to claim it's slower, give those numbers, and not say anything about it. I'd hope they ruled that out as the source of the discrepancy, but it's hard to tell without confirmation.

Levitating10 hours ago

I think it's mentioned clearly in the article.

> RISC-V builders have four or eight cores with 8, 16 or 32 GB of RAM (depending on a board)

> The UltraRISC UR-DP1000 SoC, present on the Milk-V Titan motherboard should improve situation a bit (and can have 64 GB ram).

RISC-V SoCs just typically don't support much RAM. The exception is the SG2042, which can take 128 GB, but it's expensive, buggy, and now old.

So I am sure it's a combination of low ram and low clockspeeds.

haerwu8 hours ago

I updated the blog post after reading comments from Matrix/Slack/Phoronix/HN/Lobsters/etc.

- mentioned which board had the 143-minute build, added info about the time on the Milk-V Megrez board

- added a section on what we need hardware-wise to be in Fedora

- added a link to my desktop post to point out that it is aarch64, not x86-64

- reworded the qemu part to show that I use it locally only

mkj19 hours ago

Does that page even say which slow RISC-V CPUs are being used? I couldn't see it, which makes this seem like a bit of pointless complaining.

Levitating10 hours ago

> RISC-V builders have four or eight cores with 8, 16 or 32 GB of RAM (depending on a board).

Which boards are used specifically should not matter much. There's not much available.

Except for the Milk-V Pioneer, which has 64 cores and 128GB ram. But that's an older architecture and it's expensive.

utopiah11 hours ago

FWIW checkout dockcross/linux-riscv32 and dockcross/linux-riscv64 if compilation itself is your problem.

I set up a CopyParty server on a headless RISC-V SBC and it was a breeze. Just get the packages, do the thing, move on. Obviously it depends on your needs, but maybe you're not using the right workflow and are blaming the tools instead.

cesaref9 hours ago

Just out of interest, why aren't they cross compiling RISC-V? I thought that was common practice when targeting lower performing hardware. It seems odd to me that the build cycle on the target hardware is a metric that matters.

kashyapc9 hours ago

Please skim the thread :) We've already discussed it twice. Fedora "mandates" native builds.

Build time on target hardware matters when you're re-building an entire Linux distribution (25000+ packages) every six months.

poulpy1238 hours ago

Is it slow because of the inherent design, or because it's recent and not as optimised as x86 or ARM?

srotta day ago

Couldn't it be caused by a slower compiler? For example, what would the difference be when cross-compiling the same code to aarch64 vs RISC-V?

shmerl4 hours ago

Why not cross compile in such case on better hardware? Then run tests on the native one.

rbalinta day ago

If the builds are slow, build accelerators can help a lot. Ccache would work for sure, and there is also firebuild, which can accelerate the linker phase and many other tools in builds.
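A sketch of the usual ccache setup for repeated native builds (assumes ccache is installed; the symlink directory and cache size are distro- and workload-dependent):

```shell
# Put ccache's compiler-wrapper symlinks ahead of the real compilers
# (the directory varies: /usr/lib64/ccache on Fedora, /usr/lib/ccache
# on Debian-likes).
export PATH="/usr/lib64/ccache:$PATH"

# Give the cache enough room for a distro-scale rebuild.
ccache --max-size=20G

# Unchanged translation units now hit the cache; check the hit rate
# after a rebuild:
ccache --show-stats
```

For a build farm that rebuilds 25000+ packages every cycle, most of which change little between mass rebuilds, even a modest hit rate adds up.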

sylware10 hours ago

The current hardware used is self-hosting mini-server grade, and certainly not on the latest silicon process. "Slow" is expected.

It is not the ISA, but the implementations and those horrible SDKs which need to be adjusted for RISC-V (actually, for any new ISA).

RISC-V needs extremely performant implementations on the best silicon process; until then, RISC-V _will be_ "slow".

Not to mention, RISC-V is a 'standard ISA': assembly-written software is more than appropriate in many cases.

aa-jv10 hours ago

I don't care as long as it keeps my soldering iron hot.

Joel_Mckaya day ago

Any new hardware lags in compiler optimizations.

i. llvm presentation can thrash caches if set up wrong (given the plethora of fragmented RISC-V versions, most compilers won't cover every vanity silicon.)

ii. gcc is also "slow" in general, but is predictable/reliable

iii. emulation is always slower than kvm in qemu

It may seem silly, but I'd try a gcc build with -O0 flag, and a toy unit test with -S to see if the ASM is actually foobar. One may have to force the -mtune=boom flag to narrow your search. Best regards =3

yogthosa day ago

there are projects for making high performance RISC-V chips like this one https://github.com/OpenXiangShan/XiangShan

classichasclass21 hours ago

OK, I'll bite. If this is a truly competitive core - I don't claim enough personal expertise to judge - does anyone fab and sell it? There should be a business case if it is.

luyu_wu20 hours ago

If I remember correctly, it was taped out by some company as an embedded core in a GPU?

I guess that may be the true use case for 'Open-Source' cores.

That being said, the advertised SPEC CPU2006 scores are close to an M1 in IPC.

sltkra day ago

Are you sure you are comparing apples with apples here?

The fact that i686 is 14% faster than x86_64 is a little suspicious, because usually the same software runs _faster_ on x86_64 (despite the increased memory use) thanks to a larger register set, an optimized ABI, and more vector instructions.

Of course, if you are compiling an i686 binary on i686, and an x86_64 binary on x86_64, then the compilers aren't really doing the same work, since their output is different. I'm not a compiler expert, but I could imagine that compiling x86_64 binaries is intrinsically slower than for i686 for a variety of reasons. For example, x86_64 is mostly a superset of i686, so a compiler has way more instructions to consider, including potential optimizations using e.g. SIMD instructions that don't exist on i686 at all. Or a compiler might assume a larger instruction cache size, by default, and do more unrolling or inlining when compiling for x86_64. And so on.

In that case, compiling on x86_64 is slower not because the hardware is bad but because the compiler does more work. Perhaps something similar is happening on RISC-V.

jmalicki21 hours ago

It isn't crazy uncommon to see i686 be faster - usually it means you're memory bandwidth bound.

But yeah, it may mean the benchmark is not representative.

fweimera day ago

The x86-64 build runs about 50% more linker tests than the i686 build.

[deleted]19 hours agocollapsed

andrepda day ago

There's zero mention of hardware specs or cost beyond architecture and core counts... What is the purpose of this post?

Anyway, it's hardly surprising that a young ISA with not a 1/1000th of the investment of x86 or ARM has slower chips than them x)

kashyapc19 hours ago

On benchmarks, for more precise details, I recommend the RISC-V Vector (RVV) benchmarks[1], maintained by Olaf Bernstein. He only covers the Vector stuff, but in great depth.

[1] https://camel-cdr.github.io/rvv-bench-results/

brcmthrowawaya day ago

Why is it slow? I thought we have Rivos chips

kashyapc8 hours ago

They haven't produced any chips.

rwmja day ago

Rivos was acquired by Meta last year.

IshKebaba day ago

Yeah it's a few years behind ARM, but not that many. Imagine trying to compile this on ARM 10 years ago. It would be similarly painful.

kllrnohja day ago

> Imagine trying to compile this on ARM 10 years ago

Cortex A57 is 14 years old and is significantly faster than the 9 year old Cortex A55 these RISC-V cores are being compared against.

So yes it's many years behind. Many, many years.

LeFantome21 hours ago

SpacemiT K3 is on par with Rockchip RK3588. So, about 4 years behind ARM.

Tenstorrent Atlantis (first Ascalon silicon) should ship in Q2/Q3 and be twice as fast. About as fast as Ryzen5. So, about 5 years behind AMD.

But even the K3 has faster AI than Apple Silicon or Qualcomm X Elite.

Current trend-lines suggest ARM64 and RISC-V performance parity before 2030.

ben-schaaf9 hours ago

Not sure why you're taking the rk3588 as a milestone for ARM, when it's a low end chip using core designs that were old when it released. Cortex-A76 is from 2018, so if that's the yardstick then the K3 is 8 years behind. Even then at the time the A76 was released Apple was significantly ahead with their own ARM CPUs.

HerbManic19 hours ago

I love the optimism, but I do think your timeline is a little quick. It will be more like 10 years than 4.

kllrnohj20 hours ago

> SpacemiT K3 is on par with Rockchip RK3588. So, about 4 years behind ARM.

That'd be ~7 years behind, not 4. Cortex A76 came out in late 2018. Also what benchmarks are you looking at?

> Tenstorrent Atlantis (first Ascalon silicon) should ship in Q2/Q3 and be twice as fast. About as fast as Ryzen5. So, about 5 years behind AMD.

Which Ryzen 5? The first Ryzen 5 came out in 2017, which was a lot more than 5 years ago.

> But even the K3 has faster AI than Apple Silicon or Qualcomm X Elite.

Which isn't RISC-V. Might as well brag about a RISC-V CPU with an RTX 5090 being faster at CUDA than a Nintendo Switch. That's a coprocessor that has nothing to do with the ISA or CPU core.

> Current trend-lines suggest ARM64 and RISC-V performance parity before 2030.

L. O. fucking. L. That's not how this works. That's not how any of this works.

hackerInnena day ago

This. While I doubt that there will be a good (whatever that means) desktop risc-v CPU anytime soon, I do think that it will eventually catch up in embedded systems and special applications. Maybe even high core count servers.

It just takes time, people who believe in it, and tons of money. We'll see where the journey goes, but I am a big RISC-V believer.

NetMageSCW4 hours ago

Why? They have yet to show anything to believe in except perhaps the embedded space.

Steinmark5 hours ago

[dead]

theodrica day ago

[flagged]

throwaway27448a day ago

[flagged]

primisa day ago

Hey! I get this is a throwaway account so you might not answer, but I really, really don't like opening an article and having the first thing I see in a thread be someone calling the author a slur. There are ways of expressing insult without bringing intellectual disabilities into the mix.

dmita day ago

For future readers: throwaway27448's comment used to say something completely different, featuring the r-slur, and then immediately edited.

throwaway2744815 hours ago

[flagged]

notenlish11 hours ago

Can you explain why you think the author is stupid?

[deleted]20 hours agocollapsed

throwaway2744815 hours ago

[flagged]

ephou7a day ago

Ulrich Drepper, Lennart Poettering, this clown. Red Hat seems to have a skill of hiring savants with high technical and low social aptitude.

devl54713 hours ago

Is it RISC-V or bloated software full of layered abstractions?

[deleted]13 hours agocollapsed
