Hacker News

melenaboija
A not so fast implementation of cosine similarity in C++ and SIMD (joseprupi.github.io)

Const-me 3 hours ago

> which means that not even writing C++ SIMD code will make me have a faster implementation than the one Python is using and I will probably have to write my own assembly code

I believe assembly is almost always the wrong choice in the modern world. It’s just that your SIMD version is not very efficient.

Your original SIMD version completes in 0.065 ms on my computer. Here’s an optimized version which completes in 0.038 ms, i.e. 1.7x faster: https://gist.github.com/Const-me/41b013229b20f920bcee22a856c... Note I have used 4 sets of accumulators to work around the relatively high latency of the FMA instructions.
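
For reference, the multiple-accumulator idea looks roughly like this in AVX2 intrinsics (a minimal sketch, not the code in the gist; only the dot-product sum is shown, and n is assumed to be a multiple of 32):

    #include <immintrin.h>
    #include <cstddef>

    // Four independent accumulators hide the FMA latency: each vfmadd
    // depends only on its own accumulator, so four chains run in parallel.
    // Compile with -mavx2 -mfma (or -march=native).
    float dot_product_4acc(const float* a, const float* b, size_t n) {
        __m256 acc0 = _mm256_setzero_ps(), acc1 = _mm256_setzero_ps();
        __m256 acc2 = _mm256_setzero_ps(), acc3 = _mm256_setzero_ps();
        for (size_t i = 0; i < n; i += 32) {
            acc0 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i), acc0);
            acc1 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i + 8), _mm256_loadu_ps(b + i + 8), acc1);
            acc2 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i + 16), _mm256_loadu_ps(b + i + 16), acc2);
            acc3 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i + 24), _mm256_loadu_ps(b + i + 24), acc3);
        }
        // Combine the four partial vectors once, after the loop.
        __m256 acc = _mm256_add_ps(_mm256_add_ps(acc0, acc1), _mm256_add_ps(acc2, acc3));
        // Horizontal sum of the 8 lanes.
        __m128 s = _mm_add_ps(_mm256_castps256_ps128(acc), _mm256_extractf128_ps(acc, 1));
        s = _mm_add_ps(s, _mm_movehl_ps(s, s));
        s = _mm_add_ss(s, _mm_movehdup_ps(s));
        return _mm_cvtss_f32(s);
    }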

However, I’m pretty sure the implementation used by these Python libraries is leveraging multiple CPU cores under the hood. Here’s another C++ version which does that as well; it completed in 0.0136 ms on my computer, i.e. 4.8x faster: https://gist.github.com/Const-me/c61e836bed08cef2f06783c7b11...
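
For comparison, a hypothetical sketch of the multi-core idea (not the linked gist): a plain OpenMP reduction, compiled with -fopenmp, leaving the per-thread vectorization to the compiler.

    #include <cmath>
    #include <cstddef>

    // Each thread accumulates its own partial sums over a slice of the
    // vectors; OpenMP combines them at the end of the parallel region.
    float cosine_similarity_mt(const float* a, const float* b, size_t n) {
        double dot = 0.0, aa = 0.0, bb = 0.0;
        #pragma omp parallel for reduction(+ : dot, aa, bb)
        for (long long i = 0; i < (long long)n; ++i) {
            dot += (double)a[i] * b[i];
            aa += (double)a[i] * a[i];
            bb += (double)b[i] * b[i];
        }
        return (float)(dot / std::sqrt(aa * bb));
    }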

ashvardanian 3 hours ago

The conclusion of the article isn't entirely accurate.

> Why? Because they are calculated using the BLAS library available in the OS, which means that not even writing C++ SIMD code will make me have a faster implementation than the one Python is using and I will probably have to write my own assembly code with compiler-like tricks to go as fast as Python plus C++ libraries.

Switching from C++ SIMD intrinsics to assembly won't yield significant performance improvements for cosine distance. The kernel is typically too small for reordering tricks to have much impact. In fact, you can already outperform both NumPy and SciPy with optimized intrinsics, as I discussed in a previous HN post (https://github.com/ashvardanian/simsimd?tab=readme-ov-file#c...). There's also a promising RFC in SciPy to allow pluggable backends, which could make integrating such kernels easier (https://github.com/scipy/scipy/issues/21645#issuecomment-239...).

For many-to-many distance calculations on low-dimensional vectors, the bottleneck is often in the reciprocal square roots. Using LibC is slow but highly accurate. Intrinsics are faster, but you'll need several Newton-Raphson iterations for precision, and the number of iterations varies between x86 and Arm architectures (https://github.com/ashvardanian/simsimd?tab=readme-ov-file#c...). When dealing with high-dimensional vectors, you start competing with BLAS implementations, though they, too, have limitations on newer hardware or in mixed precision scenarios.
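
For illustration, the rsqrt-plus-refinement pattern mentioned above looks roughly like this on x86 (a sketch, not SimSIMD's actual code); _mm_rsqrt_ps gives about 12 bits of precision and each Newton-Raphson step roughly doubles that:

    #include <immintrin.h>

    // One Newton-Raphson refinement of the hardware reciprocal square root:
    // y' = y * (3 - x * y^2) / 2
    __m128 rsqrt_refined(__m128 x) {
        __m128 y = _mm_rsqrt_ps(x);                       // ~12-bit estimate
        __m128 t = _mm_sub_ps(_mm_set1_ps(3.0f), _mm_mul_ps(x, _mm_mul_ps(y, y)));
        return _mm_mul_ps(_mm_mul_ps(_mm_set1_ps(0.5f), y), t);
    }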

Const-me 3 hours ago

> Intrinsics are faster, but you'll need several Newton-Raphson iterations for precision

I wonder, have you tried non-approximated intrinsics like _mm_div_ps( mul, _mm_sqrt_ps( div2 ) )?

The reason the standard library is so slow is exception handling and other edge cases. On modern CPUs, normal non-approximated FP division and square root instructions aren’t terribly slow; e.g. on my computer the FP32 square root instruction has 15 cycles latency and 0.5 cycles throughput.
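
Concretely, once the three sums are reduced to scalars, the non-approximated finish is just one hardware square root and one divide (a sketch):

    #include <immintrin.h>

    // Exact sqrt and divide, no refinement iterations needed.
    float cosine_finish(float dot, float aa, float bb) {
        __m128 den = _mm_sqrt_ss(_mm_set_ss(aa * bb));
        return _mm_cvtss_f32(_mm_div_ss(_mm_set_ss(dot), den));
    }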

adgjlsfhk1 an hour ago

Yeah, you generally can't approximate sqrt faster than computing it; sqrt is roughly as fast as division.

mkristiansen 5 hours ago

This is really interesting. I have a couple of questions, mainly stemming from the fact that the C++ code is about 2x slower than NumPy.

I had a look at the assembly generated, both in your repo and from https://godbolt.org/z/76K1eacsG

If you look at the generated assembly:

        vmovups ymm3, ymmword ptr [rdi + 4*rcx]
        vmovups ymm4, ymmword ptr [rsi + 4*rcx]
        add     rcx, 8
        vfmadd231ps     ymm2, ymm3, ymm4
        vfmadd231ps     ymm1, ymm3, ymm3
        vfmadd231ps     ymm0, ymm4, ymm4
        cmp     rcx, rax
        jb      .LBB0_10
        jmp     .LBB0_2

you are only using 5 of the YMM registers (ymm0 -- ymm4) before creating a dependency on one of the accumulators (ymm0 -- ymm2) used for the results.

I wonder if widening your step size to cover more than one 256-bit register per iteration might get you the speedup. Something like this (https://godbolt.org/z/GKExaoqcf) gets more of the YMM registers in your CPU doing work:

        vmovups ymm6, ymmword ptr [rdi + 4*rcx]
        vmovups ymm8, ymmword ptr [rsi + 4*rcx]
        vmovups ymm7, ymmword ptr [rdi + 4*rcx + 32]
        vmovups ymm9, ymmword ptr [rsi + 4*rcx + 32]
        add     rcx, 16
        vfmadd231ps     ymm5, ymm6, ymm8
        vfmadd231ps     ymm4, ymm7, ymm9
        vfmadd231ps     ymm3, ymm6, ymm6
        vfmadd231ps     ymm2, ymm7, ymm7
        vfmadd231ps     ymm1, ymm8, ymm8
        vfmadd231ps     ymm0, ymm9, ymm9
        cmp     rcx, rax
        jb      .LBB0_10
        jmp     .LBB0_2

which ends up using 10 of the registers, allowing for 6 fused multiply-adds, rather than 3, before creating a dependency on a previous result -- you might be able to go even wider.
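
In intrinsics, that wider step looks roughly like this (a sketch, assuming A, B and n are in scope and n is a multiple of 16):

    __m256 dot0 = _mm256_setzero_ps(), dot1 = _mm256_setzero_ps();
    __m256 aa0 = _mm256_setzero_ps(), aa1 = _mm256_setzero_ps();
    __m256 bb0 = _mm256_setzero_ps(), bb1 = _mm256_setzero_ps();
    for (size_t i = 0; i < n; i += 16) {
        __m256 a0 = _mm256_loadu_ps(A + i), a1 = _mm256_loadu_ps(A + i + 8);
        __m256 b0 = _mm256_loadu_ps(B + i), b1 = _mm256_loadu_ps(B + i + 8);
        dot0 = _mm256_fmadd_ps(a0, b0, dot0);   // dot product, two chains
        dot1 = _mm256_fmadd_ps(a1, b1, dot1);
        aa0 = _mm256_fmadd_ps(a0, a0, aa0);     // |A|^2, two chains
        aa1 = _mm256_fmadd_ps(a1, a1, aa1);
        bb0 = _mm256_fmadd_ps(b0, b0, bb0);     // |B|^2, two chains
        bb1 = _mm256_fmadd_ps(b1, b1, bb1);
    }
    // Pair up the partial sums afterwards, then reduce horizontally as before.
    __m256 dot = _mm256_add_ps(dot0, dot1);
    __m256 aa = _mm256_add_ps(aa0, aa1);
    __m256 bb = _mm256_add_ps(bb0, bb1);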

Again -- This was a really interesting writeup :)

encypruon 2 hours ago

> dot += A[i] * B[i];

Isn't it pretty bad for accuracy to accumulate large numbers of floats in this fashion? o.O In the example it's 640,000 numbers. log2(640,000) is ~19.3 but the significand of a float has only 23 bits plus an implicit one.
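
For illustration (not from the article), compensated (Kahan) summation keeps the error bounded regardless of length, at the cost of a few extra additions per element:

    #include <cstddef>

    // Kahan summation: 'c' carries the low-order bits each addition loses.
    float dot_kahan(const float* A, const float* B, size_t n) {
        float sum = 0.0f, c = 0.0f;
        for (size_t i = 0; i < n; ++i) {
            float y = A[i] * B[i] - c;
            float t = sum + y;
            c = (t - sum) - y;   // what was rounded away, subtracted next time
            sum = t;
        }
        return sum;
    }
Accumulating in double, or splitting the sum across several SIMD accumulators as discussed upthread, also reduces the error; note that -ffast-math is allowed to optimize the compensation away.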

worstspotgain 21 minutes ago

Interesting read. However, its ultimate conclusion is just a long way of saying that if you use ultra-optimized libraries and call them at a rate much lower than their inner-loop rate, then it doesn't matter which language you wrap them with.

This is the standard counter to "interpreted languages are slow" and is as old as interpreting itself.

neonsunset 5 hours ago

> but unless you opt to implement a processor-specific calculation in C++

Not necessarily true if you use C# (or Swift or Mojo) instead:

    static float CosineSimilarity(
        ref float a,
        ref float b,
        nuint length
    ) {
        var sdot = Vector256<float>.Zero;
        var sa = Vector256<float>.Zero;
        var sb = Vector256<float>.Zero;

        for (nuint i = 0; i < length; i += 8) {
            var bufa = Vector256.LoadUnsafe(ref a, i);
            var bufb = Vector256.LoadUnsafe(ref b, i);

            sdot = Vector256.FusedMultiplyAdd(bufa, bufb, sdot);
            sa = Vector256.FusedMultiplyAdd(bufa, bufa, sa);
            sb = Vector256.FusedMultiplyAdd(bufb, bufb, sb);
        }

        var fdot = Vector256.Sum(sdot);
        var fanorm = Vector256.Sum(sa);
        var fbnorm = Vector256.Sum(sb);

        return fdot / (MathF.Sqrt(fanorm) * MathF.Sqrt(fbnorm));
    }
Compiles to appropriate codegen quality: https://godbolt.org/z/hh16974Gd; on ARM64 it's correctly unrolled to 128x2

Edit: as the sibling comment mentioned, this benefits from unrolling, which would require swapping 256 with 512 and += 8 with += 16 in the snippet above. Although, depending on your luck, Godbolt may run this on a CPU with AVX-512, in which case you won't see the unrolling because it just picks the ZMM registers supported by the hardware instead :)

QuadmasterXLII 4 hours ago

I’m really surprised by the performance of the plain C++ version. Is automatic vectorization turned off? Frankly, this task is so common that I would half expect compilers to have a hard-coded special case specifically for fast dot products.

Edit: Yeah, when I compile the “plain C++” with clang, the main loop is 8 vmovups, 16 vfmadd231ps, and an add/cmp/jne. OP forgot some flags.

mshockwave an hour ago

which flags did you use and which compiler version?

QuadmasterXLII an hour ago

clang 19, -O3 -ffast-math -march=native

mshockwave an hour ago

can confirm fast math makes the biggest difference

mkristiansen 40 minutes ago

Fast math basically means “who cares about the standard, just add in whatever order you want” :)
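
For reference, the kind of scalar reduction being discussed (a guess at the shape of the article's "plain C++", not its exact code): without -ffast-math the compiler must preserve the serial order of the float additions, so it can't split each sum into parallel SIMD accumulators; with clang -O3 -ffast-math -march=native it vectorizes and unrolls the loop.

    #include <cstddef>

    // Plain scalar reduction; reassociation (permitted by -ffast-math)
    // is what lets the compiler turn each running sum into SIMD accumulators.
    void cosine_sums(const float* A, const float* B, size_t n,
                     float& dot, float& aa, float& bb) {
        dot = aa = bb = 0.0f;
        for (size_t i = 0; i < n; ++i) {
            dot += A[i] * B[i];
            aa += A[i] * A[i];
            bb += B[i] * B[i];
        }
    }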
