Pringled
Show HN: Model2vec – Lightning-fast Static Embeddings for RAG/Semantic Search github.com

We’ve recently open-sourced Model2vec, a method to distill sentence transformers into static embeddings that outperform all previous approaches by a large margin on MTEB. Our new models set a new state-of-the-art for static embeddings. Main features:

- Our best model (potion-base-8M) has only 8M parameters, which is ~30 MB on disk

- Inference on CPU is ~500x faster than the base model it was distilled from (bge-base)

- New models can be distilled in 30 seconds on a CPU without requiring a dataset - just a vocabulary

- Numpy-only inference: the package can be installed with minimal dependencies for lightweight deployments

- The library is integrated with Sentence Transformers, making it easy to use with other popular libraries (a short usage sketch follows this list)
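
For concreteness, here is a sketch of the two entry points described above, following the README in the linked repo (model names and signatures are taken from there; verify against the current docs):

    from model2vec import StaticModel
    from model2vec.distill import distill

    # Load a pretrained static model (~30 MB on disk) and embed some text.
    model = StaticModel.from_pretrained("minishlab/potion-base-8M")
    embeddings = model.encode(["It's dangerous to go alone!"])

    # Or distill your own static model from a sentence transformer;
    # no dataset is needed, only the model's vocabulary.
    m2v_model = distill(model_name="BAAI/bge-base-en-v1.5", pca_dims=256)
    m2v_model.save_pretrained("my-static-model")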

We built this because we think static embeddings can provide a hardware-friendly alternative to many of the larger embedding models out there, while still being performant enough to power use cases such as RAG or semantic search. We are curious to hear your feedback, and whether there are any use cases you can think of that we have not explored yet!
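
As a concrete example of the semantic-search use case, a minimal retrieval loop over static embeddings needs nothing beyond numpy dot products. This is an illustrative sketch (it assumes encode returns numpy arrays, consistent with the numpy-only inference above; the documents and query are made up):

    import numpy as np
    from model2vec import StaticModel

    model = StaticModel.from_pretrained("minishlab/potion-base-8M")

    docs = [
        "Static embeddings are precomputed per token and pooled per sentence.",
        "Cross-encoders score query-document pairs jointly.",
        "RAG retrieves relevant context before generating an answer.",
    ]

    # Embed and L2-normalize so the dot product equals cosine similarity.
    doc_emb = model.encode(docs)
    doc_emb = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)

    query = model.encode(["how does retrieval-augmented generation work?"])[0]
    query = query / np.linalg.norm(query)

    scores = doc_emb @ query
    print(docs[int(np.argmax(scores))])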

Link to the code and results: https://github.com/MinishLab/model2vec


bturtel 2 days ago

This seems awesome for enabling RAG queries for on-device LLMs.

jerpint 3 days ago

I wonder at what point it will be ~as much overhead to pass a subset of the data through a small yet capable and fast LLM vs. using a crude dot product when doing retrieval

Pringled (OP) 3 days ago

I think a combination works quite well: first getting a small set of candidates from all the data using a lightweight model, and then using a heavy-duty model to rerank the results and get the final candidates.
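
A minimal sketch of that two-stage setup, using a static model for cheap candidate generation and a cross-encoder for reranking (the model names and corpus are illustrative choices, not the thread's recommendation):

    import numpy as np
    from model2vec import StaticModel
    from sentence_transformers import CrossEncoder

    retriever = StaticModel.from_pretrained("minishlab/potion-base-8M")
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

    docs = ["first document ...", "second document ...", "third document ..."]
    query = "fast embeddings for semantic search"

    # Stage 1: cheap dot-product retrieval over the whole corpus.
    doc_emb = retriever.encode(docs)
    doc_emb = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    q = retriever.encode([query])[0]
    q = q / np.linalg.norm(q)
    top_k = np.argsort(doc_emb @ q)[::-1][:2]

    # Stage 2: expensive pairwise reranking on the small candidate set.
    pairs = [(query, docs[i]) for i in top_k]
    rerank_scores = reranker.predict(pairs)
    best = top_k[int(np.argmax(rerank_scores))]
    print(docs[best])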

protoshell248 3 days ago

10K embeddings generated in under 700 milliseconds!!!
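
For anyone who wants to check a number like that on their own machine, a rough timing harness (hypothetical; throughput will vary with CPU and text length):

    import time
    from model2vec import StaticModel

    model = StaticModel.from_pretrained("minishlab/potion-base-8M")
    texts = ["an example sentence to embed"] * 10_000

    start = time.perf_counter()
    model.encode(texts)
    print(f"10K embeddings in {time.perf_counter() - start:.3f}s")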
