Traditional vector-based RAG often struggles with retrieval accuracy because it optimizes for similarity, but what retrieval truly needs is relevance, and relevance requires reasoning. For professional documents that demand domain expertise and multi-step reasoning, vector-based RAG and similarity search frequently fall short.
So we started exploring a more reasoning-driven approach to RAG, in which LLMs think and reason their way to the most relevant document sections. Inspired by AlphaGo, we propose using tree search to perform structured document retrieval.
We open-sourced one of the key components: PageIndex, a hierarchical document indexing system that builds search trees from long documents (such as financial reports, regulatory documents, or textbooks), making them ready for reasoning-based RAG.
Some highlights:
- Hierarchical Structure: Organizes lengthy PDFs into LLM-friendly trees — like a smart table of contents.
- Precise Referencing: Each node includes a summary and exact physical page numbers.
- Natural Segmentation: Nodes align with document sections, preserving context — no arbitrary chunking.
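To make the idea concrete, here is a minimal sketch of what such a tree and the LLM-guided search over it might look like. The node fields and the `tree_search` helper are illustrative only, not the actual PageIndex API, and the keyword matcher stands in for a real LLM relevance call:

```python
from dataclasses import dataclass, field

@dataclass
class PageIndexNode:
    """One section of the document tree (fields are illustrative)."""
    title: str
    summary: str
    start_page: int
    end_page: int
    children: list["PageIndexNode"] = field(default_factory=list)

def tree_search(node: PageIndexNode, choose) -> PageIndexNode:
    """Descend the tree, letting `choose` (an LLM relevance call in
    practice) pick the most promising child at each level."""
    while node.children:
        node = choose(node.children)
    return node

# Toy tree for an annual report.
doc = PageIndexNode("10-K", "Full annual report", 1, 120, [
    PageIndexNode("Risk Factors", "Risks to the business", 10, 30),
    PageIndexNode("Financial Statements", "Revenue, expenses, balance sheet", 60, 110, [
        PageIndexNode("Balance Sheet", "Assets and liabilities", 60, 70),
        PageIndexNode("Income Statement", "Revenue and expenses", 71, 85),
    ]),
])

# Stand-in for the LLM: a keyword match over node summaries.
query = "revenue"
choose = lambda kids: max(kids, key=lambda n: query in n.summary.lower())
leaf = tree_search(doc, choose)
print(leaf.title, leaf.start_page, leaf.end_page)  # Income Statement 71 85
```

Because each node carries a summary and exact page range, the model reasons over section descriptions rather than raw chunks, and the final answer can cite the physical pages it came from.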
We've used PageIndex for financial document analysis with reasoning-based RAG and saw significant improvements in retrieval accuracy compared to vector-based systems.
Would love any feedback — especially thoughts on reasoning-based RAG, or ideas for where PageIndex could be applied!
vectify_AI (op) 8 days ago
Github repo: github.com/VectifyAI/PageIndex
BizarroLand 8 days ago
Is there a plan to allow local hosting with Ollama, Pinokio, or LM Studio?
casenmgreen 8 days ago
Can this system explain its reasoning, and so explain its answer?
vectify_AI (op) 8 days ago
Yes, the reasons a section was judged relevant can be captured during the search and reflected in the answer.
casenmgreen 8 days ago
Looking through the repo and reading the docs, an LLM looks to be part of the implementation. LLMs cannot explain their reasoning, so if there is an LLM, then the system as a whole cannot explain its reasoning, because part of the system is a black box? Reasoning can be explained up to the point the LLM comes into play, and then afterwards, with whatever is done with the LLM output?
curl-up 8 days ago
Can you explain your reasoning?
ckrapu 8 days ago
Makes perfect sense. Looking forward to trying this.
xuqian5 8 days ago
great work mate!