Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs

Yue Yu, Wei Ping, Zihan Liu, Boxin Wang, Jiaxuan You, Chao Zhang, Mohammad Shoeybi, Bryan Catanzaro

2024-07-02 · Question Answering · Retrieval · Answer Generation · RAG
Paper · PDF

Abstract

Large language models (LLMs) typically utilize the top-k contexts from a retriever in retrieval-augmented generation (RAG). In this work, we propose a novel instruction fine-tuning framework RankRAG, which instruction-tunes a single LLM for the dual purpose of context ranking and answer generation in RAG. In particular, the instruction-tuned LLMs work surprisingly well by adding a small fraction of ranking data into the training blend, and outperform existing expert ranking models, including the same LLM exclusively fine-tuned on a large amount of ranking data. For generation, we compare our model with many strong baselines, including GPT-4-0613, GPT-4-turbo-2024-0409, and ChatQA-1.5, an open-sourced model with the state-of-the-art performance on RAG benchmarks. Specifically, our Llama3-RankRAG significantly outperforms Llama3-ChatQA-1.5 and GPT-4 models on nine knowledge-intensive benchmarks. In addition, it also performs comparably to GPT-4 on five RAG benchmarks in the biomedical domain without instruction fine-tuning on biomedical data, demonstrating its superb capability for generalization to new domains.
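The abstract describes a single model that both ranks the retriever's top-k contexts and generates the answer. A minimal sketch of that rerank-then-generate inference flow is below; `score_context` and `generate_answer` are hypothetical stand-ins (token overlap and a template string) for the single instruction-tuned LLM the paper actually uses for both steps.

```python
import re


def _tokens(text: str) -> set[str]:
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))


def score_context(question: str, context: str) -> float:
    """Stand-in relevance score: fraction of question tokens found in the context.
    In RankRAG this role is played by the same instruction-tuned LLM."""
    q = _tokens(question)
    return len(q & _tokens(context)) / max(len(q), 1)


def rerank(question: str, contexts: list[str], keep: int = 2) -> list[str]:
    """Rank the retriever's top-k contexts and keep only the best `keep`."""
    ranked = sorted(contexts, key=lambda c: score_context(question, c), reverse=True)
    return ranked[:keep]


def generate_answer(question: str, contexts: list[str]) -> str:
    """Stand-in for answer generation conditioned on the kept contexts."""
    return f"Answer to {question!r} using {len(contexts)} context(s)."


question = "Who wrote Hamlet?"
retrieved = [
    "The Globe Theatre opened in 1599.",
    "Hamlet was written by William Shakespeare.",
    "Stratford-upon-Avon is a market town.",
]
top = rerank(question, retrieved, keep=1)
print(generate_answer(question, top))
```

The point of the sketch is the control flow, not the scorer: the retriever over-fetches, a ranking pass filters, and generation sees only the surviving contexts.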

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Question Answering | Natural Questions | EM | 54.2 | RankRAG-llama3-70b (Zero-Shot, KILT) |
| Question Answering | Natural Questions | EM | 50.6 | RankRAG-llama3-8b (Zero-Shot, KILT) |
| Question Answering | Natural Questions | EM | 50.0 | RankRAG-llama3-70b (Zero-Shot, DPR) |
| Question Answering | Natural Questions | EM | 46.1 | RankRAG-llama3-8b (Zero-Shot, DPR) |
| Question Answering | PubMedQA | Accuracy | 79.8 | RankRAG-llama3-70B (Zero-Shot) |
| Question Answering | TriviaQA | EM | 86.5 | RankRAG-llama3-70b (Zero-Shot, KILT) |
| Question Answering | TriviaQA | EM | 82.9 | RankRAG-llama3-8b (Zero-Shot, KILT) |
| Question Answering | TriviaQA | EM | 72.6 | RankRAG-llama3-70b (Zero-Shot, DPR) |
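The EM values above are exact-match scores. A minimal sketch of the normalization commonly used for open-domain QA benchmarks like Natural Questions and TriviaQA (lowercase, strip punctuation and English articles) is shown below; the exact normalization used for this leaderboard may differ.

```python
import re
import string


def normalize(text: str) -> str:
    """Common EM normalization: lowercase, drop punctuation and a/an/the."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """A prediction scores 1 if it matches any gold answer after normalization."""
    return normalize(prediction) in {normalize(a) for a in gold_answers}


# EM over a tiny hypothetical evaluation set: percentage of exact matches.
examples = [
    ("The Eiffel Tower", ["Eiffel Tower"]),   # matches after dropping "the"
    ("Paris, France", ["Paris"]),             # no match: extra token survives
]
em = 100 * sum(exact_match(p, golds) for p, golds in examples) / len(examples)
print(em)  # 50.0
```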

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)