Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica

Published: 2023-06-09 · NeurIPS 2023
Tasks: Long-Context Understanding · Chatbot · Large Language Model · Language Modelling
Links: Paper · PDF · 11 code implementations (1 official)

Abstract

Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, we explore using strong LLMs as judges to evaluate these models on more open-ended questions. We examine the usage and limitations of LLM-as-a-judge, including position, verbosity, and self-enhancement biases, as well as limited reasoning ability, and propose solutions to mitigate some of them. We then verify the agreement between LLM judges and human preferences by introducing two benchmarks: MT-bench, a multi-turn question set; and Chatbot Arena, a crowdsourced battle platform. Our results reveal that strong LLM judges like GPT-4 can match both controlled and crowdsourced human preferences well, achieving over 80% agreement, which is the same level of agreement as between humans. Hence, LLM-as-a-judge is a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain. Additionally, we show that our benchmark and traditional benchmarks complement each other by evaluating several variants of LLaMA and Vicuna. The MT-bench questions, 3K expert votes, and 30K conversations with human preferences are publicly available at https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge.
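The position-bias mitigation mentioned in the abstract can be sketched as a small pairwise-comparison loop: present the same answer pair to the judge in both orders and accept a verdict only when the two orderings agree, otherwise declare a tie. This is a minimal illustration, not the paper's exact implementation; `judge_fn` is a hypothetical stand-in for a real LLM call (e.g. GPT-4), and the prompt text below is a simplified placeholder rather than the released MT-bench judge prompt.

```python
# Minimal sketch of pairwise LLM-as-a-judge with position-swap
# de-biasing. `judge_fn` is a hypothetical callable standing in for a
# real LLM API call; it must return "A", "B", or "tie" for the pair
# exactly as presented in the prompt.

JUDGE_PROMPT = (
    "Please act as an impartial judge and evaluate the quality of the "
    "responses provided by two AI assistants to the user question below. "
    "Output 'A', 'B', or 'tie'.\n\n"
    "Question: {question}\n\n"
    "Assistant A: {answer_a}\n\n"
    "Assistant B: {answer_b}"
)

def judge_pair(judge_fn, question, answer_a, answer_b):
    """Return 'A', 'B', or 'tie' for (answer_a, answer_b), querying the
    judge with both answer orderings to control for position bias."""
    first = judge_fn(JUDGE_PROMPT.format(
        question=question, answer_a=answer_a, answer_b=answer_b))
    # Present the same pair again with positions swapped.
    second = judge_fn(JUDGE_PROMPT.format(
        question=question, answer_a=answer_b, answer_b=answer_a))
    # Map the second verdict back into the original labeling.
    swapped = {"A": "B", "B": "A", "tie": "tie"}[second]
    # Only a verdict that is consistent across both orders counts.
    return first if first == swapped else "tie"
```

A judge that always prefers whichever answer appears first (pure position bias) yields "tie" under this protocol, while a judge whose preference is based on content returns the same winner regardless of ordering.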

Results

(The Metric column gives the evaluated context length; each Value is the model's score on the Ada-LEval subtask at that length.)

Task | Dataset | Metric | Value | Model
Long-Context Understanding | Ada-LEval (BestAnswer) | 12k | 1.4 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 16k | 0.9 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 1k | 53.4 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 2k | 29.2 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 4k | 13.1 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 6k | 4.3 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 8k | 2.2 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 12k | 1.9 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 16k | 1 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 1k | 37 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 2k | 11.1 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 4k | 5.8 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 6k | 3.2 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 8k | 1.8 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (BestAnswer) | 12k | 1.6 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (BestAnswer) | 16k | 0.8 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (BestAnswer) | 1k | 32.4 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (BestAnswer) | 2k | 10.7 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (BestAnswer) | 4k | 5.7 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (BestAnswer) | 6k | 3.1 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (BestAnswer) | 8k | 1.9 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (TSort) | 16k | 3.1 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (TSort) | 2k | 5.4 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (TSort) | 4k | 5 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (TSort) | 8k | 2.4 | Vicuna-13b-v1.5-16k
Long-Context Understanding | Ada-LEval (TSort) | 16k | 2.5 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (TSort) | 2k | 5.3 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (TSort) | 4k | 5 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (TSort) | 8k | 3.1 | LongChat-7b-v1.5-32k
Long-Context Understanding | Ada-LEval (TSort) | 16k | 1.7 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (TSort) | 2k | 5.3 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (TSort) | 4k | 2.2 | Vicuna-7b-v1.5-16k
Long-Context Understanding | Ada-LEval (TSort) | 8k | 2.3 | Vicuna-7b-v1.5-16k

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- DENSE: Longitudinal Progress Note Generation with Temporal Modeling of Heterogeneous Clinical Notes Across Hospital Visits (2025-07-18)
- GeoReg: Weight-Constrained Few-Shot Regression for Socio-Economic Estimation using LLM (2025-07-17)
- The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
- Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities (2025-07-17)
- Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)