


Instruction-Following Evaluation for Large Language Models

Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, Le Hou

2023-11-14 · Instruction Following

Abstract

One core capability of Large Language Models (LLMs) is to follow natural language instructions. However, the evaluation of such abilities is not standardized: Human evaluations are expensive, slow, and not objectively reproducible, while LLM-based auto-evaluation is potentially biased or limited by the ability of the evaluator LLM. To overcome these issues, we introduce Instruction-Following Eval (IFEval) for large language models. IFEval is a straightforward and easy-to-reproduce evaluation benchmark. It focuses on a set of "verifiable instructions" such as "write in more than 400 words" and "mention the keyword of AI at least 3 times". We identified 25 types of those verifiable instructions and constructed around 500 prompts, with each prompt containing one or more verifiable instructions. We show evaluation results of two widely available LLMs on the market. Our code and data can be found at https://github.com/google-research/google-research/tree/master/instruction_following_eval
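To make the "verifiable" idea concrete: each instruction type maps to a deterministic check over the model's response text, so scoring needs neither human raters nor a judge LLM. The sketch below is illustrative only, not the official implementation linked above; the function names and exact matching rules are assumptions. It shows how the two example instructions quoted in the abstract could be verified programmatically:

```python
import re

def check_min_words(response: str, min_words: int = 400) -> bool:
    # "Write in more than 400 words": a strict word-count threshold.
    return len(response.split()) > min_words

def check_keyword_count(response: str, keyword: str = "AI", min_count: int = 3) -> bool:
    # "Mention the keyword of AI at least 3 times": count whole-word matches.
    return len(re.findall(rf"\b{re.escape(keyword)}\b", response)) >= min_count

# A prompt may carry several verifiable instructions; each is scored
# independently by its own deterministic check.
response = "AI helps here. AI helps there. AI helps everywhere."  # placeholder model output
print(check_min_words(response), check_keyword_count(response))
```

Because every check is a pure function of the response string, any two runs over the same outputs produce identical scores, which is what makes the benchmark cheap and reproducible compared with human or LLM-based evaluation.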

Results

Task                  | Dataset | Metric                        | Value | Model
----------------------|---------|-------------------------------|-------|---------
Instruction Following | IFEval  | Inst-level loose-accuracy     | 85.37 | GPT-4
Instruction Following | IFEval  | Inst-level strict-accuracy    | 83.57 | GPT-4
Instruction Following | IFEval  | Prompt-level loose-accuracy   | 79.3  | GPT-4
Instruction Following | IFEval  | Prompt-level strict-accuracy  | 76.89 | GPT-4
Instruction Following | IFEval  | Inst-level loose-accuracy     | 59.11 | PaLM 2 S
Instruction Following | IFEval  | Inst-level strict-accuracy    | 55.76 | PaLM 2 S
Instruction Following | IFEval  | Prompt-level loose-accuracy   | 46.95 | PaLM 2 S
Instruction Following | IFEval  | Prompt-level strict-accuracy  | 43.07 | PaLM 2 S
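The four metrics differ along two axes. Prompt-level accuracy credits a prompt only if every instruction it contains is satisfied, while instruction-level accuracy credits each instruction independently; "strict" checks the raw response, while "loose" first applies lenient transformations (e.g., stripping markdown markers or the first/last line) and passes if any variant succeeds. A minimal sketch of the two aggregation levels, assuming per-instruction boolean results (this is not the official IFEval code):

```python
def aggregate(per_prompt: list[list[bool]]) -> dict[str, float]:
    # per_prompt[i] holds one boolean per verifiable instruction in prompt i.
    n_prompts = len(per_prompt)
    n_insts = sum(len(r) for r in per_prompt)
    return {
        # Prompt-level: all instructions in a prompt must pass for it to count.
        "prompt_level_acc": sum(all(r) for r in per_prompt) / n_prompts,
        # Inst-level: every instruction counts on its own.
        "inst_level_acc": sum(sum(r) for r in per_prompt) / n_insts,
    }

# Example: two prompts; the first has two instructions, one of which fails.
print(aggregate([[True, False], [True]]))
# -> {'prompt_level_acc': 0.5, 'inst_level_acc': 0.666...}
```

This explains why instruction-level numbers are always at least as high as prompt-level ones in the table above: a single failed instruction sinks the whole prompt at prompt level but costs only one count at instruction level.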

Related Papers

AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning (2025-07-17)
How Many Instructions Can LLMs Follow at Once? (2025-07-15)
DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering (2025-07-15)
Multilingual Multimodal Software Developer for Code Generation (2025-07-11)
TuneShield: Mitigating Toxicity in Conversational AI while Fine-tuning on Untrusted Data (2025-07-08)
DeSTA2.5-Audio: Toward General-Purpose Large Audio Language Model with Self-Generated Cross-Modal Alignment (2025-07-03)
Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks (2025-07-03)
Kwai Keye-VL Technical Report (2025-07-02)