
RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale

Beck LaBash, August Rosedale, Alex Reents, Lucas Negritto, Colin Wiel

2024-06-24 · Instruction Following · Navigate · Large Language Model · Code Generation · Language Modelling · HumanEval
Paper · PDF · Code (official)

Abstract

The instruction-following ability of Large Language Models (LLMs) has cultivated a class of LLM-based systems capable of approaching complex tasks such as making edits to large code repositories. Due to the high sensitivity and unpredictability of LLM behavior in response to changes in prompting, robust evaluation tools are needed to drive future iteration of these systems. We propose RES-Q, a natural language instruction-based benchmark for evaluating $\textbf{R}$epository $\textbf{E}$diting $\textbf{S}$ystems, which consists of 100 handcrafted repository editing tasks derived from real GitHub commits. Given an edit instruction and a code repository, RES-Q evaluates an LLM system's ability to interpret the instruction, navigate the repository to gather relevant information, and construct an appropriate edit that satisfies the specified criteria. We argue that evaluating LLMs in this way addresses issues with traditional benchmarks and provides a more holistic assessment of a model's abilities. We evaluate various state-of-the-art LLMs as language agents in a repository-editing system built on Qurrent OS, our language agent development software. Despite their 1% pass@1 performance difference on HumanEval, we find Claude Sonnet 3.5 outperforms GPT-4o by 12% pass@1 on RES-Q, indicating RES-Q's capacity to differentiate model capability as traditional benchmarks approach saturation. We further investigate token efficiency, performance relationships with existing benchmarks, and interesting disparities between closed and open-source LLMs. Code and dataset are available at https://github.com/Qurrent-AI/RES-Q.
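The evaluation recipe described above (interpret an instruction, navigate and edit a repository snapshot, check the edit against the task's criteria) can be pictured as a short harness. The sketch below is illustrative only: the task fields (`instruction`, `repo`), the `resq_tasks.json` file name, and the pytest-based check are assumptions, not the actual RES-Q schema, which is defined in the linked GitHub repository.

```python
# Illustrative sketch of a RES-Q-style evaluation loop. Field names, file
# names, and the pytest check are assumptions; the real schema and harness
# live at https://github.com/Qurrent-AI/RES-Q.
import json
import subprocess
from pathlib import Path
from typing import Callable

# A repository-editing system: given a natural-language instruction and a
# repository snapshot, it returns a unified diff to apply.
EditSystem = Callable[[str, Path], str]

def evaluate_task(task: dict, system: EditSystem) -> bool:
    """Run one repository-editing task; True if the edit passes its check."""
    repo_dir = Path(task["repo"])  # snapshot derived from a real GitHub commit
    patch = system(task["instruction"], repo_dir)

    # Apply the proposed edit to the snapshot; a malformed patch fails the task.
    applied = subprocess.run(["git", "apply", "-"], input=patch, text=True,
                             cwd=repo_dir)
    if applied.returncode != 0:
        return False

    # The task passes if the repository's checking suite accepts the edit.
    check = subprocess.run(["python", "-m", "pytest", "-q"], cwd=repo_dir)
    return check.returncode == 0

def run_benchmark(system: EditSystem, task_file: str = "resq_tasks.json") -> float:
    """Fraction of tasks solved on the first attempt (pass@1)."""
    tasks = json.loads(Path(task_file).read_text())
    return sum(evaluate_task(t, system) for t in tasks) / len(tasks)
```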

Results

Task             Dataset  Metric  Value (%)  Model
Code Generation  RES-Q    pass@1  58         QurrentOS-coder + Claude 3.5 Sonnet
Code Generation  RES-Q    pass@1  46         QurrentOS-coder + GPT-4o
Code Generation  RES-Q    pass@1  37         QurrentOS-coder + GPT-4 Turbo
Code Generation  RES-Q    pass@1  36         QurrentOS-coder + Claude 3 Opus
Code Generation  RES-Q    pass@1  30         QurrentOS-coder + GPT-4
Code Generation  RES-Q    pass@1  30         QurrentOS-coder + Gemini 1.5 Pro
Code Generation  RES-Q    pass@1  29         QurrentOS-coder + DeepSeek-Coder-V2
Code Generation  RES-Q    pass@1  20         QurrentOS-coder + Llama 3 70b
Code Generation  RES-Q    pass@1  18         QurrentOS-coder + Qwen-72B-Instruct
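Since RES-Q contains 100 tasks and each is attempted once, the pass@1 values above are simply the number of tasks solved: a score of 58 means 58 of 100 tasks passed. For reference, here is a small sketch of the standard unbiased pass@k estimator from Chen et al. (2021), which reduces to the plain pass fraction in the single-sample (n = k = 1) setting used here; the hard-coded 58/42 split is just the Claude 3.5 Sonnet row above, used as an example.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k (Chen et al., 2021): probability that at least one of
    k samples is correct, given c correct among n generated for a task."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With one attempt per task, pass@1 is the fraction of tasks solved:
per_task = [1] * 58 + [0] * 42              # e.g. 58 of 100 RES-Q tasks pass
score = sum(pass_at_k(1, c, 1) for c in per_task) / len(per_task)
print(f"pass@1 = {score:.0%}")              # -> pass@1 = 58%
```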

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
DENSE: Longitudinal Progress Note Generation with Temporal Modeling of Heterogeneous Clinical Notes Across Hospital Visits (2025-07-18)
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning (2025-07-17)
GeoReg: Weight-Constrained Few-Shot Regression for Socio-Economic Estimation using LLM (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities (2025-07-17)