Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LEVER: Learning to Verify Language-to-Code Generation with Execution

Ansong Ni, Srini Iyer, Dragomir Radev, Ves Stoyanov, Wen-tau Yih, Sida I. Wang, Xi Victoria Lin

2023-02-16 · Semantic Parsing · Math · Reranking · Text-To-SQL · Arithmetic Reasoning · Code Generation

Paper · PDF · Code (official)

Abstract

The advent of large language models trained on code (code LLMs) has led to significant progress in language-to-code generation. State-of-the-art approaches in this area combine LLM decoding with sample pruning and reranking using test cases or heuristics based on the execution results. However, it is challenging to obtain test cases for many real-world language-to-code applications, and heuristics cannot fully capture the semantic features of the execution results, such as data type and value range, which often indicate the correctness of the program. In this work, we propose LEVER, a simple approach to improve language-to-code generation by learning to verify the generated programs with their execution results. Specifically, we train verifiers to determine whether a program sampled from the LLMs is correct or not based on the natural language input, the program itself, and its execution results. The sampled programs are reranked by combining the verification score with the LLM generation probability, and marginalizing over programs with the same execution results. On four datasets across the domains of table QA, math QA, and basic Python programming, LEVER consistently improves over the base code LLMs (4.6% to 10.9% with code-davinci-002) and achieves new state-of-the-art results on all of them.
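The reranking step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each sample already carries its LM log-probability, its execution result, and a verifier probability, and all names (`rerank`, the tuple layout) are hypothetical.

```python
import math
from collections import defaultdict

def rerank(samples):
    """LEVER-style reranking sketch: combine the LM generation probability
    with the verifier's correctness probability, then marginalize over
    programs whose execution results agree.

    Each sample is a tuple (program, lm_logprob, exec_result, verifier_prob).
    """
    result_score = defaultdict(float)  # marginal score per execution result
    best_in_group = {}                 # best single program per execution result
    for program, lm_logprob, exec_result, v_prob in samples:
        # Joint score: p_LM(program | input) * p_verifier(correct | input, program, result)
        joint = math.exp(lm_logprob) * v_prob
        # Marginalize: programs with the same execution result pool their scores.
        result_score[exec_result] += joint
        if exec_result not in best_in_group or joint > best_in_group[exec_result][1]:
            best_in_group[exec_result] = (program, joint)
    # Pick the execution result with the highest marginal score,
    # then return the highest-scoring program that produced it.
    top_result = max(result_score, key=result_score.get)
    return best_in_group[top_result][0]
```

For example, two mediocre programs that agree on the same execution result can together outrank a single higher-probability program with a different result, which is the point of marginalizing over execution results.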

Results

Task | Dataset | Metric | Value | Model
--- | --- | --- | --- | ---
Code Generation | MBPP | Accuracy | 68.9 | code-davinci-002 175B + LEVER
Semantic Parsing | Spider | Accuracy | 81.9 | code-davinci-002 175B (LEVER)
Semantic Parsing | WikiTableQuestions | Accuracy (Dev) | 64.6 | LEVER
Semantic Parsing | WikiTableQuestions | Accuracy (Test) | 65.8 | LEVER
Semantic Parsing | Spider | Execution Accuracy (Dev) | 81.9 | code-davinci-002 175B (LEVER)
Text-To-SQL | Spider | Execution Accuracy (Dev) | 81.9 | code-davinci-002 175B (LEVER)
Arithmetic Reasoning | GSM8K | Accuracy | 84.5 | code-davinci-002 175B (LEVER, 8-shot)
Arithmetic Reasoning | GSM8K | Parameters (Billion) | 175 | code-davinci-002 175B (LEVER, 8-shot)

Related Papers

- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
- QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
- Towards Formal Verification of LLM-Generated Code from Natural Language Prompts (2025-07-17)
- Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training (2025-07-16)
- MERA Code: A Unified Framework for Evaluating Code Generation Across Tasks (2025-07-16)
- Temperature and Persona Shape LLM Agent Consensus With Minimal Accuracy Gains in Qualitative Coding (2025-07-15)
- Personalized Exercise Recommendation with Semantically-Grounded Knowledge Tracing (2025-07-15)