Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Planning-Driven Programming: A Large Language Model Programming Workflow

Chao Lei, Yanchuan Chang, Nir Lipovetzky, Krista A. Ehinger

2024-11-21 · Text-to-Code Generation · Program Repair · Large Language Model · Code Generation · Language Modelling · HumanEval

Paper · PDF · Code (official)

Abstract

The strong performance of large language models (LLMs) has prompted extensive discussion of their application to code generation. Recent research proposes continuous program refinement against visible tests to improve code generation accuracy in LLMs. However, these methods suffer from LLMs' inefficiency and limited reasoning capacity. In this work, we propose an LLM programming workflow (LPW) designed to improve both initial code generation and subsequent refinements within a structured two-phase workflow. Specifically, the solution generation phase formulates a solution plan, which is then verified through visible tests to specify the intended natural-language solution. Subsequently, the code implementation phase drafts initial code according to the solution plan and its verification. If the generated code fails the visible tests, the plan verification serves as the intended solution to consistently inform the refinement process for correcting bugs. Compared to state-of-the-art methods across various existing LLMs, LPW significantly improves Pass@1 accuracy by up to 16.4% on well-established text-to-code generation benchmarks. LPW also sets new state-of-the-art Pass@1 accuracy, achieving 98.2% on HumanEval, 84.8% on MBPP, 59.3% on LiveCode, 62.6% on APPS, and 34.7% on CodeContest, using GPT-4o as the backbone. Our code is publicly available at: https://github.com/you68681/lpw
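The two-phase workflow described above can be sketched in a few lines. This is a minimal illustration under assumptions, not the authors' implementation: `generate_plan`, `verify_plan`, `generate_code`, and `refine_code` are hypothetical stand-ins for LLM prompts, and `run_test` is a toy harness for the visible tests.

```python
def run_test(code, test):
    """Toy harness: exec the candidate code, then one visible test assertion."""
    env = {}
    try:
        exec(code, env)
        exec(test, env)
        return True
    except Exception:
        return False


def lpw(problem, visible_tests, generate_plan, verify_plan,
        generate_code, refine_code, max_refinements=3):
    # Phase 1 (solution generation): draft a plan, then verify it against
    # the visible tests to obtain the intended natural-language solution.
    plan = generate_plan(problem)
    verification = verify_plan(plan, visible_tests)

    # Phase 2 (code implementation): draft code from the plan and its
    # verification, then refine against failing visible tests, with the
    # verification serving as the intended solution during refinement.
    code = generate_code(problem, plan, verification)
    for _ in range(max_refinements):
        failures = [t for t in visible_tests if not run_test(code, t)]
        if not failures:
            break
        code = refine_code(code, failures, verification)
    return code
```

With trivial stubs in place of the LLM calls (e.g. a stub `refine_code` that returns a corrected `add` function), the loop drafts code, detects the failing visible test, and returns the refined version.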

Results

Task | Dataset | Metric | Value | Model
Code Generation | HumanEval | Pass@1 | 98.2 | LPW (GPT-4o)
Code Generation | HumanEval-ET | Pass@1 | 65.8 | LPW (GPT-4o)
Code Generation | MBPP | Accuracy | 84.8 | LPW (GPT-4o)
Code Generation | MBPP-ET | Pass@1 | 65.8 | LPW (GPT-4o)
Code Generation | APPS | Pass@1 | 62.6 | LPW (GPT-4o)
Code Generation | APPS | Introductory Pass@1 | 87.2 | LPW (GPT-4o)
Code Generation | APPS | Interview Pass@1 | 65.2 | LPW (GPT-4o)
Code Generation | APPS | Competition Pass@1 | 34.8 | LPW (GPT-4o)
Code Generation | CodeContests | Test Set Pass@1 | 34.7 | LPW (GPT-4o)
Code Generation | LiveCodeBench | Pass@1 | 59.3 | LPW (GPT-4o)

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
DENSE: Longitudinal Progress Note Generation with Temporal Modeling of Heterogeneous Clinical Notes Across Hospital Visits (2025-07-18)
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
GeoReg: Weight-Constrained Few-Shot Regression for Socio-Economic Estimation using LLM (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities (2025-07-17)
Towards Formal Verification of LLM-Generated Code from Natural Language Prompts (2025-07-17)