
Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks

David Bieber, Charles Sutton, Hugo Larochelle, Daniel Tarlow

2020-10-23 · NeurIPS 2020
Tasks: Program Repair · Program Synthesis · Code Completion · Systematic Generalization
Paper · PDF · Code (official)

Abstract

Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks including code completion, bug finding, and program repair. They benefit from leveraging program structure like control flow graphs, but they are not well-suited to tasks like program execution that require far more sequential reasoning steps than the number of GNN propagation steps. Recurrent neural networks (RNNs), on the other hand, are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure and generally perform worse on the above tasks. Our aim is to achieve the best of both worlds, and we do so by introducing a novel GNN architecture, the Instruction Pointer Attention Graph Neural Network (IPA-GNN), which achieves improved systematic generalization on the task of learning to execute programs using control flow graphs. The model arises by considering RNNs operating on program traces with branch decisions as latent variables. The IPA-GNN can be seen either as a continuous relaxation of the RNN model or as a GNN variant more tailored to execution. To test the models, we propose evaluating systematic generalization on learning to execute using control flow graphs, which tests sequential reasoning and use of program structure. More practically, we evaluate these models on the task of learning to execute partial programs, as might arise if using the model as a heuristic function in program synthesis. Results show that the IPA-GNN outperforms a variety of RNN and GNN baselines on both tasks.
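To make the abstract's core idea concrete, the following is a minimal NumPy sketch of one propagation step of an IPA-GNN-style model: a soft instruction pointer distributes probability mass over control-flow-graph successors via a learned branch attention, and hidden states flow along the same soft edges. This is an illustrative sketch under assumed shapes, not the authors' implementation; the function names, the two-successor encoding, and the use of a plain `tanh` in place of a real RNN cell are all assumptions for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ipa_gnn_step(p, h, succ, W_h, W_b):
    """One propagation step of an IPA-GNN-style model (simplified sketch).

    p    : (n,)   soft instruction pointer over the n CFG nodes
    h    : (n, d) per-node hidden states
    succ : (n, 2) indices of each node's (true-branch, false-branch) successors
    W_h  : (d, d) hidden-state update weights (stand-in for the RNN cell)
    W_b  : (d, 2) branch-decision attention weights
    """
    n, d = h.shape
    # 1. Update each node's hidden state (the real model applies an RNN cell
    #    to the node's statement embedding; tanh is a stand-in here).
    h_new = np.tanh(h @ W_h)
    # 2. Soft branch decision: attention over each node's two successors.
    b = softmax(h_new @ W_b)                      # (n, 2), rows sum to 1
    # 3. Propagate pointer mass and hidden state along CFG edges.
    p_next = np.zeros(n)
    h_next = np.zeros((n, d))
    for i in range(n):
        for k in (0, 1):
            j = succ[i, k]
            w = p[i] * b[i, k]                    # mass sent from i to j
            p_next[j] += w
            h_next[j] += w * h_new[i]
    # Normalize aggregated states by incoming pointer mass.
    nz = p_next > 0
    h_next[nz] /= p_next[nz, None]
    return p_next, h_next
```

Because each row of the branch attention sums to one, the instruction pointer stays a valid probability distribution across steps, which is what makes this a continuous relaxation of an RNN executing one trace with hard branch decisions.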

Related Papers

- CoRE: Enhancing Metacognition with Label-free Self-evaluation in LRMs (2025-07-08)
- CORE: Benchmarking LLMs Code Reasoning Capabilities through Static Analysis Tasks (2025-07-03)
- $T^3$: Multi-level Tree-based Automatic Program Repair with Large Language Models (2025-06-26)
- Beyond Autocomplete: Designing CopilotLens Towards Transparent and Explainable AI Coding Agents (2025-06-24)
- Understanding Software Engineering Agents: A Study of Thought-Action-Result Trajectories (2025-06-23)
- Plan for Speed -- Dilated Scheduling for Masked Diffusion Language Models (2025-06-23)
- Dissecting the SWE-Bench Leaderboards: Profiling Submitters and Architectures of LLM- and Agent-Based Repair Systems (2025-06-20)
- SemAgent: A Semantics Aware Program Repair Agent (2025-06-19)