Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Latent Predictor Networks for Code Generation

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin Wang, Phil Blunsom

Published: 2016-03-22 · ACL 2016
Tasks: Card Games · Text Generation · Code Generation
Links: Paper · PDF · Code (official) · Code

Abstract

Many language generation tasks require the production of text conditioned on both structured and unstructured inputs. We present a novel neural network architecture which generates an output sequence conditioned on an arbitrary number of input functions. Crucially, our approach allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training. Using this framework, we address the problem of generating programming code from a mixed natural language and structured specification. We create two new data sets for this paradigm derived from the collectible trading card games Magic the Gathering and Hearthstone. On these, and a third preexisting corpus, we demonstrate that marginalising multiple predictors allows our model to outperform strong benchmarks.
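The core idea of marginalising over multiple predictors can be sketched as follows: at each step, the probability of emitting a token is a mixture over several candidate predictors (for example, a word-generation softmax and a copy mechanism over the input specification), weighted by a learned predictor-choice distribution. This is a minimal illustrative sketch, not the paper's implementation; the function name, the two toy predictors, and all probabilities below are invented for illustration.

```python
def marginal_token_prob(token, predictor_weights, predictor_dists):
    """Marginal probability of a token over several predictors:
    p(token) = sum_k p(predictor k) * p_k(token).

    predictor_weights: probabilities p(k | context), summing to 1
    predictor_dists:   one dict per predictor mapping token -> p_k(token | context)
    """
    total = 0.0
    for weight, dist in zip(predictor_weights, predictor_dists):
        # A predictor assigns zero mass to tokens outside its support.
        total += weight * dist.get(token, 0.0)
    return total


# Toy example: a generation softmax over code tokens, and a copy
# predictor over names appearing in a (hypothetical) card description.
generate = {"def": 0.6, "class": 0.3, "Fireball": 0.1}
copy_from_input = {"Fireball": 0.9, "Hearthstone": 0.1}

# p("Fireball") = 0.7 * 0.1 + 0.3 * 0.9 = 0.34
p = marginal_token_prob("Fireball", [0.7, 0.3], [generate, copy_from_input])
```

Because the sum over predictors is differentiable, the model can be trained by maximising this marginal likelihood directly, without supervision on which predictor produced each token.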

Results

Task            | Dataset | Metric     | Value | Model
Code Generation | Django  | Accuracy   | 62.3  | LPN (Ling et al., 2016)
Code Generation | Django  | BLEU Score | 77.6  | LPN (Ling et al., 2016)
Code Generation | Django  | Accuracy   | 31.5  | Phrasal Statistical MT (Ling et al., 2016)
Code Generation | Django  | BLEU Score | 47.6  | Phrasal Statistical MT (Ling et al., 2016)

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
Towards Formal Verification of LLM-Generated Code from Natural Language Prompts (2025-07-17)
Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)
MERA Code: A Unified Framework for Evaluating Code Generation Across Tasks (2025-07-16)
Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training (2025-07-16)
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs (2025-07-15)
Seq vs Seq: An Open Suite of Paired Encoders and Decoders (2025-07-15)