Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Deep Residual Output Layers for Neural Language Generation

Nikolaos Pappas, James Henderson

2019-05-14 · Machine Translation · Text Generation · Language Modelling

Paper · PDF · Code (official)

Abstract

Many tasks, including language generation, benefit from learning the structure of the output space, particularly when the space of output labels is large and the data is sparse. State-of-the-art neural language models indirectly capture the output space structure in their classifier weights since they lack parameter sharing across output labels. Learning shared output label mappings helps, but existing methods have limited expressivity and are prone to overfitting. In this paper, we investigate the usefulness of more powerful shared mappings for output labels, and propose a deep residual output mapping with dropout between layers to better capture the structure of the output space and avoid overfitting. Evaluations on three language generation tasks show that our output label mapping can match or improve state-of-the-art recurrent and self-attention architectures, and suggest that the classifier does not necessarily need to be high-rank to better model natural language if it is better at capturing the structure of the output space.
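The core idea — mapping shared output label embeddings through a deep residual network with dropout between layers, then scoring labels against the decoder's hidden state — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact architecture: the function names, the two-block depth, the ReLU activation, and all shapes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W, drop_rate=0.0, train=False):
    # One shared residual layer: x + dropout(relu(x @ W)); shape-preserving.
    h = np.maximum(x @ W, 0.0)
    if train and drop_rate > 0.0:
        # Inverted dropout between layers, as described in the abstract.
        mask = rng.random(h.shape) > drop_rate
        h = h * mask / (1.0 - drop_rate)
    return x + h

def drill_logits(hidden, label_emb, Ws, drop_rate=0.5, train=False):
    # Map the shared label embeddings through a stack of residual blocks,
    # then score every label by dot product with each hidden state.
    e = label_emb
    for W in Ws:
        e = residual_block(e, W, drop_rate, train)
    return hidden @ e.T

d, V = 8, 20                                # hidden size, vocabulary size (toy)
hidden = rng.standard_normal((3, d))        # a batch of 3 decoder states
label_emb = rng.standard_normal((V, d))     # shared output label embeddings
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(2)]

logits = drill_logits(hidden, label_emb, Ws)
print(logits.shape)  # (3, 20): one score per label, per decoder state
```

Because the residual mapping is shared across all labels, its parameters grow with depth and hidden size rather than with the vocabulary, which is where the regularization benefit over per-label classifier weights would come from.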

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Machine Translation | WMT2014 English-German | BLEU score | 28.1 | Transformer-DRILL Base |
| Language Modelling | Penn Treebank (Word Level) | Test perplexity | 49.4 | AWD-LSTM-DRILL + dynamic eval |
| Language Modelling | Penn Treebank (Word Level) | Validation perplexity | 49.5 | AWD-LSTM-DRILL + dynamic eval |
| Language Modelling | Penn Treebank (Word Level) | Test perplexity | 55.7 | AWD-LSTM-DRILL |
| Language Modelling | Penn Treebank (Word Level) | Validation perplexity | 58.2 | AWD-LSTM-DRILL |
| Language Modelling | WikiText-2 | Test perplexity | 42 | AWD-LSTM-DRILL + dynamic eval |
| Language Modelling | WikiText-2 | Validation perplexity | 43.9 | AWD-LSTM-DRILL + dynamic eval |
| Language Modelling | WikiText-2 | Test perplexity | 61.9 | AWD-LSTM-DRILL |
| Language Modelling | WikiText-2 | Validation perplexity | 64.9 | AWD-LSTM-DRILL |

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)