Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Mogrifier LSTM

Gábor Melis, Tomáš Kočiský, Phil Blunsom

2019-09-04 · ICLR 2020 · Language Modelling

Abstract

Many advances in Natural Language Processing have been based upon more expressive models for how inputs interact with the context in which they occur. Recurrent networks, which have enjoyed a modicum of success, still lack the generalization and systematicity ultimately required for modelling language. In this work, we propose an extension to the venerable Long Short-Term Memory in the form of mutual gating of the current input and the previous output. This mechanism affords the modelling of a richer space of interactions between inputs and their context. Equivalently, our model can be viewed as making the transition function given by the LSTM context-dependent. Experiments demonstrate markedly improved generalization on language modelling in the range of 3-4 perplexity points on Penn Treebank and Wikitext-2, and 0.01-0.05 bpc on four character-based datasets. We establish a new state of the art on all datasets with the exception of Enwik8, where we close a large gap between the LSTM and Transformer models.
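The mutual gating described above can be sketched as follows: before the inputs reach a standard LSTM cell, the current input and the previous hidden state take turns scaling each other through sigmoid gates over a fixed number of rounds. This is a minimal NumPy sketch, not the authors' implementation; the names `mogrify`, `Q_list`, and `R_list` are illustrative (the paper uses round-specific matrices Q and R, and an even number of rounds is assumed here for simplicity).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mogrify(x, h_prev, Q_list, R_list):
    """Mutually gate the input x and previous hidden state h_prev
    before feeding both into an ordinary LSTM cell.

    Q_list and R_list hold one projection matrix per round pair;
    their length sets the number of mogrification rounds.
    With empty lists this reduces to the vanilla LSTM inputs.
    """
    for Q, R in zip(Q_list, R_list):
        # Odd round: modulate the input by the current hidden state.
        # The factor 2 keeps the expected scale of x unchanged,
        # since sigmoid outputs average around 0.5.
        x = 2 * sigmoid(Q @ h_prev) * x
        # Even round: modulate the hidden state by the updated input.
        h_prev = 2 * sigmoid(R @ x) * h_prev
    return x, h_prev
```

The gated `x` and `h_prev` then replace the raw input and previous output in the usual LSTM update, which is how the transition function becomes context-dependent: the effective input seen by the cell is reshaped by the state it arrives in, and vice versa.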

Results

Task | Dataset | Metric | Value | Model
Language Modelling | Penn Treebank (Word Level) | Test perplexity | 44.9 | Mogrifier LSTM + dynamic eval
Language Modelling | Penn Treebank (Word Level) | Validation perplexity | 44.8 | Mogrifier LSTM + dynamic eval
Language Modelling | Penn Treebank (Character Level) | Bit per Character (BPC) | 1.083 | Mogrifier LSTM + dynamic eval
Language Modelling | Penn Treebank (Character Level) | Bit per Character (BPC) | 1.12 | Mogrifier LSTM
Language Modelling | Hutter Prize | Bit per Character (BPC) | 0.988 | Mogrifier LSTM + dynamic eval
Language Modelling | Hutter Prize | Bit per Character (BPC) | 1.122 | Mogrifier LSTM
Language Modelling | WikiText-2 | Test perplexity | 38.6 | Mogrifier LSTM + dynamic eval
Language Modelling | WikiText-2 | Validation perplexity | 40.2 | Mogrifier LSTM + dynamic eval
Language Modelling | WikiText-2 | Test perplexity | 55.1 | Mogrifier LSTM
Language Modelling | WikiText-2 | Validation perplexity | 57.3 | Mogrifier LSTM
Language Modelling | enwik8 | Bit per Character (BPC) | 1.146 | Mogrifier LSTM
Language Modelling | enwik8 | Bit per Character (BPC) | 1.195 | LSTM

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)