Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling

Hakan Inan, Khashayar Khosravi, Richard Socher

Published: 2016-11-04
Tasks: General Classification, Language Modelling

Abstract

Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning both in terms of utilizing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state of the art performance on the Penn Treebank with a variety of network models.
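The core idea of the paper — tying the input embedding matrix to the output projection — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (dimensions, names, and the lack of an RNN body are all illustrative, not from the paper), but it shows how tying halves the embedding/softmax parameter count:

```python
import numpy as np

# Illustrative sizes, not the paper's actual configuration
vocab_size, embed_dim = 10000, 400

rng = np.random.default_rng(0)
# Untied baseline: separate input embedding L and output projection W
L = rng.normal(size=(vocab_size, embed_dim))  # input word embedding
W = rng.normal(size=(vocab_size, embed_dim))  # output (softmax) projection

untied_params = L.size + W.size

# Tied (the paper's proposal): reuse L as the output projection,
# so the same matrix maps words in and scores words out.
tied_params = L.size

def logits_tied(h):
    # h: (embed_dim,) hidden state produced by the recurrent network
    return L @ h  # (vocab_size,) unnormalized scores over the vocabulary

print(untied_params, tied_params)  # tying halves the embedding/softmax parameters
```

In a real model the hidden state `h` would come from an LSTM or RHN, and the tied matrix would be a single trainable parameter shared between the input and output layers.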

Results

Task                | Dataset                    | Metric                | Value | Model
Language Modelling  | Penn Treebank (Word Level) | Test perplexity       | 66    | Inan et al. (2016) - Variational RHN
Language Modelling  | Penn Treebank (Word Level) | Validation perplexity | 68.1  | Inan et al. (2016) - Variational RHN
Language Modelling  | WikiText-2                 | Test perplexity       | 87    | Inan et al. (2016) - Variational LSTM (tied) (h=650) + augmented loss
Language Modelling  | WikiText-2                 | Validation perplexity | 91.5  | Inan et al. (2016) - Variational LSTM (tied) (h=650) + augmented loss
Language Modelling  | WikiText-2                 | Test perplexity       | 87.7  | Inan et al. (2016) - Variational LSTM (tied) (h=650)
Language Modelling  | WikiText-2                 | Validation perplexity | 92.3  | Inan et al. (2016) - Variational LSTM (tied) (h=650)
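The metric in the table, perplexity, is the exponential of the average per-word negative log-likelihood; lower is better. A small sketch (with made-up likelihood values, purely to show the arithmetic):

```python
import math

def perplexity(nll_per_word):
    """Perplexity = exp of the mean per-word negative log-likelihood."""
    return math.exp(sum(nll_per_word) / len(nll_per_word))

# A model that assigns probability 1/66 to every test word
# has a per-word NLL of log(66) and thus perplexity 66.
nlls = [math.log(66)] * 5
print(round(perplexity(nlls), 1))  # 66.0
```

Intuitively, a test perplexity of 66 means the model is, on average, as uncertain as if it were choosing uniformly among 66 words at each step.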

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)