Description
context2vec is an unsupervised model for learning generic embeddings of wide sentential contexts using a bidirectional LSTM. It is trained on large plain-text corpora to learn a neural model that embeds entire sentential contexts and target words in the same low-dimensional space, which is optimized to reflect the inter-dependencies between targets and their entire sentential context as a whole.
In contrast to word2vec, which uses context modeling mostly internally and treats the target word embeddings as its main output, the focus of context2vec is the context representation. context2vec achieves this objective by assigning similar embeddings to sentential contexts and their associated target words.
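The architecture described above can be sketched as follows. This is a minimal illustration, not the authors' released code: all layer sizes and names (`l2r`, `r2l`, the two-layer MLP) are assumptions. Two LSTMs read the words to the left and right of a target slot; their hidden states adjacent to the slot are concatenated and projected by an MLP into the same space as the target-word embeddings, so that a context and its target can be compared with a dot product.

```python
import torch
import torch.nn as nn

class Context2VecSketch(nn.Module):
    """Simplified context2vec-style encoder (illustrative dimensions)."""

    def __init__(self, vocab_size, emb_dim=50, hid_dim=50):
        super().__init__()
        self.ctx_emb = nn.Embedding(vocab_size, emb_dim)
        # Left-to-right and right-to-left context readers.
        self.l2r = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.r2l = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # MLP projecting the joint context state into the target-embedding space.
        self.mlp = nn.Sequential(nn.Linear(2 * hid_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, emb_dim))
        # Target-word embeddings, learned jointly with the context encoder.
        self.target_emb = nn.Embedding(vocab_size, emb_dim)

    def context_vector(self, sent, pos):
        # sent: (batch, seq_len) token ids; pos: index of the target slot.
        left = self.ctx_emb(sent[:, :pos])               # words before the target
        right = self.ctx_emb(sent[:, pos + 1:].flip(1))  # words after, reversed
        h_l, _ = self.l2r(left)
        h_r, _ = self.r2l(right)
        # Hidden states adjacent to the target slot, one from each direction.
        joint = torch.cat([h_l[:, -1], h_r[:, -1]], dim=-1)
        return self.mlp(joint)                           # context embedding

    def score(self, sent, pos, target_ids):
        # Similarity between a sentential context and candidate target words.
        c = self.context_vector(sent, pos)               # (batch, emb_dim)
        t = self.target_emb(target_ids)                  # (batch, emb_dim)
        return (c * t).sum(-1)
```

In training, this score would feed a word2vec-style negative-sampling objective, pushing contexts toward their observed target words and away from sampled negatives.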
Papers Using This Method
- Always Keep your Target in Mind: Studying Semantics and Improving Performance of Neural Lexical Substitution (2022-06-07)
- Token Level Identification of Multiword Expressions Using Contextual Information (2020-07-01)
- A Comparative Study of Lexical Substitution Approaches based on Neural Language Models (2020-05-29)
- Word Usage Similarity Estimation with Sentence Representations and Automatic Substitutes (2019-05-20)
- Lexical Substitution for Evaluating Compositional Distributional Models (2018-06-01)
- context2vec: Learning Generic Context Embedding with Bidirectional LSTM (2016-08-01)