Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


CoVe

Contextualized Word Vectors

Natural Language Processing · Introduced 2017 · 13 papers
Source Paper: Learned in Translation: Contextualized Word Vectors

Description

CoVe, or Contextualized Word Vectors, uses a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation to contextualize word vectors. CoVe word embeddings are therefore a function of the entire input sequence. These word embeddings can then be used in downstream tasks by concatenating them with GloVe embeddings:

v = \left[\text{GloVe}(x), \text{CoVe}(x)\right]

These concatenated vectors are then fed as features into the task-specific models.
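The concatenation above can be sketched in PyTorch. This is a structural illustration only, not the trained MT encoder from the paper: the `CoVeEncoder` class, its dimensions, and the random stand-ins for GloVe vectors are assumptions chosen to mirror the two-layer bidirectional LSTM setup described by McCann et al.

```python
import torch
import torch.nn as nn

class CoVeEncoder(nn.Module):
    """Hypothetical sketch of a CoVe-style encoder: a two-layer
    bidirectional LSTM that maps GloVe embeddings to contextualized
    vectors. In the original work this encoder comes pretrained from
    a machine-translation model; here it is randomly initialized."""

    def __init__(self, glove_dim: int = 300, hidden_dim: int = 300):
        super().__init__()
        self.lstm = nn.LSTM(glove_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)

    def forward(self, glove_embeddings: torch.Tensor) -> torch.Tensor:
        # glove_embeddings: (batch, seq_len, glove_dim)
        cove, _ = self.lstm(glove_embeddings)
        # bidirectional LSTM -> (batch, seq_len, 2 * hidden_dim)
        return cove

def contextualize(glove_embeddings: torch.Tensor,
                  encoder: CoVeEncoder) -> torch.Tensor:
    """v = [GloVe(x); CoVe(x)]: concatenate along the feature axis."""
    cove = encoder(glove_embeddings)
    return torch.cat([glove_embeddings, cove], dim=-1)

# Toy usage: a batch of 2 "sentences" of 7 tokens, with random
# 300-dim vectors standing in for GloVe lookups.
encoder = CoVeEncoder()
x = torch.randn(2, 7, 300)
v = contextualize(x, encoder)
print(v.shape)  # (2, 7, 900): 300 GloVe dims + 600 CoVe dims
```

Because CoVe vectors depend on the whole sequence, the same token receives different CoVe components in different sentences, while its GloVe component stays fixed.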

Papers Using This Method

CoVE: Compressed Vocabulary Expansion Makes Better LLM-based Recommender Systems (2025-06-24)
COVE: COntext and VEracity prediction for out-of-context images (2025-02-03)
On the Role of Surrogates in Conformal Inference of Individual Causal Effects (2024-12-16)
COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing (2024-06-13)
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models (2024-01-02)
Chain-of-Verification Reduces Hallucination in Large Language Models (2023-09-20)
Learning Category Trees for ID-Based Recommendation: Exploring the Power of Differentiable Vector Quantization (2023-08-31)
FastFusionNet: New State-of-the-Art for DAWNBench SQuAD (2019-02-28)
Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis (2018-11-01)
Language Modeling Teaches You More Syntax than Translation Does: Lessons Learned Through Auxiliary Task Analysis (2018-09-26)
Improving Matching Models with Hierarchical Contextualized Representations for Multi-turn Response Selection (2018-08-22)
Jiangnan at SemEval-2018 Task 11: Deep Neural Network with Attention Method for Machine Comprehension Task (2018-06-01)
Learned in Translation: Contextualized Word Vectors (2017-08-01)