Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Mixing Dirichlet Topic Models and Word Embeddings to Make lda2vec

Christopher E Moody

2016-05-06 · Word Embeddings · Topic Models

Paper · PDF · Code (official) · Code (community implementations)

Abstract

Distributed dense word vectors have been shown to be effective at capturing token-level semantic and syntactic regularities in language, while topic models can form interpretable representations over documents. In this work, we describe lda2vec, a model that learns dense word vectors jointly with Dirichlet-distributed latent document-level mixtures of topic vectors. In contrast to continuous dense document representations, this formulation produces sparse, interpretable document mixtures through a non-negative simplex constraint. Our method is simple to incorporate into existing automatic differentiation frameworks and allows for unsupervised document representations geared for use by scientists while simultaneously learning word vectors and the linear relationships between them.
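The abstract's key idea, a document represented as a non-negative, sum-to-one (simplex-constrained) mixture of topic vectors, added to a pivot word vector to form the prediction context, can be sketched in a few lines. This is a minimal NumPy illustration of the constraint, not the paper's actual implementation; all names, shapes, and the random initialization are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics, n_docs, vocab, dim = 4, 3, 100, 16

# Learnable parameters (randomly initialized for this sketch):
topic_vectors = rng.normal(size=(n_topics, dim))    # one dense vector per topic
doc_weights   = rng.normal(size=(n_docs, n_topics)) # unconstrained document logits
word_vectors  = rng.normal(size=(vocab, dim))       # standard word embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def document_vector(doc_id):
    """Softmax maps the logits onto the simplex (non-negative, summing to 1),
    so each document becomes an interpretable mixture over topics."""
    proportions = softmax(doc_weights[doc_id])  # document-topic proportions
    return proportions @ topic_vectors          # convex combination of topic vectors

def context_vector(doc_id, pivot_word_id):
    """The context used to predict surrounding words is the sum of a pivot
    word vector and the document's topic-mixture vector."""
    return word_vectors[pivot_word_id] + document_vector(doc_id)

# The mixture weights live on the probability simplex:
p = softmax(doc_weights[0])
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
```

In training, a sparsity-encouraging Dirichlet prior on the mixture weights pushes each document toward a few dominant topics, which is what makes the resulting document representations interpretable; because everything here is plain differentiable arithmetic, it drops into any automatic differentiation framework.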

Related Papers

- Speak2Sign3D: A Multi-modal Pipeline for English Speech to American Sign Language Animation (2025-07-09)
- Computational Detection of Intertextual Parallels in Biblical Hebrew: A Benchmark Study Using Transformer-Based Language Models (2025-06-30)
- Narrative Shift Detection: A Hybrid Approach of Dynamic Topic Models and Large Language Models (2025-06-25)
- Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition (2025-06-23)
- Low-resource keyword spotting using contrastively trained transformer acoustic word embeddings (2025-06-21)
- Characterizing Linguistic Shifts in Croatian News via Diachronic Word Embeddings (2025-06-16)
- Learning Obfuscations Of LLM Embedding Sequences: Stained Glass Transform (2025-06-11)
- Recommender systems, stigmergy, and the tyranny of popularity (2025-06-06)