Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Variational Autoencoders for Collaborative Filtering

Dawen Liang, Rahul G. Krishnan, Matthew D. Hoffman, Tony Jebara

2018-02-16 · Collaborative Filtering · Bayesian Inference · Recommendation Systems · Language Modelling

Paper · PDF · Code (one official implementation, plus community implementations)

Abstract

We extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. This non-linear probabilistic model enables us to go beyond the limited modeling capacity of linear factor models, which still largely dominate collaborative filtering research. We introduce a generative model with multinomial likelihood and use Bayesian inference for parameter estimation. Despite widespread use in language modeling and economics, the multinomial likelihood receives less attention in the recommender systems literature. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. Remarkably, there is an efficient way to tune the parameter using annealing. The resulting model and learning algorithm have information-theoretic connections to maximum entropy discrimination and the information bottleneck principle. Empirically, we show that the proposed approach significantly outperforms several state-of-the-art baselines, including two recently proposed neural network approaches, on several real-world datasets. We also provide extended experiments comparing the multinomial likelihood with other commonly used likelihood functions in the latent factor collaborative filtering literature and show favorable results. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements.
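The objective described in the abstract — a multinomial likelihood for the clicks plus a KL term weighted by an extra regularization parameter β that is tuned by annealing — can be sketched as follows. This is an illustrative NumPy sketch under my own naming (`mult_vae_loss`, `beta_schedule` are hypothetical), not the authors' reference implementation.

```python
import numpy as np

def log_softmax(z):
    """Numerically stable log-softmax over the item axis."""
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def mult_vae_loss(logits, x, mu, logvar, beta):
    """Negative ELBO with multinomial likelihood and beta-weighted KL.

    logits:     (batch, n_items) decoder outputs
    x:          (batch, n_items) binary click matrix
    mu, logvar: (batch, d) diagonal Gaussian posterior parameters
    beta:       KL weight, annealed from 0 toward a cap during training
    """
    # Multinomial log-likelihood: sum of clicked items' log-probabilities.
    log_lik = (x * log_softmax(logits)).sum(axis=1)
    # KL( q(z|x) || N(0, I) ) for a diagonal Gaussian posterior.
    kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar).sum(axis=1)
    return np.mean(-log_lik + beta * kl)

def beta_schedule(step, anneal_steps, beta_cap):
    """Linear KL annealing: raise beta to beta_cap, then hold it there."""
    return min(beta_cap, step / anneal_steps * beta_cap)
```

In the paper, β is increased linearly over training and capped at the value giving the best validation performance; this "partially regularized" variant is the Mult-VAE PR reported in the results below.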

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Recommendation Systems | MovieLens 20M | Recall@20 | 0.395 | Mult-VAE PR |
| Recommendation Systems | MovieLens 20M | Recall@50 | 0.537 | Mult-VAE PR |
| Recommendation Systems | MovieLens 20M | nDCG@100 | 0.426 | Mult-VAE PR |
| Recommendation Systems | MovieLens 20M | Recall@20 | 0.387 | Mult-DAE |
| Recommendation Systems | MovieLens 20M | Recall@50 | 0.524 | Mult-DAE |
| Recommendation Systems | MovieLens 20M | nDCG@100 | 0.419 | Mult-DAE |
| Recommendation Systems | Million Song Dataset | Recall@20 | 0.266 | Mult-VAE PR |
| Recommendation Systems | Million Song Dataset | Recall@50 | 0.364 | Mult-VAE PR |
| Recommendation Systems | Million Song Dataset | nDCG@100 | 0.316 | Mult-VAE PR |
| Recommendation Systems | Million Song Dataset | Recall@20 | 0.266 | Mult-DAE |
| Recommendation Systems | Million Song Dataset | Recall@50 | 0.363 | Mult-DAE |
| Recommendation Systems | Million Song Dataset | nDCG@100 | 0.313 | Mult-DAE |
| Recommendation Systems | Netflix | Recall@20 | 0.351 | Mult-VAE PR |
| Recommendation Systems | Netflix | Recall@50 | 0.444 | Mult-VAE PR |
| Recommendation Systems | Netflix | nDCG@100 | 0.386 | Mult-VAE PR |
| Recommendation Systems | Netflix | Recall@20 | 0.344 | Mult-DAE |
| Recommendation Systems | Netflix | Recall@50 | 0.438 | Mult-DAE |
| Recommendation Systems | Netflix | nDCG@100 | 0.38 | Mult-DAE |
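The Recall@K and nDCG@K figures above use the truncated definitions from the paper, where Recall is normalized by min(K, number of held-out items) and nDCG uses binary relevance with a log2 discount. A small sketch (function names are my own):

```python
import numpy as np

def recall_at_k(ranked_items, heldout, k):
    """Recall@K, normalized by min(K, |heldout|) so a perfect
    ranking scores 1 even when fewer than K items are held out."""
    hits = sum(1 for item in ranked_items[:k] if item in heldout)
    return hits / min(k, len(heldout))

def ndcg_at_k(ranked_items, heldout, k):
    """Truncated nDCG@K with binary relevance and log2 discount."""
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in heldout)
    # Ideal DCG: all held-out items ranked at the top.
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(k, len(heldout))))
    return dcg / idcg
```

Unlike Recall@K, nDCG@K rewards placing held-out items higher within the top K, which is why the paper reports both.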

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
IP2: Entity-Guided Interest Probing for Personalized News Recommendation (2025-07-18)
A Reproducibility Study of Product-side Fairness in Bundle Recommendation (2025-07-18)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)