Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Wide & Deep Learning for Recommender Systems

Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah

2016-06-24 · Feature Engineering · News Recommendation · Click-Through Rate Prediction · Deep Learning · Recommendation Systems · Memorization
Paper · PDF · Code

Abstract

Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations is effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning---jointly trained wide linear models and deep neural networks---to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.
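The core idea in the abstract — summing a wide linear model's logit over cross-product features with a deep network's logit over dense embeddings, then applying a single sigmoid — can be sketched in a few lines. This is a minimal pure-Python illustration, not the paper's TensorFlow implementation; all function and parameter names here are hypothetical, and the production system's embedding learning and FTRL/AdaGrad optimizers are omitted.

```python
import math

def sigmoid(z):
    # Logistic link shared by both components at the output.
    return 1.0 / (1.0 + math.exp(-z))

def deep_forward(embedding, layers):
    # Feed the dense embedding vector through ReLU hidden layers.
    # `layers` is a list of (weight_matrix, bias_vector) pairs,
    # with weight_matrix laid out as [output_unit][input_unit].
    a = embedding
    for W, b in layers:
        a = [max(0.0, sum(w * x for w, x in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a

def wide_deep_predict(cross_features, embedding, wide_w, deep_layers,
                      head_w, head_b):
    # Joint prediction: sigmoid(w_wide . x_cross + w_head . a_deep + b),
    # i.e. the wide (memorization) and deep (generalization) logits are
    # added before the sigmoid, so both parts train against one loss.
    wide_logit = sum(w * x for w, x in zip(wide_w, cross_features))
    final_hidden = deep_forward(embedding, deep_layers)
    deep_logit = sum(w * h for w, h in zip(head_w, final_hidden))
    return sigmoid(wide_logit + deep_logit + head_b)

# Toy example: 2 binary cross-product features, a 2-d embedding,
# and one hidden layer of 2 ReLU units.
p = wide_deep_predict(
    cross_features=[1.0, 0.0],
    embedding=[1.0, 2.0],
    wide_w=[0.5, -0.2],
    deep_layers=[([[0.1, 0.2], [0.3, -0.1]], [0.0, 0.0])],
    head_w=[1.0, 1.0],
    head_b=0.0,
)
print(p)  # ≈ 0.750 (sigmoid of the combined logit 1.1)
```

Because the two logits are summed inside one sigmoid, this is joint training rather than an ensemble: each component only needs to correct the other's residual, which is why the paper's wide part can stay small (cross-product features) instead of being a full standalone model.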

Results

Task | Dataset | Metric | Value | Model
Click-Through Rate Prediction | Bing News | AUC | 0.8377 | Wide & Deep
Click-Through Rate Prediction | Bing News | Log Loss | 0.2668 | Wide & Deep
Click-Through Rate Prediction | Company* | AUC | 0.8673 | Wide & Deep (LR & DNN)
Click-Through Rate Prediction | Company* | Log Loss | 0.02634 | Wide & Deep (LR & DNN)
Click-Through Rate Prediction | Company* | AUC | 0.8661 | Wide & Deep (FM & DNN)
Click-Through Rate Prediction | Company* | Log Loss | 0.0264 | Wide & Deep (FM & DNN)
Click-Through Rate Prediction | Dianping | AUC | 0.8361 | Wide & Deep
Click-Through Rate Prediction | Dianping | Log Loss | 0.3364 | Wide & Deep
Click-Through Rate Prediction | MovieLens 20M | AUC | 0.7304 | Wide & Deep
Click-Through Rate Prediction | Criteo | AUC | 0.7981 | Wide & Deep
Click-Through Rate Prediction | Criteo | Log Loss | 0.46772 | Wide & Deep
Click-Through Rate Prediction | Amazon | AUC | 0.8637 | Wide & Deep

Related Papers

IP2: Entity-Guided Interest Probing for Personalized News Recommendation (2025-07-18)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
A Reproducibility Study of Product-side Fairness in Bundle Recommendation (2025-07-18)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
A Survey of Deep Learning for Geometry Problem Solving (2025-07-16)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Looking for Fairness in Recommender Systems (2025-07-16)
Generative Click-through Rate Prediction with Applications to Search Advertising (2025-07-15)