Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Infinite Recommendation Networks: A Data-Centric Approach

Noveen Sachdeva, Mehak Preet Dhaliwal, Carole-Jean Wu, Julian McAuley

2022-06-03 · Information Retrieval · Recommendation Systems
Paper · PDF · Code (official)

Abstract

We leverage the Neural Tangent Kernel and its equivalence to training infinitely-wide neural networks to devise $\infty$-AE: an autoencoder with infinitely-wide bottleneck layers. The outcome is a highly expressive yet simplistic recommendation model with a single hyper-parameter and a closed-form solution. Leveraging $\infty$-AE's simplicity, we also develop Distill-CF for synthesizing tiny, high-fidelity data summaries which distill the most important knowledge from the extremely large and sparse user-item interaction matrix for efficient and accurate subsequent data usage such as model training, inference, architecture search, etc. This takes a data-centric approach to recommendation, where we aim to improve the quality of logged user-feedback data for subsequent modeling, independent of the learning algorithm. We particularly utilize the concept of differentiable Gumbel-sampling to handle the inherent data heterogeneity, sparsity, and semi-structuredness, while remaining scalable to datasets with hundreds of millions of user-item interactions. Both of our proposed approaches significantly outperform their respective state-of-the-art, and when used together, we observe 96-105% of $\infty$-AE's performance on the full dataset with as little as 0.1% of the original dataset size, leading us to explore the counter-intuitive question: Is more data what you need for better recommendation?
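The "single hyper-parameter and a closed-form solution" can be illustrated with kernel ridge regression: regress users' interaction vectors back onto themselves through a kernel, with one regularization weight λ. This is a minimal sketch of that idea only, not the paper's implementation; the actual method uses the Neural Tangent Kernel of an infinitely-wide autoencoder, whereas here an RBF kernel stands in, and all names (`infinite_ae_scores`, `gamma`) are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Stand-in kernel; the paper uses the NTK of an infinitely-wide autoencoder.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def infinite_ae_scores(X, lam=0.1):
    # X: binary user-item interaction matrix (num_users x num_items).
    # Closed-form kernelized "autoencoding": solve (K + lam*I) alpha = X,
    # then reconstruct scores as K @ alpha. lam is the single hyper-parameter.
    K = rbf_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(K.shape[0]), X)
    return K @ alpha

# Toy sparse interaction matrix: 50 users, 20 items, ~10% density.
rng = np.random.default_rng(0)
X = (rng.random((50, 20)) < 0.1).astype(float)
scores = infinite_ae_scores(X)  # dense relevance scores, same shape as X
```

Ranking the unobserved items of each row of `scores` yields recommendations; because the solution is a single linear solve, there is no iterative training loop to tune beyond λ.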

Results

Task                   | Dataset       | Metric     | Value  | Model
Recommendation Systems | Netflix       | AUC        | 0.9728 | ∞-AE
Recommendation Systems | Netflix       | PSP@10     | 0.0375 | ∞-AE
Recommendation Systems | Netflix       | Recall@10  | 0.2969 | ∞-AE
Recommendation Systems | Netflix       | Recall@100 | 0.5088 | ∞-AE
Recommendation Systems | Netflix       | nDCG@10    | 0.3059 | ∞-AE
Recommendation Systems | Netflix       | nDCG@100   | 0.3659 | ∞-AE
Recommendation Systems | MovieLens 1M  | HR@10      | 0.3151 | ∞-AE
Recommendation Systems | MovieLens 1M  | HR@100     | 0.6005 | ∞-AE
Recommendation Systems | MovieLens 1M  | PSP@10     | 0.0322 | ∞-AE
Recommendation Systems | MovieLens 1M  | nDCG@10    | 0.3282 | ∞-AE
Recommendation Systems | MovieLens 1M  | nDCG@100   | 0.4253 | ∞-AE
Recommendation Systems | Douban        | AUC        | 0.9523 | ∞-AE
Recommendation Systems | Douban        | HR@10      | 0.2356 | ∞-AE
Recommendation Systems | Douban        | HR@100     | 0.2837 | ∞-AE
Recommendation Systems | Douban        | PSP@10     | 0.0128 | ∞-AE
Recommendation Systems | Douban        | nDCG@10    | 0.2494 | ∞-AE
Recommendation Systems | Douban        | nDCG@100   | 0.2326 | ∞-AE

Related Papers

IP2: Entity-Guided Interest Probing for Personalized News Recommendation (2025-07-18)
A Reproducibility Study of Product-side Fairness in Bundle Recommendation (2025-07-18)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Looking for Fairness in Recommender Systems (2025-07-16)
From Chaos to Automation: Enabling the Use of Unstructured Data for Robotic Process Automation (2025-07-15)
Journalism-Guided Agentic In-Context Learning for News Stance Detection (2025-07-15)