
Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations

Jiaqi Zhai, Lucy Liao, Xing Liu, Yueming Wang, Rui Li, Xuan Cao, Leon Gao, Zhaojie Gong, Fangda Gu, Michael He, Yinghai Lu, Yu Shi

2024-02-27 · Recommendation Systems

Paper · PDF · Code (official)

Abstract

Large-scale recommendation systems are characterized by their reliance on high-cardinality, heterogeneous features and the need to handle tens of billions of user actions daily. Despite being trained on huge volumes of data with thousands of features, most Deep Learning Recommendation Models (DLRMs) in industry fail to scale with compute. Inspired by the success of Transformers in the language and vision domains, we revisit fundamental design choices in recommendation systems. We reformulate recommendation problems as sequential transduction tasks within a generative modeling framework ("Generative Recommenders"), and propose a new architecture, HSTU, designed for high-cardinality, non-stationary streaming recommendation data. HSTU outperforms baselines on synthetic and public datasets by up to 65.8% in NDCG, and is 5.3x to 15.2x faster than FlashAttention2-based Transformers on 8192-length sequences. HSTU-based Generative Recommenders, with 1.5 trillion parameters, improve metrics in online A/B tests by 12.4% and have been deployed on multiple surfaces of a large internet platform with billions of users. More importantly, the model quality of Generative Recommenders empirically scales as a power-law of training compute across three orders of magnitude, up to GPT-3/LLaMa-2 scale, which reduces the carbon footprint of future model development and paves the way for the first foundation models in recommendations.
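The abstract's central move is to recast recommendation as sequential transduction: a user's chronologically ordered actions become the input sequence, and the model is trained generatively to predict the next action over the full item corpus. Below is a minimal sketch of that framing using a generic causal Transformer, not the paper's HSTU block (whose internals are not described here); the class name, layer sizes, and use of torch.nn.TransformerEncoder are illustrative assumptions.

```python
# Minimal sketch of a generative next-item recommender (NOT the HSTU
# architecture): a causal sequence model over user action histories.
import torch
import torch.nn as nn

class NextItemTransducer(nn.Module):
    def __init__(self, num_items: int, d_model: int = 64,
                 n_heads: int = 2, n_layers: int = 2):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, num_items)  # scores over the item corpus

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        # item_ids: (batch, seq_len) of historical actions, oldest first.
        seq_len = item_ids.size(1)
        # Causal mask: True entries are blocked, so position t attends only to <= t.
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1
        )
        h = self.encoder(self.item_emb(item_ids), mask=mask)
        return self.head(h)  # (batch, seq_len, num_items) next-item logits

# Training step: shift the sequence by one so each position predicts its successor.
model = NextItemTransducer(num_items=1000)
seq = torch.randint(0, 1000, (8, 20))   # toy batch of action sequences
logits = model(seq[:, :-1])             # predict from each prefix
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 1000), seq[:, 1:].reshape(-1)
)
loss.backward()
```

The same shifted-target setup scales to the streaming setting the paper targets; HSTU replaces the standard softmax attention block with a design tuned for high-cardinality, non-stationary data.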

Results

Task | Dataset | Metric | Value | Model
Recommendation Systems | MovieLens 20M | HR@10 (full corpus) | 0.3556 | HSTU
Recommendation Systems | MovieLens 20M | NDCG@10 (full corpus) | 0.2098 | HSTU
Recommendation Systems | MovieLens 1M | HR@10 (full corpus) | 0.3294 | HSTU
Recommendation Systems | MovieLens 1M | NDCG@10 (full corpus) | 0.1893 | HSTU
Recommendation Systems | Amazon-Book | HR@10 | 0.0478 | HSTU
Recommendation Systems | Amazon-Book | HR@50 | 0.1082 | HSTU
Recommendation Systems | Amazon-Book | NDCG@10 | 0.0262 | HSTU
Recommendation Systems | Amazon-Book | NDCG@50 | 0.0393 | HSTU
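The table reports HR@K and NDCG@K, where "full corpus" means the held-out item is ranked against every item in the catalog rather than a sampled negative set. A minimal sketch of how these metrics are typically computed for a single target item follows; the function name and toy corpus size are assumptions.

```python
# HR@K: 1 if the target item is ranked in the top K, else 0.
# NDCG@K: additionally rewards ranking the target higher; with a single
# relevant item the ideal DCG is 1, so NDCG reduces to 1/log2(rank + 2).
import numpy as np

def hr_and_ndcg_at_k(scores: np.ndarray, target: int, k: int) -> tuple[float, float]:
    """scores: model scores over the full item corpus for one user."""
    # 0-based rank of the target = number of items scored strictly higher.
    rank = int((scores > scores[target]).sum())
    if rank >= k:
        return 0.0, 0.0
    return 1.0, 1.0 / np.log2(rank + 2)

scores = np.random.rand(10_000)  # toy scores over a 10k-item corpus
hr, ndcg = hr_and_ndcg_at_k(scores, target=42, k=10)
print(f"HR@10={hr:.4f}  NDCG@10={ndcg:.4f}")
```

Averaging these per-user values over the test set yields the table's reported numbers.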

Related Papers

IP2: Entity-Guided Interest Probing for Personalized News Recommendation · 2025-07-18
A Reproducibility Study of Product-side Fairness in Bundle Recommendation · 2025-07-18
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation · 2025-07-17
Similarity-Guided Diffusion for Contrastive Sequential Recommendation · 2025-07-16
Looking for Fairness in Recommender Systems · 2025-07-16
Journalism-Guided Agentic In-Context Learning for News Stance Detection · 2025-07-15
LLM-Stackelberg Games: Conjectural Reasoning Equilibria and Their Applications to Spearphishing · 2025-07-12
When Graph Contrastive Learning Backfires: Spectral Vulnerability and Defense in Recommendation · 2025-07-10