Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Self-Attentive Sequential Recommendation

Wang-Cheng Kang, Julian McAuley

2018-08-20 · Sequential Recommendation · Recommendation Systems
Paper · PDF · Code

Abstract

Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the 'context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted from just their last (or last few) actions, while RNNs in principle allow longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best on extremely sparse datasets, where model parsimony is critical, while RNNs perform better on denser datasets, where higher model complexity is affordable. Our work aims to balance these two approaches by proposing a self-attention based sequential model (SASRec) that captures long-term semantics (like an RNN) but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are 'relevant' in a user's action history and uses them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations of attention weights also show how our model adaptively handles datasets of varying density and uncovers meaningful patterns in activity sequences.
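The mechanism described in the abstract can be sketched in a few lines: embed a user's recent item sequence, apply causally masked self-attention so each step sees only the past, and score all items against the final hidden state. This is a hypothetical minimal sketch with random weights, not the authors' implementation; the item/position embedding sizes and the single attention head are illustrative assumptions.

```python
# Hypothetical sketch of SASRec's core idea: causal self-attention over a
# user's item sequence, then next-item scoring against the item embeddings.
import numpy as np

rng = np.random.default_rng(0)
n_items, d, seq_len = 50, 16, 5

item_emb = rng.normal(size=(n_items, d))   # shared item embedding table
pos_emb = rng.normal(size=(seq_len, d))    # the paper uses learnable positions

def causal_self_attention(x):
    """Single-head attention; each position attends only to itself and the past."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    future = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[future] = -np.inf               # mask out attention to future steps
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

history = [3, 17, 8, 42, 7]                # a user's recent item ids (made up)
x = item_emb[history] + pos_emb            # embed items + positions
h = causal_self_attention(x)

# Score every item as a next-item candidate from the final hidden state.
logits = h[-1] @ item_emb.T
next_item = int(np.argmax(logits))
```

In the full model this block is stacked, with learned query/key/value projections, a point-wise feed-forward layer, and training by predicting the next item at every position; the sketch keeps only the causal-attention-then-score skeleton the abstract describes.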

Results

Task | Dataset | Metric | Value | Model
Recommendation Systems | Amazon Beauty | Hit@10 | 0.4854 | SASRec
Recommendation Systems | Amazon Beauty | nDCG@10 | 0.3219 | SASRec
Recommendation Systems | MovieLens 20M | HR@10 (full corpus) | 0.2889 | SASRec
Recommendation Systems | MovieLens 20M | nDCG@10 (full corpus) | 0.1621 | SASRec
Recommendation Systems | Steam | Hit@10 | 0.8729 | SASRec
Recommendation Systems | Steam | nDCG@10 | 0.6306 | SASRec
Recommendation Systems | MovieLens 1M | HR@10 | 0.8245 | SASRec
Recommendation Systems | MovieLens 1M | HR@10 (full corpus) | 0.2821 | SASRec
Recommendation Systems | MovieLens 1M | NDCG@10 (full corpus) | 0.1603 | SASRec
Recommendation Systems | MovieLens 1M | nDCG@10 | 0.5905 | SASRec
Recommendation Systems | Amazon Games | Hit@10 | 0.741 | SASRec
Recommendation Systems | Amazon Games | nDCG@10 | 0.536 | SASRec
Recommendation Systems | Amazon-Book | HR@10 | 0.0306 | SASRec
Recommendation Systems | Amazon-Book | HR@50 | 0.0754 | SASRec
Recommendation Systems | Amazon-Book | NDCG@10 | 0.0164 | SASRec
Recommendation Systems | Amazon-Book | NDCG@50 | 0.026 | SASRec
Recommendation Systems | MovieLens 1M | HR@10 | 0.2137 | SASRec
Recommendation Systems | MovieLens 1M | HR@10 (99 Neg. Samples) | 0.7904 | SASRec
Recommendation Systems | MovieLens 1M | HR@20 | 0.3245 | SASRec
Recommendation Systems | MovieLens 1M | HR@5 | 0.1374 | SASRec
Recommendation Systems | MovieLens 1M | HR@5 (99 Neg. Samples) | 0.6874 | SASRec
Recommendation Systems | MovieLens 1M | MRR (99 Neg. Samples) | 0.502 | SASRec
Recommendation Systems | MovieLens 1M | NDCG@10 | 0.1116 | SASRec
Recommendation Systems | MovieLens 1M | NDCG@10 (99 Neg. Samples) | 0.5642 | SASRec
Recommendation Systems | MovieLens 1M | NDCG@20 | 0.1395 | SASRec
Recommendation Systems | MovieLens 1M | NDCG@5 | 0.0873 | SASRec
Recommendation Systems | MovieLens 1M | NDCG@5 (99 Neg. Samples) | 0.5308 | SASRec
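The table mixes two evaluation protocols (ranking the held-out item against the full corpus vs. against 99 sampled negatives), but the metrics themselves have standard definitions: HR@K (Hit Rate, also reported as Hit@K) checks whether the true next item ranks in the top K, and nDCG@K additionally discounts it by its rank position. A sketch using those standard definitions (not the leaderboard's actual evaluation code), assuming a single held-out relevant item per user:

```python
# Standard single-relevant-item ranking metrics, as reported in the table.
import math

def hit_rate_at_k(rank, k):
    """rank is the 1-based position of the true next item among candidates."""
    return 1.0 if rank <= k else 0.0

def ndcg_at_k(rank, k):
    # With one relevant item the ideal DCG is 1, so nDCG reduces to 1/log2(rank+1).
    return 1.0 / math.log2(rank + 1) if rank <= k else 0.0
```

Reported numbers are these values averaged over all test users, which is why the full-corpus MovieLens 1M figures (e.g. HR@10 = 0.2137) are far lower than the 99-negative-sample ones (HR@10 = 0.7904): ranking against every item is a much harder task than ranking against 100 candidates.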

Related Papers

IP2: Entity-Guided Interest Probing for Personalized News Recommendation (2025-07-18)
A Reproducibility Study of Product-side Fairness in Bundle Recommendation (2025-07-18)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Looking for Fairness in Recommender Systems (2025-07-16)
Journalism-Guided Agentic In-Context Learning for News Stance Detection (2025-07-15)
LLM-Stackelberg Games: Conjectural Reasoning Equilibria and Their Applications to Spearphishing (2025-07-12)
When Graph Contrastive Learning Backfires: Spectral Vulnerability and Defense in Recommendation (2025-07-10)