Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


GRCN: Graph-Refined Convolutional Network for Multimedia Recommendation with Implicit Feedback

Yinwei Wei, Xiang Wang, Liqiang Nie, Xiangnan He, Tat-Seng Chua

2021-11-03 · Multi-modal Recommendation
Paper · PDF · Code (official)

Abstract

Reorganizing the implicit feedback of users as a user-item interaction graph facilitates the application of graph convolutional networks (GCNs) to recommendation tasks. In the interaction graph, edges between user and item nodes are the main element GCNs use to propagate information and generate informative representations. Nevertheless, an underlying challenge lies in the quality of the interaction graph, since implicit feedback contains observed interactions with less-interested items (say, a user views micro-videos accidentally). This means that the neighborhoods involved with such false-positive edges will be influenced negatively, and the signal on user preference can be severely contaminated. However, existing GCN-based recommender models leave this challenge under-explored, resulting in suboptimal representations and performance. In this work, we focus on adaptively refining the structure of the interaction graph to discover and prune potential false-positive edges. Towards this end, we devise a new GCN-based recommender model, \emph{Graph-Refined Convolutional Network} (GRCN), which adjusts the structure of the interaction graph adaptively based on the status of model training, instead of keeping the structure fixed. In particular, a graph refining layer is designed to identify noisy edges with high confidence of being false-positive interactions, and consequently prune them in a soft manner. We then apply a graph convolutional layer on the refined graph to distill informative signals on user preference. Through extensive experiments on three datasets for micro-video recommendation, we validate the rationality and effectiveness of our GRCN. Further in-depth analysis presents how the refined graph benefits the GCN-based recommender model.
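The abstract's core idea — score each observed edge's confidence of being a true preference, down-weight ("soft-prune") low-confidence edges, then run graph convolution on the refined graph — can be illustrated with a minimal numpy sketch. This is our own simplified illustration, not the paper's implementation: the function name, the sigmoid-of-dot-product confidence, and the single symmetric-normalized propagation step are all assumptions chosen for brevity (GRCN itself scores edges with multimodal content features).

```python
import numpy as np

def refine_and_propagate(adj, user_emb, item_emb):
    """Soft-prune edges via user-item affinity, then one propagation step.

    adj:      (n_users, n_items) binary implicit-feedback matrix
    user_emb: (n_users, d) user embeddings
    item_emb: (n_items, d) item embeddings
    """
    # Confidence that an observed edge reflects a true preference:
    # affinity between user and item embeddings, squashed to (0, 1).
    scores = user_emb @ item_emb.T                 # (n_users, n_items)
    conf = 1.0 / (1.0 + np.exp(-scores))           # sigmoid

    # Soft pruning: down-weight suspected false positives, don't delete them.
    refined = adj * conf

    # Symmetric degree normalization on the refined bipartite graph.
    deg_u = refined.sum(axis=1, keepdims=True) + 1e-8   # (n_users, 1)
    deg_i = refined.sum(axis=0, keepdims=True) + 1e-8   # (1, n_items)
    norm = refined / np.sqrt(deg_u) / np.sqrt(deg_i)

    # One graph-convolution step: each side aggregates the other's signals.
    user_out = norm @ item_emb                     # (n_users, d)
    item_out = norm.T @ user_emb                   # (n_items, d)
    return user_out, item_out
```

Because pruning is soft (a weight in (0, 1) rather than a hard cut), the edge weights remain differentiable, so the refinement can adapt as training progresses — the property the abstract emphasizes over a fixed graph structure.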

Results

| Task                   | Dataset         | Metric  | Value  | Model |
|------------------------|-----------------|---------|--------|-------|
| Recommendation Systems | Amazon Baby     | NDCG@20 | 0.0358 | GRCN  |
| Recommendation Systems | Amazon Sports   | NDCG@20 | 0.0413 | GRCN  |
| Recommendation Systems | Amazon Clothing | NDCG@20 | 0.0284 | GRCN  |

Related Papers

RAG-VisualRec: An Open Resource for Vision- and Text-Enhanced Retrieval-Augmented Generation in Recommendation (2025-06-25)
Teach Me How to Denoise: A Universal Framework for Denoising Multi-modal Recommender Systems via Guided Calibration (2025-04-19)
MDE: Modality Discrimination Enhancement for Multi-modal Recommendation (2025-02-08)
Generating Negative Samples for Multi-Modal Recommendation (2025-01-25)
Modality-Independent Graph Neural Networks with Global Transformers for Multimodal Recommendation (2024-12-18)
QARM: Quantitative Alignment Multi-Modal Recommendation at Kuaishou (2024-11-18)
Harnessing Multimodal Large Language Models for Multimodal Sequential Recommendation (2024-08-19)
A Unified Graph Transformer for Overcoming Isolations in Multi-modal Recommendation (2024-07-29)