Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Heterophily-Aware Fair Recommendation using Graph Convolutional Networks

Nemat Gholinejad, Mostafa Haghir Chehreghani

2024-01-31 · Fairness · Recommendation Systems

Paper · PDF · Code (official)

Abstract

In recent years, graph neural networks (GNNs) have become a popular tool for improving the accuracy and performance of recommender systems. Modern recommender systems are designed not only to serve end users but also to benefit other participants, such as items and item providers. These participants may have different or conflicting goals and interests, which raises the need to account for fairness and popularity bias. GNN-based recommendation methods also face the challenges of unfairness and popularity bias, which stem in part from their normalization and aggregation processes. In this paper, we propose a fair GNN-based recommender system, called HetroFair, to improve item-side fairness. HetroFair uses two separate components to generate fairness-aware embeddings: i) fairness-aware attention, which incorporates a dot-product term into the normalization process of GNNs to decrease the effect of node degrees; and ii) heterophily feature weighting, which assigns distinct weights to different features during the aggregation process. To evaluate the effectiveness of HetroFair, we conduct extensive experiments over six real-world datasets. Our experimental results reveal that HetroFair not only alleviates unfairness and popularity bias on the item side but also achieves superior accuracy on the user side. Our implementation is publicly available at https://github.com/NematGH/HetroFair.
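The abstract's two components can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's code: the function name, the sigmoid squashing of the dot product, and the fixed feature-weight vector are all assumptions made for illustration; in the actual model the heterophily weights would be learned.

```python
import numpy as np

def fairness_aware_aggregate(h, edges):
    """Hypothetical sketch of HetroFair's two components (details assumed):
      1) fairness-aware attention: scale each neighbor message by a
         dot-product similarity term inside the symmetric degree
         normalization, dampening the influence of high-degree nodes;
      2) heterophily feature weighting: per-feature weights applied
         during aggregation (fixed here; learned in the real model).
    h:     (N, d) array of node embeddings
    edges: list of undirected (u, v) index pairs
    """
    N, d = h.shape
    deg = np.zeros(N)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    w = np.ones(d)  # heterophily feature weights (illustrative placeholder)
    out = np.zeros_like(h)
    for u, v in edges:
        sim = 1.0 / (1.0 + np.exp(-h[u] @ h[v]))  # dot-product term in (0, 1)
        norm = sim / np.sqrt(deg[u] * deg[v])     # similarity-scaled symmetric norm
        out[u] += norm * (w * h[v])               # feature-weighted message v -> u
        out[v] += norm * (w * h[u])               # feature-weighted message u -> v
    return out
```

The key idea the sketch captures is that the usual `1/sqrt(deg_u * deg_v)` GCN normalization is modulated by a similarity term, so high-degree (popular) items no longer dominate aggregation purely by virtue of their degree.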

Results

| Task                   | Dataset            | Metric  | Value  | Model     |
|------------------------|--------------------|---------|--------|-----------|
| Recommendation Systems | Amazon-Beauty      | MAP@20  | 0.1364 | HetroFair |
| Recommendation Systems | Amazon-Beauty      | MRR@20  | 0.2824 | HetroFair |
| Recommendation Systems | Amazon-Beauty      | NDCG@20 | 0.2308 | HetroFair |
| Recommendation Systems | Amazon-CDs         | MAP@20  | 0.0747 | HetroFair |
| Recommendation Systems | Amazon-CDs         | MRR@20  | 0.2017 | HetroFair |
| Recommendation Systems | Amazon-CDs         | NDCG@20 | 0.1449 | HetroFair |
| Recommendation Systems | Amazon-Movies      | MAP@20  | 0.0365 | HetroFair |
| Recommendation Systems | Amazon-Movies      | MRR@20  | 0.1093 | HetroFair |
| Recommendation Systems | Amazon-Movies      | NDCG@20 | 0.0777 | HetroFair |
| Recommendation Systems | Amazon-Electronics | MAP@20  | 0.0256 | HetroFair |
| Recommendation Systems | Amazon-Electronics | MRR@20  | 0.0733 | HetroFair |
| Recommendation Systems | Amazon-Electronics | NDCG@20 | 0.0525 | HetroFair |
| Recommendation Systems | Amazon-Health      | MAP@20  | 0.0656 | HetroFair |
| Recommendation Systems | Amazon-Health      | MRR@20  | 0.2112 | HetroFair |
| Recommendation Systems | Amazon-Health      | NDCG@20 | 0.1334 | HetroFair |
| Recommendation Systems | Epinions           | MAP@20  | 0.0379 | HetroFair |
| Recommendation Systems | Epinions           | MRR@20  | 0.1525 | HetroFair |
| Recommendation Systems | Epinions           | NDCG@20 | 0.0895 | HetroFair |

Related Papers

- A Reproducibility Study of Product-side Fairness in Bundle Recommendation (2025-07-18)
- IP2: Entity-Guided Interest Probing for Personalized News Recommendation (2025-07-18)
- FedGA: A Fair Federated Learning Framework Based on the Gini Coefficient (2025-07-17)
- SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
- Looking for Fairness in Recommender Systems (2025-07-16)
- FADE: Adversarial Concept Erasure in Flow Models (2025-07-16)
- Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
- Fairness-Aware Grouping for Continuous Sensitive Variables: Application for Debiasing Face Analysis with respect to Skin Tone (2025-07-15)