Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.




Dual-View Disentangled Multi-Intent Learning for Enhanced Collaborative Filtering

Shanfan Zhang, Yongyi Lin, Yuan Rao, Chenlong Zhang

2025-06-13 · Disentanglement · Collaborative Filtering

Paper · PDF · Code (official)

Abstract

Disentangling user intentions from implicit feedback has become a promising strategy to enhance recommendation accuracy and interpretability. Prior methods often model intentions independently and lack explicit supervision, thus failing to capture the joint semantics that drive user-item interactions. To address these limitations, we propose DMICF, a unified framework that explicitly models interaction-level intent alignment while leveraging structural signals from both user and item perspectives. DMICF adopts a dual-view architecture that jointly encodes user-item interaction graphs from both sides, enabling bidirectional information fusion. This design enhances robustness under data sparsity by allowing the structural redundancy of one view to compensate for the limitations of the other. To model fine-grained user-item compatibility, DMICF introduces an intent interaction encoder that performs sub-intent alignment within each view, uncovering shared semantic structures that underlie user decisions. This localized alignment enables adaptive refinement of intent embeddings based on interaction context, thus improving the model's generalization and expressiveness, particularly in long-tail scenarios. Furthermore, DMICF integrates an intent-aware scoring mechanism that aggregates compatibility signals from matched intent pairs across user and item subspaces, enabling personalized prediction grounded in semantic congruence rather than entangled representations. To facilitate semantic disentanglement, we design a discriminative training signal via multi-negative sampling and softmax normalization, which pulls together semantically aligned intent pairs while pushing apart irrelevant or noisy ones. Extensive experiments demonstrate that DMICF consistently delivers robust performance across datasets with diverse interaction distributions.
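The two mechanisms the abstract leans on most, sub-intent alignment between user and item subspaces and the discriminative multi-negative softmax signal, are concrete enough to sketch. Below is a minimal PyTorch illustration of how such pieces might fit together; it is an assumption-laden sketch for intuition, not the authors' implementation (their official code is linked above), and every name in it (IntentScorer, num_intents, the einsum-based alignment) is illustrative.

```python
# Sketch of intent-aware scoring and a multi-negative softmax loss.
# Hypothetical names and shapes throughout; not the DMICF codebase.

import torch
import torch.nn.functional as F


class IntentScorer(torch.nn.Module):
    """Scores a user-item pair by aligning K user sub-intents with K item sub-intents."""

    def __init__(self, embed_dim: int, num_intents: int):
        super().__init__()
        # Project a flat embedding into K intent subspaces (one possible disentangling step).
        self.user_intents = torch.nn.Linear(embed_dim, num_intents * embed_dim)
        self.item_intents = torch.nn.Linear(embed_dim, num_intents * embed_dim)
        self.num_intents = num_intents
        self.embed_dim = embed_dim

    def forward(self, user_emb: torch.Tensor, item_emb: torch.Tensor) -> torch.Tensor:
        B, (K, d) = user_emb.size(0), (self.num_intents, self.embed_dim)
        u = self.user_intents(user_emb).view(B, K, d)  # (B, K, d)
        v = self.item_intents(item_emb).view(B, K, d)  # (B, K, d)
        # Compatibility between every user/item sub-intent pair.
        compat = torch.einsum("bkd,bld->bkl", u, v) / d ** 0.5  # (B, K, K)
        # Softly match each user intent to its best-aligned item intent,
        # then aggregate matched compatibilities into a single score.
        align = F.softmax(compat, dim=-1)
        return (align * compat).sum(dim=(1, 2))  # (B,)


def multi_negative_loss(scorer, user_emb, pos_item_emb, neg_item_embs):
    """Softmax-normalized objective over one positive and N sampled negatives."""
    pos = scorer(user_emb, pos_item_emb).unsqueeze(1)  # (B, 1)
    B, N, d = neg_item_embs.shape
    neg = scorer(
        user_emb.unsqueeze(1).expand(B, N, d).reshape(B * N, d),
        neg_item_embs.reshape(B * N, d),
    ).view(B, N)                                       # (B, N)
    logits = torch.cat([pos, neg], dim=1)              # (B, 1 + N)
    # The positive pair sits at index 0; softmax normalization pulls aligned
    # intent pairs together and pushes the sampled negatives apart.
    return F.cross_entropy(logits, torch.zeros(B, dtype=torch.long))
```

The softmax over one positive and N negatives is what gives the training signal its discriminative character: the positive score is only rewarded relative to the negatives, which is what "pulls together" aligned pairs while "pushing apart" noisy ones in the abstract's phrasing.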

Related Papers

CSD-VAR: Content-Style Decomposition in Visual Autoregressive Models (2025-07-18)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Towards Imperceptible JPEG Image Hiding: Multi-range Representations-driven Adversarial Stego Generation (2025-07-11)
NLGCL: Naturally Existing Neighbor Layers Graph Contrastive Learning for Recommendation (2025-07-10)
Generative Head-Mounted Camera Captures for Photorealistic Avatars (2025-07-08)
Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering (2025-07-08)
From ID-based to ID-free: Rethinking ID Effectiveness in Multimodal Collaborative Filtering Recommendation (2025-07-08)
Bridging Domain Generalization to Multimodal Domain Generalization via Unified Representations (2025-07-04)