Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


AdaCrossNet: Adaptive Dynamic Loss Weighting for Cross-Modal Contrastive Point Cloud Learning

Oddy Virgantara Putra, Kohichi Ogata, Eko Mulyanto Yuniarno, Mauridhi Hery Purnomo

2025-01-02 · International Journal of Intelligent Engineering and Systems, 2025

Tasks: 3D Point Cloud Linear Classification · Self-Supervised Learning · Contrastive Learning · 3D Part Segmentation · 3D Point Cloud Classification

Abstract

Manual annotation of large-scale point cloud datasets is laborious due to their irregular structure. While cross-modal contrastive learning methods such as CrossPoint and CrossNet have made progress in exploiting multimodal data for self-supervised learning, they still suffer from training instability caused by the static weighting of intra-modal (IM) and cross-modal (CM) losses; static weights fail to account for the varying convergence rates of the different modalities. We propose AdaCrossNet, a novel self-supervised learning framework for point cloud understanding that uses a dynamic weight adjustment mechanism for IM and CM contrastive learning. AdaCrossNet learns representations by simultaneously enhancing the alignment between 3D point clouds and their associated 2D-rendered images within a common latent space. The dynamic weight adjustment mechanism adaptively balances the contributions of the IM and CM losses during training, guided by the convergence behavior of each modality. To stabilize training, the weight updates are smoothed with an exponentially weighted moving average (EWMA). We evaluate AdaCrossNet on the ModelNet40, ShapeNetPart, and ScanObjectNN benchmarks. AdaCrossNet outperforms competing methods, reaching 91.4% accuracy on the ModelNet40 classification task and a mIoU of 85.1% on ShapeNetPart part segmentation. Combined with a DGCNN backbone, it also shows significant improvements on ScanObjectNN, reaching 82.1% accuracy. Our method improves training efficiency while increasing the generalizability of the learned representations across downstream tasks.
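The abstract describes the core idea only at a high level: weight the IM and CM loss terms by how quickly each modality is converging, and smooth those weights with an EWMA. A minimal sketch of one way such a mechanism could look is below; the class name, the inverse-magnitude weighting rule, and the smoothing factor are all assumptions for illustration, not the paper's exact formulation.

```python
class AdaptiveLossWeighter:
    """Sketch of EWMA-smoothed dynamic weighting of two loss terms.

    Hypothetical implementation: the real AdaCrossNet update rule may
    differ. Here the slower-converging (larger) loss gets more weight.
    """

    def __init__(self, alpha=0.9):
        self.alpha = alpha   # EWMA smoothing factor in [0, 1)
        self.ema_im = None   # smoothed intra-modal (IM) loss
        self.ema_cm = None   # smoothed cross-modal (CM) loss

    def update(self, loss_im, loss_cm):
        # Smooth the raw losses with an exponentially weighted moving
        # average so transient spikes do not whip the weights around.
        if self.ema_im is None:
            self.ema_im, self.ema_cm = loss_im, loss_cm
        else:
            self.ema_im = self.alpha * self.ema_im + (1 - self.alpha) * loss_im
            self.ema_cm = self.alpha * self.ema_cm + (1 - self.alpha) * loss_cm
        # Normalize so the two weights sum to 1; the loss with the larger
        # smoothed value (i.e. the modality converging more slowly)
        # contributes more to the combined objective.
        total = self.ema_im + self.ema_cm
        w_im = self.ema_im / total
        w_cm = self.ema_cm / total
        return w_im * loss_im + w_cm * loss_cm
```

In a training loop, `update` would be called once per step with the scalar IM and CM losses (or their `.item()` values used only to form the weights, so gradients still flow through the raw loss tensors).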

Related Papers

- A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
- SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
- SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
- Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
- LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation (2025-07-15)
- Latent Space Consistency for Sparse-View CT Reconstruction (2025-07-15)