Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis

Xu Yan, Heshen Zhan, Chaoda Zheng, Jiantao Gao, Ruimao Zhang, Shuguang Cui, Zhen Li

2022-10-09 | Representation Learning | Knowledge Distillation | 3D Point Cloud Classification

Abstract

Although recent point cloud analysis achieves impressive progress, the paradigm of representation learning from a single modality gradually meets its bottleneck. In this work, we take a step towards more discriminative 3D point cloud representation by fully taking advantage of images, which inherently contain richer appearance information, e.g., texture, color, and shading. Specifically, this paper introduces a simple but effective point cloud cross-modality training (PointCMT) strategy, which utilizes view images, i.e., rendered or projected 2D images of the 3D object, to boost point cloud analysis. In practice, to effectively acquire auxiliary knowledge from view images, we develop a teacher-student framework and formulate the cross-modal learning as a knowledge distillation problem. PointCMT eliminates the distribution discrepancy between different modalities through novel feature and classifier enhancement criteria and effectively avoids potential negative transfer. Note that PointCMT improves the point-only representation without any architecture modification. Extensive experiments verify significant gains on various datasets using appealing backbones, i.e., equipped with PointCMT, PointNet++ and PointMLP achieve state-of-the-art performance on two benchmarks, i.e., 94.4% and 86.7% accuracy on ModelNet40 and ScanObjectNN, respectively. Code will be made available at https://github.com/ZhanHeshen/PointCMT.
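The abstract formulates cross-modal learning as knowledge distillation: an image-branch teacher produces soft targets and features, and the point-cloud student is trained to match them. Below is a minimal NumPy sketch of that kind of objective. The function names, the temperature value, and the plain L2 feature term are illustrative assumptions on my part; the paper's actual feature and classifier enhancement criteria differ and live in the linked repository.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Hinton-style soft-label distillation: KL(teacher || student)
    # on temperature-softened class distributions, scaled by T^2.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-9) - np.log(q + 1e-9))).sum(axis=-1)
    return float(kl.mean() * T * T)

def feature_align_loss(student_feat, teacher_feat):
    # Mean-squared alignment between point-branch and image-branch
    # features; a stand-in for the paper's feature-enhancement criterion.
    return float(((student_feat - teacher_feat) ** 2).mean())

# A training step would combine both terms with the usual
# cross-entropy on ground-truth labels, e.g.:
#   total = ce_loss + alpha * kd_loss(...) + beta * feature_align_loss(...)
```

In this framing the teacher (image branch) is used only at training time; at inference the student consumes points alone, which is why the abstract can claim improvement "without architecture modification."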

Results

| Task                                    | Dataset      | Metric           | Value | Model             |
|-----------------------------------------|--------------|------------------|-------|-------------------|
| Shape Representation Of 3D Point Clouds | ScanObjectNN | Mean Accuracy    | 84.8  | PointCMT          |
| Shape Representation Of 3D Point Clouds | ScanObjectNN | Overall Accuracy | 86.7  | PointCMT          |
| Shape Representation Of 3D Point Clouds | ModelNet40   | Mean Accuracy    | 91.2  | PointNet2+PointCMT |
| Shape Representation Of 3D Point Clouds | ModelNet40   | Overall Accuracy | 94.4  | PointNet2+PointCMT |
| 3D Point Cloud Classification           | ScanObjectNN | Mean Accuracy    | 84.8  | PointCMT          |
| 3D Point Cloud Classification           | ScanObjectNN | Overall Accuracy | 86.7  | PointCMT          |
| 3D Point Cloud Classification           | ModelNet40   | Mean Accuracy    | 91.2  | PointNet2+PointCMT |
| 3D Point Cloud Classification           | ModelNet40   | Overall Accuracy | 94.4  | PointNet2+PointCMT |
| 3D Point Cloud Reconstruction           | ScanObjectNN | Mean Accuracy    | 84.8  | PointCMT          |
| 3D Point Cloud Reconstruction           | ScanObjectNN | Overall Accuracy | 86.7  | PointCMT          |
| 3D Point Cloud Reconstruction           | ModelNet40   | Mean Accuracy    | 91.2  | PointNet2+PointCMT |
| 3D Point Cloud Reconstruction           | ModelNet40   | Overall Accuracy | 94.4  | PointNet2+PointCMT |

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)