Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Exploiting Inductive Bias in Transformer for Point Cloud Classification and Segmentation

Zihao Li, Pan Gao, Hui Yuan, Ran Wei, Manoranjan Paul

2023-04-27 · 3D Object Classification · 3D Part Segmentation · Point Cloud Classification

Paper | PDF | Code (official)

Abstract

Discovering inter-point connections for efficient high-dimensional feature extraction from point coordinates is a key challenge in processing point clouds. Most existing methods focus on designing efficient local feature extractors while ignoring global connections, or vice versa. In this paper, we design a new Inductive Bias-aided Transformer (IBT) method to learn 3D inter-point relations, which considers both local and global attention. Specifically, exploiting local spatial coherence, local feature learning is performed through Relative Position Encoding and Attentive Feature Pooling. We then incorporate the learned locality into the Transformer module: the local feature modulates the value component in self-attention, adjusting the relationship between the channels of each point, which enhances the self-attention mechanism with locality-based channel interaction. We demonstrate its superiority experimentally on classification and segmentation tasks. The code is available at: https://github.com/jiamang/IBT
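The core idea in the abstract — a learned local feature gating the value component of self-attention channel-wise — can be sketched as follows. This is a minimal NumPy illustration, not the official implementation: the k-NN attentive pooling stand-in, the sigmoid gate, and all names (`local_feature`, `ibt_self_attention`) are assumptions made for exposition; the paper's Relative Position Encoding and Attentive Feature Pooling are learned modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def knn_indices(xyz, k):
    # pairwise squared distances between points, then k nearest neighbours
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=-1)[:, :k]

def local_feature(feats, xyz, k=8):
    # toy stand-in for Relative Position Encoding + Attentive Feature
    # Pooling: weight each neighbour by a softmax score, then pool
    idx = knn_indices(xyz, k)                  # (N, k) neighbour indices
    neigh = feats[idx]                         # (N, k, C) gathered features
    scores = softmax(neigh.sum(-1), axis=-1)   # (N, k) attentive weights
    return (scores[..., None] * neigh).sum(1)  # (N, C) pooled local feature

def ibt_self_attention(feats, xyz, Wq, Wk, Wv):
    q, kmat, v = feats @ Wq, feats @ Wk, feats @ Wv
    # locality-based channel interaction: the local feature gates the
    # value component channel-wise (sigmoid keeps the gate in [0, 1])
    gate = 1.0 / (1.0 + np.exp(-local_feature(feats, xyz)))
    v = v * gate
    attn = softmax(q @ kmat.T / np.sqrt(q.shape[-1]), axis=-1)
    return attn @ v                            # (N, C) attended features

N, C = 32, 16
xyz = rng.normal(size=(N, 3))                  # point coordinates
feats = rng.normal(size=(N, C))                # per-point features
Wq, Wk, Wv = (rng.normal(size=(C, C)) * 0.1 for _ in range(3))
out = ibt_self_attention(feats, xyz, Wq, Wk, Wv)
print(out.shape)  # (32, 16)
```

The gating happens before the attention-weighted sum, so the global attention map still mixes all points, but each point's contribution is first re-weighted per channel by its local neighbourhood.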

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation | ShapeNet-Part | Instance Average IoU | 86.2 | Ours |
| 3D | ModelNet40 | Classification Accuracy | 93.6 | Ours |
| Shape Representation Of 3D Point Clouds | ModelNet40 | Classification Accuracy | 93.6 | Ours |
| 3D Object Classification | ModelNet40 | Classification Accuracy | 93.6 | Ours |
| 3D Point Cloud Classification | ModelNet40 | Classification Accuracy | 93.6 | Ours |
| Point Cloud Classification | ISPRS | Average F1 | 82.8 | Ours |
| 3D Classification | ModelNet40 | Classification Accuracy | 93.6 | Ours |
| 10-shot image generation | ShapeNet-Part | Instance Average IoU | 86.2 | Ours |
| 3D Point Cloud Reconstruction | ModelNet40 | Classification Accuracy | 93.6 | Ours |

Related Papers

- BeyondRPC: A Contrastive and Augmentation-Driven Framework for Robust Point Cloud Understanding (2025-06-15)
- Rethinking Gradient-based Adversarial Attacks on Point Cloud Classification (2025-05-28)
- SMART-PC: Skeletal Model Adaptation for Robust Test-Time Training in Point Clouds (2025-05-26)
- Hybrid-Emba3D: Geometry-Aware and Cross-Path Feature Hybrid Enhanced State Space Model for Point Cloud Classification (2025-05-16)
- Optimal Control for Transformer Architectures: Enhancing Generalization, Robustness and Efficiency (2025-05-16)
- Streaming Sliced Optimal Transport (2025-05-11)
- FA-KPConv: Introducing Euclidean Symmetries to KPConv via Frame Averaging (2025-05-07)
- DG-MVP: 3D Domain Generalization via Multiple Views of Point Clouds for Classification (2025-04-16)