Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud

Mutian Xu, Junhao Zhang, Zhipeng Zhou, Mingye Xu, Xiaojuan Qi, Yu Qiao

2020-12-20 · Point Cloud Segmentation · 3D Object Classification · 3D Part Segmentation · 3D Point Cloud Classification · Point Cloud Classification

Abstract

In 2D image processing, some approaches decompose images into high- and low-frequency components to describe edges and smooth regions, respectively. Similarly, the contours and flat areas of 3D objects, such as the boundary and seat of a chair, describe different but complementary geometries. However, this insight is missing from previous deep networks, which understand point clouds by treating all points or local patches equally. To address this, we propose the Geometry-Disentangled Attention Network (GDANet). GDANet introduces a Geometry-Disentangle Module that dynamically disentangles point clouds into the contour and flat parts of 3D objects, denoted respectively as sharp- and gentle-variation components. GDANet then applies a Sharp-Gentle Complementary Attention Module that treats the features from the sharp- and gentle-variation components as two holistic representations and pays different attention to each while fusing them with the original point cloud features. In this way, our method captures and refines holistic and complementary 3D geometric semantics from the two disentangled components to supplement local information. Extensive experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters. Code is released at https://github.com/mutianxu/GDANet.
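The core idea of the Geometry-Disentangle Module — splitting a point cloud into a sharp-variation (contour) subset and a gentle-variation (flat) subset — can be illustrated with a simple geometric sketch. The snippet below scores each point by its distance from the centroid of its k nearest neighbours, a graph-Laplacian-style variation measure; this is an assumption-laden simplification for illustration, not the paper's implementation (GDANet performs the disentangling dynamically on learned features inside the network, and `disentangle_points`, `k`, and `m` are names chosen here, not from the official code).

```python
import numpy as np

def disentangle_points(points, k=10, m=128):
    """Split an (N, 3) point cloud into 'sharp' (contour-like) and
    'gentle' (flat-like) subsets of m points each, using a simple
    neighbourhood-variation score. Illustrative sketch only."""
    # Pairwise squared distances between all points: (N, N).
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    # Indices of the k nearest neighbours, excluding the point itself.
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]
    # Variation score: distance of each point from its neighbourhood mean.
    # Large score -> edge/contour region; small score -> flat region.
    centroid = points[nn].mean(axis=1)
    score = np.linalg.norm(points - centroid, axis=1)
    order = np.argsort(score)
    gentle = points[order[:m]]    # lowest-variation points
    sharp = points[order[-m:]]    # highest-variation points
    return sharp, gentle

# Example: a flat plane with a raised ridge. Points on or near the
# ridge have high variation scores and land in the 'sharp' subset.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(512, 3))
pts[:, 2] = 0.0
pts[:50, 2] = 0.5  # raise 50 points to form a ridge
sharp, gentle = disentangle_points(pts, k=10, m=64)
```

In GDANet the two subsets are then processed as two holistic representations and fused back into the per-point features via the Sharp-Gentle Complementary Attention Module, rather than being treated as a hard geometric split as in this sketch.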

Results

Task | Dataset | Metric | Value | Model
Semantic Segmentation | ShapeNet-Part | Class Average IoU | 85 | GDANet
Semantic Segmentation | ShapeNet-Part | Instance Average IoU | 86.5 | GDANet
Shape Representation Of 3D Point Clouds | ModelNet40 | Overall Accuracy | 93.8 | GDANet
3D Point Cloud Classification | ModelNet40 | Overall Accuracy | 93.8 | GDANet
Point Cloud Classification | PointCloud-C | mean Corruption Error (mCE) | 0.892 | GDANet
Point Cloud Segmentation | PointCloud-C | mean Corruption Error (mCE) | 0.923 | GDANet

Related Papers

TSDASeg: A Two-Stage Model with Direct Alignment for Interactive Point Cloud Segmentation (2025-06-26)
Asymmetric Dual Self-Distillation for 3D Self-Supervised Representation Learning (2025-06-26)
BeyondRPC: A Contrastive and Augmentation-Driven Framework for Robust Point Cloud Understanding (2025-06-15)
Enhancing Human-Robot Collaboration: A Sim2Real Domain Adaptation Algorithm for Point Cloud Segmentation in Industrial Environments (2025-06-11)
OpenMaskDINO3D: Reasoning 3D Segmentation via Large Language Model (2025-06-05)
Point Cloud Segmentation of Agricultural Vehicles using 3D Gaussian Splatting (2025-06-05)
Rethinking Gradient-based Adversarial Attacks on Point Cloud Classification (2025-05-28)
SMART-PC: Skeletal Model Adaptation for Robust Test-Time Training in Point Clouds (2025-05-26)