Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Attention-based Point Cloud Edge Sampling

Chengzhi Wu, Junwei Zheng, Julius Pfrommer, Jürgen Beyerer

2023-02-28 · CVPR 2023 · 3D Part Segmentation · 3D Point Cloud Classification

Paper · PDF · Code (official)

Abstract

Point cloud sampling is a comparatively underexplored topic for this data representation. The most commonly used methods are still classical random sampling and farthest point sampling. With the development of neural networks, various methods have been proposed to sample point clouds in a task-driven, learned manner. However, these methods are mostly generative, synthesizing new points rather than directly selecting a subset of the existing ones. Inspired by the Canny edge detection algorithm for images, and with the help of the attention mechanism, this paper proposes a non-generative Attention-based Point cloud Edge Sampling method (APES) that captures salient points on the point cloud outline. Both qualitative and quantitative experimental results show the superior performance of the proposed sampling method on common benchmark tasks.
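The core idea of non-generative, score-based selection can be illustrated with a toy sketch. This is not the official APES implementation: the function name is a placeholder, and raw coordinates stand in for the learned features APES computes; the sketch only shows the mechanism of scoring points via a self-attention map and keeping the top-k indices directly, instead of generating new coordinates.

```python
import numpy as np

def attention_score_sampling(points: np.ndarray, k_out: int, temperature: float = 1.0):
    """Toy sketch of attention-score-based point selection (not the official APES code).

    points: (n, 3) array of xyz coordinates.
    k_out:  number of points to keep.
    Returns the selected points and their indices into the input.
    """
    # Pairwise dot-product "attention" logits over raw coordinates
    # (a stand-in for attention over learned per-point features).
    logits = points @ points.T / temperature

    # Row-wise softmax -> (n, n) row-stochastic attention map.
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)

    # Heuristic saliency score: points whose attention rows deviate most
    # from uniform (i.e. are most peaked) are treated as edge-like.
    scores = attn.std(axis=1)

    # Direct top-k selection of existing points -- no new points are generated.
    idx = np.argsort(-scores)[:k_out]
    return points[idx], idx
```

Because the output is a set of indices into the original cloud, the selected subset is always a strict subset of the input points, which is the defining property distinguishing this family of samplers from generative ones.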

Results

Task | Dataset | Metric | Value | Model
3D Part Segmentation | ShapeNet-Part | Class Average IoU | 83.7 | APES (global-based downsample)
3D Part Segmentation | ShapeNet-Part | Instance Average IoU | 85.8 | APES (global-based downsample)
3D Part Segmentation | ShapeNet-Part | Class Average IoU | 83.1 | APES (local-based downsample)
3D Part Segmentation | ShapeNet-Part | Instance Average IoU | 85.6 | APES (local-based downsample)
3D Point Cloud Classification | ModelNet40 | Overall Accuracy | 93.8 | APES (global-based downsample)
3D Point Cloud Classification | ModelNet40 | Overall Accuracy | 93.5 | APES (local-based downsample)

Related Papers

Asymmetric Dual Self-Distillation for 3D Self-Supervised Representation Learning (2025-06-26)
Rethinking Gradient-based Adversarial Attacks on Point Cloud Classification (2025-05-28)
SMART-PC: Skeletal Model Adaptation for Robust Test-Time Training in Point Clouds (2025-05-26)
DG-MVP: 3D Domain Generalization via Multiple Views of Point Clouds for Classification (2025-04-16)
HoloPart: Generative 3D Part Amodal Segmentation (2025-04-10)
Introducing the Short-Time Fourier Kolmogorov Arnold Network: A Dynamic Graph CNN Approach for Tree Species Classification in 3D Point Clouds (2025-03-31)
Open-Vocabulary Semantic Part Segmentation of 3D Human (2025-02-27)
Point-LN: A Lightweight Framework for Efficient Point Cloud Classification Using Non-Parametric Positional Encoding (2025-01-24)