Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MKConv: Multidimensional Feature Representation for Point Cloud Analysis

Sungmin Woo, Dogyoon Lee, Sangwon Hwang, Woojin Kim, Sangyoun Lee

2021-07-27 · Semantic Segmentation · 3D Part Segmentation · 3D Point Cloud Classification

Abstract

Despite the remarkable success of deep learning, an optimal convolution operation on point clouds remains elusive owing to their irregular data structure. Existing methods mainly focus on designing an effective continuous kernel function that can handle an arbitrary point in continuous space. Various approaches exhibiting high performance have been proposed, but we observe that the standard pointwise feature is represented by 1D channels and can become more informative when its representation involves additional spatial feature dimensions. In this paper, we present Multidimensional Kernel Convolution (MKConv), a novel convolution operator that learns to transform the point feature representation from a vector to a multidimensional matrix. Unlike standard point convolution, MKConv proceeds via two steps. (i) It first activates the spatial dimensions of local feature representation by exploiting multidimensional kernel weights. These spatially expanded features can represent their embedded information through spatial correlation as well as channel correlation in feature space, carrying more detailed local structure information. (ii) Then, discrete convolutions are applied to the multidimensional features which can be regarded as a grid-structured matrix. In this way, we can utilize the discrete convolutions for point cloud data without voxelization that suffers from information loss. Furthermore, we propose a spatial attention module, Multidimensional Local Attention (MLA), to provide comprehensive structure awareness within the local point set by reweighting the spatial feature dimensions. We demonstrate that MKConv has excellent applicability to point cloud processing tasks including object classification, object part segmentation, and scene semantic segmentation with superior results.
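The two-step operation described in the abstract can be sketched in NumPy. This is an illustrative toy, not the authors' implementation: `lift_w` stands in for the learned multidimensional kernel weights of step (i), the spatial size of the expanded matrix and the plain "valid" 2D convolution of step (ii) are assumptions, and the softmax reweighting only gestures at what the MLA module does.

```python
import numpy as np

def conv2d_valid(x, k):
    """Discrete 2D cross-correlation with 'valid' padding (step ii)."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def mkconv_sketch(feats, lift_w, kernel):
    """Toy MKConv-style forward pass for N pointwise features.

    feats:  (N, C)    standard 1D-channel point features
    lift_w: (C, S, S) hypothetical learned multidimensional kernel weights
    kernel: (K, K)    discrete convolution kernel applied to each matrix
    """
    # Step (i): expand each C-dim feature vector into an S x S matrix,
    # giving the feature spatial dimensions in addition to its channels.
    expanded = np.einsum('nc,cij->nij', feats, lift_w)          # (N, S, S)
    # MLA-like reweighting (assumption): softmax attention over the
    # spatial feature dimensions of each expanded matrix.
    attn = np.exp(expanded - expanded.max(axis=(1, 2), keepdims=True))
    attn /= attn.sum(axis=(1, 2), keepdims=True)
    expanded = expanded * attn
    # Step (ii): treat each matrix as grid-structured data and apply a
    # discrete 2D convolution, with no voxelization of the point cloud.
    return np.stack([conv2d_valid(m, kernel) for m in expanded])

rng = np.random.default_rng(0)
out = mkconv_sketch(rng.normal(size=(5, 8)),     # 5 points, 8 channels
                    rng.normal(size=(8, 4, 4)),  # lift to 4 x 4 matrices
                    rng.normal(size=(3, 3)))     # 3 x 3 discrete kernel
print(out.shape)  # (5, 2, 2)
```

The point of the sketch is only the data-flow: a 1D channel vector becomes a grid-structured matrix, so ordinary discrete convolutions apply directly to per-point features.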

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation | S3DIS Area5 | mAcc | 75.1 | MKConv |
| Semantic Segmentation | S3DIS Area5 | mIoU | 67.7 | MKConv |
| Semantic Segmentation | S3DIS Area5 | oAcc | 89.6 | MKConv |
| Semantic Segmentation | ShapeNet-Part | Instance Average IoU | 86.7 | MKConv |
| 3D Point Cloud Classification | ModelNet40 | Overall Accuracy | 94 | MKConv |

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)