Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning

Dingkang Liang, Tianrui Feng, Xin Zhou, Yumeng Zhang, Zhikang Zou, Xiang Bai

Published: 2024-10-10
Tasks: 3D Parameter-Efficient Fine-Tuning for Classification · Parameter-Efficient Fine-Tuning · 3D Point Cloud Classification · General Knowledge · Point Cloud Classification
Links: Paper · PDF · Code (official)

Abstract

Recently, leveraging pre-training techniques to enhance point cloud models has become a hot research topic. However, existing approaches typically require full fine-tuning of pre-trained models to achieve satisfactory performance on downstream tasks, which is storage-intensive and computationally demanding. To address this issue, we propose a novel Parameter-Efficient Fine-Tuning (PEFT) method for point clouds, called PointGST (Point cloud Graph Spectral Tuning). PointGST freezes the pre-trained model and introduces a lightweight, trainable Point Cloud Spectral Adapter (PCSA) that fine-tunes parameters in the spectral domain. The core idea rests on two observations: 1) the inner tokens from frozen models may be confused with one another in the spatial domain; 2) task-specific intrinsic information is important for transferring general knowledge to the downstream task. Specifically, PointGST transfers the point tokens from the spatial domain to the spectral domain, effectively de-correlating confusion among tokens by separating them with orthogonal components. Moreover, the generated spectral basis carries intrinsic information about the downstream point clouds, enabling more targeted tuning. As a result, PointGST facilitates the efficient transfer of general knowledge to downstream tasks while significantly reducing training costs. Extensive experiments on challenging point cloud datasets across various tasks demonstrate that PointGST not only outperforms its fully fine-tuned counterpart but also significantly reduces the number of trainable parameters, making it a promising solution for efficient point cloud learning. It improves upon a solid baseline by +2.28%, +1.16%, and +2.78%, reaching 99.48%, 97.76%, and 96.18% on the ScanObjectNN OBJ_BG, OBJ_ONLY, and PB_T50_RS splits, respectively. This establishes a new state of the art while using only 0.67% of the trainable parameters.
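The spectral-domain idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the official PointGST implementation): build a kNN graph over the downstream point cloud, take the eigenbasis of its normalized Laplacian as an orthogonal spectral basis, project the frozen model's point tokens into that basis (a graph Fourier transform), apply a small low-rank "adapter" there, and transform back as a residual. All function and variable names here are assumptions for illustration.

```python
import numpy as np

def knn_graph_laplacian(points, k=8):
    """Symmetric kNN adjacency over 3D points, then the normalized
    graph Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    n = points.shape[0]
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)              # exclude self-neighbors
    idx = np.argsort(d2, axis=1)[:, :k]       # k nearest neighbors per point
    A = np.zeros((n, n))
    A[np.repeat(np.arange(n), k), idx.ravel()] = 1.0
    A = np.maximum(A, A.T)                    # symmetrize the graph
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    return np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_adapter(tokens, points, rank=4, k=8, seed=0):
    """Toy spectral-domain adapter (hypothetical): project tokens onto the
    Laplacian eigenbasis, apply a low-rank map, project back, add residually."""
    L = knn_graph_laplacian(points, k)
    _, U = np.linalg.eigh(L)                  # orthogonal spectral basis
    rng = np.random.default_rng(seed)
    c = tokens.shape[1]
    W_down = rng.normal(scale=0.02, size=(c, rank))  # stand-in trainable weights
    W_up = np.zeros((rank, c))                # zero-init: adapter starts as identity
    spec = U.T @ tokens                       # spatial -> spectral
    delta = spec @ W_down @ W_up              # lightweight tuning in spectral domain
    return tokens + U @ delta                 # spectral -> spatial, residual add

# usage: 64 points with 32-dim tokens from a (frozen) backbone
pts = np.random.default_rng(1).normal(size=(64, 3))
tok = np.random.default_rng(2).normal(size=(64, 32))
out = spectral_adapter(tok, pts)
```

Because the up-projection is zero-initialized, the adapter is an identity map before training, a common PEFT design choice so that tuning starts from the frozen model's behavior.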

Results

Task                                     | Dataset      | Metric           | Value | Model
Shape Representation Of 3D Point Clouds  | ScanObjectNN | OBJ-BG (OA)      | 99.48 | PointGST
Shape Representation Of 3D Point Clouds  | ScanObjectNN | OBJ-ONLY (OA)    | 97.76 | PointGST
Shape Representation Of 3D Point Clouds  | ScanObjectNN | Overall Accuracy | 96.18 | PointGST
Shape Representation Of 3D Point Clouds  | ModelNet40   | Overall Accuracy | 95.3  | PointGST
3D Point Cloud Classification            | ScanObjectNN | OBJ-BG (OA)      | 99.48 | PointGST
3D Point Cloud Classification            | ScanObjectNN | OBJ-ONLY (OA)    | 97.76 | PointGST
3D Point Cloud Classification            | ScanObjectNN | Overall Accuracy | 96.18 | PointGST
3D Point Cloud Classification            | ModelNet40   | Overall Accuracy | 95.3  | PointGST
3D Point Cloud Reconstruction            | ScanObjectNN | OBJ-BG (OA)      | 99.48 | PointGST
3D Point Cloud Reconstruction            | ScanObjectNN | OBJ-ONLY (OA)    | 97.76 | PointGST
3D Point Cloud Reconstruction            | ScanObjectNN | Overall Accuracy | 96.18 | PointGST
3D Point Cloud Reconstruction            | ModelNet40   | Overall Accuracy | 95.3  | PointGST

Related Papers

Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
PROL: Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning (2025-07-16)
Reinforcement Fine-Tuning Naturally Mitigates Forgetting in Continual Post-Training (2025-07-07)
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization (2025-07-06)
Exploring Adapter Design Tradeoffs for Low Resource Music Generation (2025-06-26)
WordCon: Word-level Typography Control in Scene Text Rendering (2025-06-26)
Optimising Language Models for Downstream Tasks: A Post-Training Perspective (2025-06-26)
Progtuning: Progressive Fine-tuning Framework for Transformer-based Language Models (2025-06-26)