Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Meta-Learning with a Geometry-Adaptive Preconditioner

Suhyun Kang, Duhun Hwang, Moonjung Eo, Taesup Kim, Wonjong Rhee

2023-04-04 · CVPR 2023 · Tasks: Few-Shot Learning, Meta-Learning, Few-Shot Image Classification
Paper · PDF · Code (official)

Abstract

Model-agnostic meta-learning (MAML) is one of the most successful meta-learning algorithms. It has a bi-level optimization structure where the outer-loop process learns a shared initialization and the inner-loop process optimizes task-specific weights. Although MAML relies on standard gradient descent in the inner loop, recent studies have shown that controlling the inner loop's gradient descent with a meta-learned preconditioner can be beneficial. Existing preconditioners, however, cannot simultaneously adapt in a task-specific and path-dependent way. Additionally, they do not satisfy the Riemannian metric condition, which enables steepest-descent learning with the preconditioned gradient. In this study, we propose Geometry-Adaptive Preconditioned gradient descent (GAP), which overcomes these limitations of MAML: GAP can efficiently meta-learn a preconditioner that is dependent on task-specific parameters, and its preconditioner can be shown to be a Riemannian metric. Thanks to these two properties, the geometry-adaptive preconditioner is effective for improving the inner-loop optimization. Experimental results show that GAP outperforms the state-of-the-art MAML family and preconditioned gradient descent-MAML (PGD-MAML) family on a variety of few-shot learning tasks. Code is available at: https://github.com/Suhyun777/CVPR23-GAP.
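To make the abstract's core idea concrete, here is a minimal toy sketch of a preconditioned inner-loop update, theta' = theta - lr * P @ grad. All names and the quadratic toy task are hypothetical illustrations, not the paper's actual GAP implementation (see the linked GitHub repository for that). The one property the sketch does encode is the Riemannian metric condition the abstract refers to: constructing P as M @ M.T + eps*I guarantees a symmetric positive-definite preconditioner, which is what makes the update a steepest-descent step under a Riemannian metric.

```python
import numpy as np

def make_spd_preconditioner(M, eps=1e-3):
    """Build a symmetric positive-definite preconditioner from an
    unconstrained square matrix M via M @ M.T + eps * I.
    SPD-ness is the (Riemannian metric) condition highlighted
    in the abstract; in GAP the preconditioner is additionally
    meta-learned and task-dependent, which this toy omits."""
    d = M.shape[0]
    return M @ M.T + eps * np.eye(d)

def inner_loop(theta, grad_fn, precond, lr=0.1, steps=5):
    """Task-specific adaptation: preconditioned gradient descent,
    theta <- theta - lr * P @ grad(theta)."""
    for _ in range(steps):
        theta = theta - lr * precond @ grad_fn(theta)
    return theta

# Hypothetical toy task: quadratic loss L(theta) = 0.5 * theta^T H theta
# with ill-conditioned curvature, so preconditioning matters.
H = np.diag([10.0, 1.0])
grad_fn = lambda th: H @ th

rng = np.random.default_rng(0)
P = make_spd_preconditioner(rng.standard_normal((2, 2)))
theta0 = np.array([1.0, 1.0])

adapted = inner_loop(theta0, grad_fn, P)        # preconditioned descent
plain = inner_loop(theta0, grad_fn, np.eye(2))  # vanilla MAML-style inner loop
```

In GAP the preconditioner would be produced by a meta-learned function of the task-specific parameters rather than fixed up front; this sketch only illustrates the update rule and the positive-definiteness constraint.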

Results

Task | Dataset | Metric | Value | Model
Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 71.55 | GAP
Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 70.75 | Approximate GAP
Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 54.86 | GAP
Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 53.52 | Approximate GAP
Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 57.6 | GAP
Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 56.86 | Approximate GAP
Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 74.9 | GAP
Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 74.41 | Approximate GAP
Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 71.55 | GAP
Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | Accuracy | 70.75 | Approximate GAP
Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 54.86 | GAP
Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | Accuracy | 53.52 | Approximate GAP
Few-Shot Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 57.6 | GAP
Few-Shot Image Classification | Tiered ImageNet 5-way (1-shot) | Accuracy | 56.86 | Approximate GAP
Few-Shot Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 74.9 | GAP
Few-Shot Image Classification | Tiered ImageNet 5-way (5-shot) | Accuracy | 74.41 | Approximate GAP

Related Papers

GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Imbalanced Regression Pipeline Recommendation (2025-07-16)
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)
Mixture of Experts in Large Language Models (2025-07-15)
Iceberg: Enhancing HLS Modeling with Synthetic Data (2025-07-14)
Meta-Reinforcement Learning for Fast and Data-Efficient Spectrum Allocation in Dynamic Wireless Networks (2025-07-13)
ViT-ProtoNet for Few-Shot Image Classification: A Multi-Benchmark Evaluation (2025-07-12)