Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Faster Meta Update Strategy for Noise-Robust Deep Learning

Youjiang Xu, Linchao Zhu, Lu Jiang, Yi Yang

2021-04-30 · Meta-Learning · Image Classification · Learning with Noisy Labels · Deep Learning

Paper · PDF · Code (official)

Abstract

It has been shown that deep neural networks are prone to overfitting on biased training data. To address this issue, meta-learning employs a meta model to correct the training bias. Despite promising performance, meta-learning approaches are currently bottlenecked by very slow training. In this paper, we introduce a novel Faster Meta Update Strategy (FaMUS) that replaces the most expensive step in the meta gradient computation with a faster layer-wise approximation. We find empirically that FaMUS yields not only a reasonably accurate but also a low-variance approximation of the meta gradient. Extensive experiments on two tasks show that our method saves two-thirds of the training time while maintaining comparable, or achieving even better, generalization performance. In particular, our method achieves state-of-the-art performance on both synthetic and realistic noisy labels, and obtains promising results on long-tailed recognition on standard benchmarks.
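The expensive step the abstract refers to can be illustrated with a toy learning-to-reweight sketch: differentiate a clean validation loss through a one-step lookahead update to obtain per-example weights. Note this is not the paper's layer-wise approximation itself, only a minimal NumPy illustration of the meta-gradient computation that FaMUS accelerates; the linear model, data sizes, and noise settings below are hypothetical.

```python
import numpy as np

# Toy sketch of the meta-gradient sample-reweighting step that FaMUS speeds up.
# This is NOT the paper's layer-wise approximation; it is the classic
# one-step-lookahead meta gradient (learning-to-reweight style) on a
# hypothetical linear-regression task with synthetic label noise.

rng = np.random.default_rng(0)
d, n_train, n_val = 5, 40, 20

w_true = rng.normal(size=d)
X_tr = rng.normal(size=(n_train, d))
y_tr = X_tr @ w_true
noisy = rng.choice(n_train, size=10, replace=False)
y_tr[noisy] += rng.normal(scale=5.0, size=10)          # corrupt 10 labels

X_va = rng.normal(size=(n_val, d))
y_va = X_va @ w_true                                   # small clean meta set

w = np.zeros(d)
lr = 0.05
for _ in range(200):
    # Per-example training gradients of 0.5 * (x_i . w - y_i)^2
    G = (X_tr @ w - y_tr)[:, None] * X_tr              # shape (n_train, d)

    # Expensive step: differentiate the validation loss through a one-step
    # lookahead update; d L_val / d eps_i reduces to -lr * (g_val . g_i).
    w_look = w - lr * G.mean(axis=0)
    g_val = ((X_va @ w_look - y_va) @ X_va) / n_val
    meta_grad = -lr * (G @ g_val)

    # Keep examples whose gradients agree with the clean validation gradient.
    eps = np.maximum(-meta_grad, 0.0)
    eps = eps / eps.sum() if eps.sum() > 0 else np.full(n_train, 1.0 / n_train)

    w -= lr * (eps @ G)                                # weighted update

print("parameter error:", np.linalg.norm(w - w_true))
```

In a deep network the `meta_grad` line requires a second-order backward pass through every layer of the lookahead update; FaMUS replaces that pass with a cheaper layer-wise first-order approximation, which is where the reported two-thirds saving in training time comes from.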

Results

Task                  Dataset                           Metric                    Value   Model
Image Classification  CIFAR-10, 40% Symmetric Noise     Percentage correct        95.37   FaMUS
Image Classification  CIFAR-10, 40% Symmetric Noise     Percentage correct        94.2    MentorMix
Image Classification  CIFAR-10, 60% Symmetric Noise     Percentage correct        26.42   FaMUS
Image Classification  CIFAR-10, 60% Symmetric Noise     Percentage correct        91.3    MentorMix
Image Classification  CIFAR-100, 40% Symmetric Noise    Percentage correct        75.91   FaMUS
Image Classification  CIFAR-100, 40% Symmetric Noise    Percentage correct        71.3    MentorMix
Image Classification  CIFAR-100, 60% Symmetric Noise    Percentage correct        64.6    MentorMix
Image Classification  Red MiniImageNet 20% label noise  Accuracy                  51.42   FaMUS
Image Classification  Red MiniImageNet 40% label noise  Accuracy                  48.06   FaMUS
Image Classification  Red MiniImageNet 60% label noise  Accuracy                  45.1    FaMUS
Image Classification  Red MiniImageNet 80% label noise  Accuracy                  35.5    FaMUS
Image Classification  mini WebVision 1.0                Top-1 Accuracy            79.4    FaMUS
Image Classification  mini WebVision 1.0                Top-5 Accuracy            92.8    FaMUS
Image Classification  mini WebVision 1.0                ImageNet Top-1 Accuracy   77      FaMUS
Image Classification  mini WebVision 1.0                ImageNet Top-5 Accuracy   92.76   FaMUS

Related Papers

Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Imbalanced Regression Pipeline Recommendation (2025-07-16)
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels (2025-07-16)