Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Adaptivity without Compromise: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization

Aaron Defazio, Samy Jelassi

2021-01-26 · Stochastic Optimization
Paper · PDF · Code (official)

Abstract

We introduce MADGRAD, a novel optimization method in the family of AdaGrad adaptive gradient methods. MADGRAD shows excellent performance on deep learning optimization problems from multiple fields, including classification and image-to-image tasks in vision, and recurrent and bidirectionally-masked models in natural language processing. For each of these tasks, MADGRAD matches or outperforms both SGD and ADAM in test set performance, even on problems for which adaptive methods normally perform poorly.
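The abstract describes MADGRAD as a momentumized, adaptive, dual-averaged member of the AdaGrad family. As a rough illustration of those three ingredients, here is a minimal single-parameter sketch of a dual-averaged adaptive update in the spirit the paper describes; the specific details (sqrt-scaled step weights, cube-root adaptivity in the denominator, momentum as interpolation toward the dual-averaged point) are taken from the paper's algorithm, and all hyperparameter values below are illustrative assumptions, not the authors' settings:

```python
import math

def madgrad_style_step(x, x0, s, nu, grad, k, lr=0.1, momentum=0.9, eps=1e-6):
    """One sketch step of a MADGRAD-style dual-averaged adaptive update (scalar case)."""
    # Step weight grows with sqrt(iteration), as in dual averaging.
    lam = lr * math.sqrt(k + 1)
    s = s + lam * grad              # weighted running sum of gradients
    nu = nu + lam * grad * grad     # weighted running sum of squared gradients
    # Dual-averaged point: step from the INITIAL iterate x0, with
    # cube-root (rather than AdaGrad's square-root) adaptivity.
    z = x0 - s / (nu ** (1.0 / 3.0) + eps)
    # Momentum as exponential interpolation toward the dual-averaged point.
    x = momentum * x + (1.0 - momentum) * z
    return x, s, nu

# Toy usage: minimize f(x) = x^2 starting from x = 5.0.
x = x0 = 5.0
s = nu = 0.0
for k in range(200):
    grad = 2.0 * x                  # gradient of x^2
    x, s, nu = madgrad_style_step(x, x0, s, nu, grad, k)
```

Note that, unlike SGD or Adam, the dual-averaged point `z` is always anchored at the initial iterate `x0`; the current iterate `x` only tracks `z` through the momentum interpolation.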

Related Papers

First-order methods for stochastic and finite-sum convex optimization with deterministic constraints (2025-06-25)
Convergence of Momentum-Based Optimization Algorithms with Time-Varying Parameters (2025-06-13)
Underage Detection through a Multi-Task and MultiAge Approach for Screening Minors in Unconstrained Imagery (2025-06-12)
The Sample Complexity of Parameter-Free Stochastic Convex Optimization (2025-06-12)
"What are my options?": Explaining RL Agents with Diverse Near-Optimal Alternatives (Extended) (2025-06-11)
PADAM: Parallel averaged Adam reduces the error for stochastic optimization in scientific machine learning (2025-05-28)
Online distributed optimization for spatio-temporally constrained real-time peer-to-peer energy trading (2025-05-28)
Distribution free M-estimation (2025-05-28)