Training Deep Networks without Learning Rates Through Coin Betting

Francesco Orabona, Tatiana Tommasi

2017-05-22 · NeurIPS 2017 · Stochastic Optimization

Abstract

Deep learning methods achieve state-of-the-art performance in many application scenarios. Yet, these methods require a significant amount of hyperparameter tuning in order to achieve the best results. In particular, tuning the learning rates in the stochastic optimization process remains one of the main bottlenecks. In this paper, we propose a new stochastic gradient descent procedure for deep networks that does not require any learning rate setting. Unlike previous methods, we neither adapt the learning rates nor make use of the assumed curvature of the objective function. Instead, we reduce the optimization process to a game of betting on a coin and propose a learning-rate-free optimal algorithm for this scenario. Theoretical convergence is proven for convex and quasi-convex functions, and empirical evidence shows the advantage of our algorithm over popular stochastic gradient algorithms.
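
The coin-betting reduction can be made concrete. Below is a minimal NumPy sketch of a per-coordinate coin-betting update in the spirit of the paper's COCOB-Backprop algorithm: each coordinate treats the negative gradient as the outcome of a coin flip and bets a fraction of its accumulated winnings on the next flip. This is a reconstruction from the paper's description, not the official code; the class name, the alpha = 100 cap, and the eps initializer are illustrative choices here and should be checked against the official repository.

    import numpy as np

    class COCOBBackprop:
        """Sketch of a coin-betting optimizer in the spirit of
        COCOB-Backprop (Orabona & Tommasi, 2017). Illustrative
        reconstruction, not the official implementation."""

        def __init__(self, w_init, alpha=100.0, eps=1e-8):
            self.w1 = np.asarray(w_init, dtype=float).copy()  # anchor point w_1
            self.w = self.w1.copy()                           # current iterate w_t
            self.alpha = alpha                     # caps the effective betting fraction
            self.L = np.full_like(self.w1, eps)    # running max of |gradient|
            self.G = np.zeros_like(self.w1)        # running sum of |gradient|
            self.reward = np.zeros_like(self.w1)   # accumulated winnings, floored at 0
            self.theta = np.zeros_like(self.w1)    # running sum of coin outcomes

        def step(self, grad):
            g = -np.asarray(grad, dtype=float)     # coin outcome = negative gradient
            self.L = np.maximum(self.L, np.abs(g))
            self.G += np.abs(g)
            self.reward = np.maximum(self.reward + (self.w - self.w1) * g, 0.0)
            self.theta += g
            # Bet a signed fraction of the current capital (L + reward).
            beta = self.theta / (self.L * np.maximum(self.G + self.L, self.alpha * self.L))
            self.w = self.w1 + beta * (self.L + self.reward)
            return self.w

As a toy check, the loop below drives a quadratic toward its minimum with no learning rate anywhere:

    # Minimize f(w) = ||w - 3||^2; the iterate should approach 3 in every coordinate.
    opt = COCOBBackprop(np.zeros(5))
    w = opt.w
    for _ in range(500):
        w = opt.step(2.0 * (w - 3.0))   # gradient of f at the current w
    print(w)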

Results

Task                      Dataset   Metric   Value    Model
Stochastic Optimization   MNIST     NLL      0.0541   MLP

Related Papers

First-order methods for stochastic and finite-sum convex optimization with deterministic constraints (2025-06-25)
Convergence of Momentum-Based Optimization Algorithms with Time-Varying Parameters (2025-06-13)
Underage Detection through a Multi-Task and MultiAge Approach for Screening Minors in Unconstrained Imagery (2025-06-12)
The Sample Complexity of Parameter-Free Stochastic Convex Optimization (2025-06-12)
"What are my options?": Explaining RL Agents with Diverse Near-Optimal Alternatives (Extended) (2025-06-11)
PADAM: Parallel averaged Adam reduces the error for stochastic optimization in scientific machine learning (2025-05-28)
Online distributed optimization for spatio-temporally constrained real-time peer-to-peer energy trading (2025-05-28)
Distribution free M-estimation (2025-05-28)