Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ADADELTA: An Adaptive Learning Rate Method

Matthew D. Zeiler

2012-12-22 · General Classification

Abstract

We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.
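The abstract's update rule can be sketched as a short, self-contained routine. The sketch below is an illustration of the per-dimension scheme described above, not the paper's reference implementation; the function name `adadelta_update`, the `state` dictionary, and the default decay `rho=0.95` and constant `eps=1e-6` are assumptions chosen for the example.

```python
import numpy as np

def adadelta_update(x, grad, state, rho=0.95, eps=1e-6):
    """One ADADELTA step: the step size per dimension is the ratio of the
    RMS of recent updates to the RMS of recent gradients, so no global
    learning rate needs to be tuned."""
    # Accumulate a decaying average of squared gradients:
    # E[g^2]_t = rho * E[g^2]_{t-1} + (1 - rho) * g_t^2
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad ** 2
    # Compute the update from the ratio of RMS values:
    # delta_t = -(RMS[delta]_{t-1} / RMS[g]_t) * g_t
    delta = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    # Accumulate a decaying average of squared updates:
    # E[dx^2]_t = rho * E[dx^2]_{t-1} + (1 - rho) * delta_t^2
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * delta ** 2
    return x + delta

# Usage sketch: minimize f(x) = x^2 starting from x = 1.0.
state = {"Eg2": np.zeros(1), "Edx2": np.zeros(1)}
x = np.ones(1)
for _ in range(500):
    x = adadelta_update(x, 2 * x, state)  # gradient of x^2 is 2x
```

Note that only first-order information (the gradient) and two running averages per parameter are kept, which is the "minimal computational overhead" the abstract refers to.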

Related Papers

Specialized text classification: an approach to classifying Open Banking transactions (2025-04-10)
Universal Training of Neural Networks to Achieve Bayes Optimal Classification Accuracy (2025-01-13)
Revisiting MLLMs: An In-Depth Analysis of Image Classification Abilities (2024-12-21)
Using Instruction-Tuned Large Language Models to Identify Indicators of Vulnerability in Police Incident Narratives (2024-12-16)
Ramsey Theorems for Trees and a General 'Private Learning Implies Online Learning' Theorem (2024-07-10)
Cross-Block Fine-Grained Semantic Cascade for Skeleton-Based Sports Action Recognition (2024-04-30)
DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models (2024-04-05)
Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency (2024-02-24)