Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels

Ilya Kostrikov, Denis Yarats, Rob Fergus

2020-04-28 · ICLR 2021
Tasks: Atari Games 100k, Image Augmentation, Reinforcement Learning, Data Augmentation, Continuous Control, Contrastive Learning
Links: Paper · PDF · Code (official)

Abstract

We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to regularize the value function. Existing model-free approaches, such as Soft Actor-Critic (SAC), are not able to train deep networks effectively from image pixels. However, the addition of our augmentation method dramatically improves SAC's performance, enabling it to reach state-of-the-art performance on the DeepMind control suite, surpassing model-based (Dreamer, PlaNet, and SLAC) methods and recently proposed contrastive learning (CURL). Our approach can be combined with any model-free reinforcement learning algorithm, requiring only minor modifications. An implementation can be found at https://sites.google.com/view/data-regularized-q.
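The core technique the abstract describes is an input perturbation applied to image observations before they reach the value function. A minimal sketch of one such perturbation, a random shift implemented as replicate-padding followed by a random crop back to the original size, is below; the function name, padding width, and NumPy-based implementation are illustrative assumptions, not the authors' code (their implementation is linked above).

```python
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """Randomly shift an image observation by up to `pad` pixels.

    Illustrative sketch: replicate-pad the (C, H, W) observation at the
    borders, then crop an (H, W) window at a random offset, so the image
    content moves by at most `pad` pixels in each direction.
    """
    rng = rng or np.random.default_rng()
    c, h, w = obs.shape
    # Pad only the spatial dimensions, repeating edge pixels.
    padded = np.pad(obs, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    # Pick a random top-left corner for the crop.
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[:, top:top + h, left:left + w]
```

In training, an augmentation like this would be applied independently to the observations fed to the critic, so that the value estimate is regularized to be consistent across small translations of the same frame.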

Results

Task               | Dataset                       | Metric | Value | Model
Continuous Control | DeepMind Walker Walk (Images) | Return | 921   | DrQ
Continuous Control | DeepMind Cup Catch (Images)   | Return | 963   | DrQ
Continuous Control | DeepMind Cheetah Run (Images) | Return | 660   | DrQ

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Autonomous Resource Management in Microservice Systems via Reinforcement Learning (2025-07-17)