rQdia: Regularizing Q-Value Distributions With Image Augmentation
Sam Lerman, Jing Bi
Abstract
rQdia regularizes Q-value distributions with augmented images in pixel-based deep reinforcement learning. With a simple auxiliary loss that equalizes these distributions via MSE, rQdia boosts DrQ and SAC on 9/12 and 10/12 tasks respectively in the MuJoCo Continuous Control Suite from pixels, and Data-Efficient Rainbow on 18/26 Atari Arcade environments. Gains are measured in both sample efficiency and longer-term training. Moreover, the addition of rQdia finally propels model-free continuous control from pixels over the state encoding baseline.
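Below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation. The names `rqdia_loss`, `random_shift`, and `q_network` are hypothetical; the augmentation is assumed to be DrQ-style random shifts, and a discrete-action Q-network mapping a batch of frames to per-action values is assumed (the continuous-control case would compare Q(s, a) across augmented states instead).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_shift(imgs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Random-shift augmentation of the kind used by DrQ: replicate-pad the
    frames, then crop back to the original size at a random offset."""
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(n):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out


def rqdia_loss(q_network: nn.Module, obs: torch.Tensor, num_views: int = 2) -> torch.Tensor:
    """Auxiliary-loss sketch: equalize, via MSE, the Q-value distribution of an
    observation with the Q-value distributions of its augmented views."""
    q_orig = q_network(obs)  # shape (batch, num_actions)
    loss = obs.new_zeros(())
    for _ in range(num_views):
        q_aug = q_network(random_shift(obs))  # Q-values for an augmented view
        # Whether to stop gradients on one side is a design choice not
        # specified here; this sketch simply penalizes the squared difference.
        loss = loss + F.mse_loss(q_aug, q_orig)
    return loss / num_views
```

In practice this term would be added, with some weight, to the agent's usual critic/TD loss; the weighting and the choice of augmentation would follow whatever the base agent (DrQ, SAC, or Data-Efficient Rainbow) already uses.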