Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings

Lili Chen, Kimin Lee, Aravind Srinivas, Pieter Abbeel

2021-03-04 · NeurIPS 2021

Tasks: Reinforcement Learning · Atari Games · Transfer Learning · reinforcement-learning

Paper · PDF · Code (official)

Abstract

Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample-efficiency by reusing experiences from the past, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such techniques demand high memory and computational bandwidth. In this paper, we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements. To reduce the computational overhead of gradient updates in CNNs, we freeze the lower layers of CNN encoders early in training due to early convergence of their parameters. Additionally, we reduce memory requirements by storing the low-dimensional latent vectors for experience replay instead of high-dimensional images, enabling an adaptive increase in the replay buffer capacity, a useful technique in constrained-memory settings. In our experiments, we show that SEER does not degrade the performance of RL agents while significantly saving computation and memory across a diverse set of DeepMind Control environments and Atari games.
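The memory-side idea described above can be sketched in a few lines: because an encoder latent is far smaller than a raw image observation, a replay buffer that stores latents can hold many more transitions under the same memory budget. The sketch below is a minimal illustration only, not the paper's implementation; the observation shape, latent dimension, and buffer API are all assumed for the example.

```python
import numpy as np

# Hypothetical sizes for illustration (not the paper's exact numbers):
IMG_SHAPE = (84, 84, 4)   # stacked grayscale Atari-style frames (assumed)
LATENT_DIM = 50           # low-dimensional encoder output (assumed)

img_bytes = int(np.prod(IMG_SHAPE)) * np.dtype(np.uint8).itemsize
latent_bytes = LATENT_DIM * np.dtype(np.float32).itemsize

# Fix a memory budget equal to 100 raw-image transitions; storing latents
# instead lets the buffer hold proportionally more transitions.
memory_budget = 100 * img_bytes
capacity_images = memory_budget // img_bytes
capacity_latents = memory_budget // latent_bytes

class LatentReplayBuffer:
    """Toy replay buffer that stores encoder latents, not raw observations."""
    def __init__(self, capacity, latent_dim):
        self.buffer = np.zeros((capacity, latent_dim), dtype=np.float32)
        self.idx, self.full, self.capacity = 0, False, capacity

    def add(self, latent):
        # Overwrite oldest entries once the buffer wraps around.
        self.buffer[self.idx] = latent
        self.idx = (self.idx + 1) % self.capacity
        self.full = self.full or self.idx == 0

    def sample(self, batch_size, rng=np.random):
        high = self.capacity if self.full else self.idx
        return self.buffer[rng.randint(0, high, size=batch_size)]

buf = LatentReplayBuffer(capacity_latents, LATENT_DIM)
buf.add(np.ones(LATENT_DIM, dtype=np.float32))
print(capacity_images, capacity_latents)  # → 100 14112
```

The same budget that holds 100 image transitions holds over 14,000 latent transitions here, which is the "adaptive increase in replay buffer capacity" the abstract refers to. Note that storing latents only works once the lower encoder layers are frozen, since the stored vectors would otherwise go stale as the encoder's parameters change.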

Results

Task | Dataset | Metric | Value | Model
---- | ------- | ------ | ----- | -----
Atari Games | Atari 2600 Krull | Score | 3277.5 | Rainbow+SEER
Atari Games | Atari 2600 Amidar | Score | 250.5 | Rainbow+SEER
Atari Games | Atari 2600 Crazy Climber | Score | 28066 | Rainbow+SEER
Atari Games | Atari 2600 Alien | Score | 1172.6 | Rainbow+SEER
Atari Games | Atari 2600 Seaquest | Score | 561.2 | Rainbow+SEER
Atari Games | Atari 2600 Bank Heist | Score | 276.6 | Rainbow+SEER
Atari Games | Atari 2600 Q*Bert | Score | 4123.5 | Rainbow+SEER
Atari Games | Atari 2600 Road Runner | Score | 11794 | Rainbow+SEER
Video Games | Atari 2600 Krull | Score | 3277.5 | Rainbow+SEER
Video Games | Atari 2600 Amidar | Score | 250.5 | Rainbow+SEER
Video Games | Atari 2600 Crazy Climber | Score | 28066 | Rainbow+SEER
Video Games | Atari 2600 Alien | Score | 1172.6 | Rainbow+SEER
Video Games | Atari 2600 Seaquest | Score | 561.2 | Rainbow+SEER
Video Games | Atari 2600 Bank Heist | Score | 276.6 | Rainbow+SEER
Video Games | Atari 2600 Q*Bert | Score | 4123.5 | Rainbow+SEER
Video Games | Atari 2600 Road Runner | Score | 11794 | Rainbow+SEER

Related Papers

- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
- VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
- QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)