Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


First return, then explore

Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune

2020-04-27 · Reinforcement Learning · Atari Games · Montezuma's Revenge · reinforcement-learning

Paper · PDF · Code · Code (official)

Abstract

The promise of reinforcement learning is to solve complex sequential decision problems autonomously by specifying a high-level reward function only. However, reinforcement learning algorithms struggle when, as is often the case, simple and intuitive rewards provide sparse and deceptive feedback. Avoiding these pitfalls requires thoroughly exploring the environment, but creating algorithms that can do so remains one of the central challenges of the field. We hypothesise that the main impediment to effective exploration originates from algorithms forgetting how to reach previously visited states ("detachment") and from failing to first return to a state before exploring from it ("derailment"). We introduce Go-Explore, a family of algorithms that addresses these two challenges directly through the simple principles of explicitly remembering promising states and first returning to such states before intentionally exploring. Go-Explore solves all heretofore unsolved Atari games and surpasses the state of the art on all hard-exploration games, with orders of magnitude improvements on the grand challenges Montezuma's Revenge and Pitfall. We also demonstrate the practical potential of Go-Explore on a sparse-reward pick-and-place robotics task. Additionally, we show that adding a goal-conditioned policy can further improve Go-Explore's exploration efficiency and enable it to handle stochasticity throughout training. The substantial performance gains from Go-Explore suggest that the simple principles of remembering states, returning to them, and exploring from them are a powerful and general approach to exploration, an insight that may prove critical to the creation of truly intelligent learning agents.
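The abstract's core loop can be made concrete with a small sketch: keep an archive of visited cells, pick a promising cell, first *return* to it, then *explore* from it. The toy chain environment, the uniform cell selection, and all names below are illustrative assumptions, not the authors' implementation (the paper uses downscaled-frame cells, weighted selection, and either simulator restores or a goal-conditioned return policy).

```python
import random

class ChainEnv:
    """Toy deterministic chain: states 0..n-1, sparse reward only at the
    final state. Stands in for a hard-exploration environment."""
    def __init__(self, n=20):
        self.n = n
        self.state = 0

    def restore(self, state):
        # "First return": in a deterministic, restorable simulator the
        # agent can jump straight back to a remembered state, avoiding
        # derailment (failing to reach the state before exploring).
        self.state = state

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.state = max(0, min(self.n - 1, self.state + action))
        reward = 1.0 if self.state == self.n - 1 else 0.0
        return self.state, reward

def go_explore(env, iterations=5000, explore_steps=5, seed=0):
    """Sketch of the exploration phase: remember states, return, explore."""
    rng = random.Random(seed)
    # Archive maps a cell to a restorable simulator state. In this toy
    # example the cell representation IS the state; the paper maps many
    # raw states to one coarse cell. Remembering cells prevents
    # detachment (forgetting how to reach previously visited states).
    archive = {0: 0}
    best_reward = 0.0
    for _ in range(iterations):
        cell = rng.choice(list(archive))   # select a cell (paper: weighted)
        env.restore(archive[cell])         # return to it first...
        for _ in range(explore_steps):     # ...then intentionally explore
            state, reward = env.step(rng.choice([-1, 1]))
            best_reward = max(best_reward, reward)
            if state not in archive:       # remember newly reached cells
                archive[state] = state
    return archive, best_reward
```

Under this sketch, random exploration alone rarely crosses a long sparse-reward chain, but restarting each rollout from the frontier of the archive steadily extends it until the rewarding state is found.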

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Atari Games | Atari 2600 Skiing | Score | -3660 | Go-Explore |
| Atari Games | Atari 2600 Centipede | Score | 1422628 | Go-Explore |
| Atari Games | Atari 2600 Freeway | Score | 34 | Go-Explore |
| Atari Games | Atari 2600 Montezuma's Revenge | Score | 43791 | Go-Explore |
| Atari Games | Atari 2600 Gravitar | Score | 7588 | Go-Explore |
| Atari Games | Atari 2600 Bowling | Score | 260 | Go-Explore |
| Atari Games | Atari 2600 Pitfall! | Score | 6954 | Go-Explore |
| Atari Games | Atari 2600 Solaris | Score | 19671 | Go-Explore |
| Atari Games | Atari 2600 Berzerk | Score | 197376 | Go-Explore |
| Atari Games | Atari 2600 Venture | Score | 2281 | Go-Explore |
| Atari Games | Atari 2600 Private Eye | Score | 95756 | Go-Explore |

Related Papers

- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
- VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
- QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
- Autonomous Resource Management in Microservice Systems via Reinforcement Learning (2025-07-17)