Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Large-Scale Study of Curiosity-Driven Learning

Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, Alexei A. Efros

2018-08-13 · ICLR 2019 · Reinforcement Learning · Atari Games · SNES Games · Prediction

Paper · PDF · Code (official)

Abstract

Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/
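The core idea in the abstract — curiosity as an intrinsic reward equal to the prediction error of a forward-dynamics model, optionally computed in a fixed random feature space — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the dimensions, the `tanh` random projection, and the linear dynamics model are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, FEAT_DIM, ACT_DIM = 8, 16, 4  # illustrative sizes, not from the paper

# Fixed random projection: the "random features" variant studied in the paper.
W_feat = rng.normal(size=(OBS_DIM, FEAT_DIM)) / np.sqrt(OBS_DIM)

def features(obs):
    """Embed an observation with a fixed (untrained) random linear map."""
    return np.tanh(obs @ W_feat)

# A tiny forward-dynamics model: predicts phi(s_{t+1}) from (phi(s_t), a_t).
# In the paper this is a trained network; here it is a random linear stand-in.
W_dyn = rng.normal(size=(FEAT_DIM + ACT_DIM, FEAT_DIM)) * 0.1

def predict_next_features(phi_s, action_onehot):
    return np.concatenate([phi_s, action_onehot]) @ W_dyn

def intrinsic_reward(obs, action_onehot, next_obs):
    """Curiosity reward: the forward model's squared prediction error."""
    phi_next = features(next_obs)
    pred = predict_next_features(features(obs), action_onehot)
    return float(np.mean((pred - phi_next) ** 2))
```

In the purely curiosity-driven setting of the paper, this intrinsic reward replaces the environment reward entirely: transitions the forward model predicts poorly (novel states) yield high reward, driving exploration. The learned-features variant would swap the fixed `W_feat` for a trained encoder and minimize the same prediction error to train the dynamics model.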

Results

| Task        | Dataset                        | Metric | Value  | Model                  |
|-------------|--------------------------------|--------|--------|------------------------|
| Atari Games | Atari 2600 Freeway             | Score  | 32.8   | Intrinsic Reward Agent |
| Atari Games | Atari 2600 Montezuma's Revenge | Score  | 2504.6 | Intrinsic Reward Agent |
| Atari Games | Atari 2600 Gravitar            | Score  | 1165.1 | Intrinsic Reward Agent |
| Atari Games | Atari 2600 Venture             | Score  | 416    | Intrinsic Reward Agent |
| Atari Games | Atari 2600 Private Eye         | Score  | 3036.5 | Intrinsic Reward Agent |
| Video Games | Atari 2600 Freeway             | Score  | 32.8   | Intrinsic Reward Agent |
| Video Games | Atari 2600 Montezuma's Revenge | Score  | 2504.6 | Intrinsic Reward Agent |
| Video Games | Atari 2600 Gravitar            | Score  | 1165.1 | Intrinsic Reward Agent |
| Video Games | Atari 2600 Venture             | Score  | 416    | Intrinsic Reward Agent |
| Video Games | Atari 2600 Private Eye         | Score  | 3036.5 | Intrinsic Reward Agent |

Related Papers

- Multi-Strategy Improved Snake Optimizer Accelerated CNN-LSTM-Attention-Adaboost for Trajectory Prediction (2025-07-21)
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
- VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
- QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)