Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Count-Based Exploration in Feature Space for Reinforcement Learning

Jarryd Martin, Suraj Narayanan Sasikumar, Tom Everitt, Marcus Hutter

2017-06-25 · Reinforcement Learning · Atari Games · Efficient Exploration · reinforcement-learning

Paper · PDF · Code (official)

Abstract

We introduce a new count-based optimistic exploration algorithm for Reinforcement Learning (RL) that is feasible in environments with high-dimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The φ-Exploration-Bonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on high-dimensional RL benchmarks.
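To make the idea concrete, here is a minimal sketch of a feature-based count bonus in the spirit described above: per-feature visit frequencies stand in for raw state counts, and an optimistic bonus shrinks as a state's features become familiar. The class name, the β coefficient, and the particular pseudocount formula (a product of empirical per-feature frequencies rescaled by the total visit count) are illustrative assumptions, not the paper's exact estimator.

```python
import math
from collections import defaultdict


class PhiExplorationBonus:
    """Toy feature-based count bonus (illustrative, not the paper's
    exact phi-pseudocount construction)."""

    def __init__(self, beta=0.05):
        self.beta = beta                        # bonus scale (assumed)
        self.feature_counts = defaultdict(int)  # visits per active feature
        self.total = 0                          # total states observed

    def update(self, active_features):
        """Record one observed state, given its set of active features."""
        self.total += 1
        for f in active_features:
            self.feature_counts[f] += 1

    def pseudocount(self, active_features):
        """Crude generalised visit-count: product of per-feature
        empirical frequencies, rescaled by the total visit count.
        States sharing frequently seen features get high counts even
        if this exact state was never visited."""
        if self.total == 0:
            return 0.0
        p = 1.0
        for f in active_features:
            p *= self.feature_counts[f] / self.total
        return p * self.total

    def bonus(self, active_features):
        """Optimistic reward bonus, decaying with the pseudocount.
        The small constant keeps the bonus finite for unseen features."""
        n = self.pseudocount(active_features)
        return self.beta / math.sqrt(n + 0.01)
```

As a usage check: after repeatedly observing a state whose features are {"a", "b"}, its bonus shrinks, while a state containing a never-seen feature such as "c" keeps a large bonus and so remains attractive to explore.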

Results

Task         Dataset                         Metric  Value    Model
Atari Games  Atari 2600 Freeway              Score     29.9   Sarsa-ε
Atari Games  Atari 2600 Frostbite            Score   2770.1   Sarsa-φ-EB
Atari Games  Atari 2600 Frostbite            Score   1394.3   Sarsa-ε
Atari Games  Atari 2600 Montezuma's Revenge  Score   2745.4   Sarsa-φ-EB
Atari Games  Atari 2600 Montezuma's Revenge  Score    399.5   Sarsa-ε
Atari Games  Atari 2600 Venture              Score   1169.2   Sarsa-φ-EB
Atari Games  Atari 2600 Q*Bert               Score   4111.8   Sarsa-φ-EB
Atari Games  Atari 2600 Q*Bert               Score   3895.3   Sarsa-ε

Related Papers

- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
- VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
- QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
- Autonomous Resource Management in Microservice Systems via Reinforcement Learning (2025-07-17)