Atari Games on Atari 2600 Freeway
Metric: Score (higher is better)
Leaderboard
| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | TRPO-hash | 34 | No | #Exploration: A Study of Count-Based Exploration... | 2016-11-15 | Code |
| 2 | IQN | 34 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 3 | NoisyNet-Dueling | 34 | No | Noisy Networks for Exploration | 2017-06-30 | Code |
| 4 | QR-DQN-1 | 34 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 5 | Go-Explore | 34 | No | First return, then explore | 2020-04-27 | Code |
| 6 | GDI-I3 | 34 | No | GDI: Rethinking What Makes Reinforcement Learnin... | 2021-06-11 | - |
| 7 | GDI-I3 | 34 | No | GDI: Rethinking What Makes Reinforcement Learnin... | 2021-06-11 | - |
| 8 | GDI-H3 (200M frames) | 34 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 9 | GDI-H3 | 34 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 10 | C51 noop | 33.9 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 11 | Bootstrapped DQN | 33.9 | No | Deep Exploration via Bootstrapped DQN | 2016-02-15 | Code |
| 12 | ASL DDQN | 33.9 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 13 | MuZero (Res2 Adam) | 33.87 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 14 | Prior noop | 33.7 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 15 | Ape-X | 33.7 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 16 | DDQN+Pop-Art noop | 33.4 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 17 | DDQN (tuned) noop | 33.3 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 18 | MuZero | 33.03 | No | Mastering Atari, Go, Chess and Shogi by Planning... | 2019-11-19 | Code |
| 19 | Prior+Duel noop | 33 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 20 | DQN-CTS | 33 | No | Count-Based Exploration with Neural Density Models | 2017-03-03 | Code |
| 21 | DreamerV2 | 33 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 22 | DNA | 33 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 23 | Intrinsic Reward Agent | 32.8 | No | Large-Scale Study of Curiosity-Driven Learning | 2018-08-13 | Code |
| 24 | Agent57 | 32.59 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 25 | R2D2 | 32.5 | No | - | - | Code |
| 26 | Persistent AL | 32.3 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 27 | A2C + SIL | 32.2 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 28 | Advantage Learning | 31.72 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 29 | DQN-PixelCNN | 31.7 | No | Count-Based Exploration with Neural Density Models | 2017-03-03 | Code |
| 30 | ES FF (1 hour) noop | 31 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 31 | DQN noop | 30.8 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 32 | A3C-CTS | 30.48 | No | Unifying Count-Based Exploration and Intrinsic M... | 2016-06-06 | Code |
| 33 | Nature DQN | 30.3 | No | - | - | Code |
| 34 | Sarsa-ε | 29.9 | No | Count-Based Exploration in Feature Space for Rei... | 2017-06-25 | Code |
| 35 | DQNMMCe | 29.5 | No | Count-Based Exploration with the Successor Repre... | 2018-07-31 | Code |
| 36 | Discrete Latent Space World Model (VQ-VAE) | 29 | No | Smaller World Models for Reinforcement Learning | 2020-10-12 | - |
| 37 | Prior hs | 28.9 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 38 | DDQN (tuned) hs | 28.8 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 39 | Prior+Duel hs | 28.2 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 40 | CGP | 28.2 | No | Evolving simple programs for playing Atari games | 2018-06-14 | Code |
| 41 | CURL | 27.9 | No | CURL: Contrastive Unsupervised Representations f... | 2020-04-08 | Code |
| 42 | MP-EB | 27 | No | Incentivizing Exploration In Reinforcement Learn... | 2015-07-03 | Code |
| 43 | DQN hs | 26.9 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 44 | Best Baseline | 22.5 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 45 | ENAS | 22 | No | Optimizing the Neural Architecture of Reinforcem... | 2020-11-30 | Code |
| 46 | SPOS | 22 | No | Optimizing the Neural Architecture of Reinforcem... | 2020-11-30 | Code |
| 47 | POP3D | 21.21 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 48 | SARSA | 19.7 | No | - | - | - |
| 49 | Best Learner | 19.1 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 50 | Gorila | 10.2 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 51 | SAC | 4.4 | No | Soft Actor-Critic for Discrete Action Settings | 2019-10-16 | Code |
| 52 | UCT | 0.4 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 53 | Duel hs | 0.2 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 54 | A3C FF (1 day) hs | 0.1 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 55 | A3C FF hs | 0.1 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 56 | A3C LSTM hs | 0.1 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 57 | Duel noop | 0 | No | - | - | Code |
| 58 | Sarsa-φ-EB | 0 | No | - | - | Code |
| 59 | IMPALA (deep) | 0 | No | - | - | Code |
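The leaderboard above is plain tabular data (model, score, date), so it is easy to work with programmatically once exported. A minimal Python sketch: the CSV snippet below hard-codes a handful of rows copied from the table, and `top_entries` is an illustrative helper (not part of any export API) that ranks entries by the Score metric, higher being better.

```python
import csv
import io

# A few rows from the Freeway leaderboard above, in CSV form.
LEADERBOARD_CSV = """model,score,date
TRPO-hash,34,2016-11-15
IQN,34,2018-06-14
C51 noop,33.9,2017-07-21
MuZero,33.03,2019-11-19
Agent57,32.59,2020-03-30
Gorila,10.2,2015-07-15
UCT,0.4,2012-07-19
"""

def top_entries(csv_text, n=3):
    """Return the n highest-scoring (model, score) pairs."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # Sort descending by score; Python's sort is stable, so ties
    # keep their original (leaderboard) order.
    rows.sort(key=lambda r: float(r["score"]), reverse=True)
    return [(r["model"], float(r["score"])) for r in rows[:n]]

print(top_entries(LEADERBOARD_CSV))
# → [('TRPO-hash', 34.0), ('IQN', 34.0), ('C51 noop', 33.9)]
```

The same pattern extends to the full 59-row export, e.g. for plotting score against publication date.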