Atari Games on Atari 2600 Tutankham
Metric: Score (higher is better)
Results
| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | Agent57 | 2354.91 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 2 | MuZero | 491.48 | No | Mastering Atari, Go, Chess and Shogi by Planning... | 2019-11-19 | Code |
| 3 | GDI-I3 | 423.9 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 4 | GDI-H3 | 418.2 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 5 | R2D2 | 395.3 | No | - | - | Code |
| 6 | MuZero (Res2 Adam) | 347.99 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 7 | A2C + SIL | 340.5 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 8 | QR-DQN-1 | 297 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 9 | IQN | 293 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 10 | IMPALA (deep) | 292.11 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 11 | C51 noop | 280 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 12 | Ape-X | 272.6 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 13 | NoisyNet-Dueling | 269 | No | Noisy Networks for Exploration | 2017-06-30 | Code |
| 14 | DreamerV2 | 264 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 15 | ASL DDQN | 252.9 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 16 | Prior+Duel noop | 245.9 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 17 | Advantage Learning | 245.22 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 18 | POP3D | 241.21 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 19 | UCT | 225.5 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 20 | DDQN (tuned) noop | 218.4 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 21 | Bootstrapped DQN | 214.8 | No | Deep Exploration via Bootstrapped DQN | 2016-02-15 | Code |
| 22 | Duel noop | 211.4 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 23 | Prior noop | 204.6 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 24 | DARQN soft | 197 | No | Deep Attention Recurrent Q-Network | 2015-12-05 | Code |
| 25 | Nature DQN | 186.7 | No | - | - | Code |
| 26 | Recurrent Rational DQN Average | 184 | No | Adaptive Rational Activations to Boost Deep Rein... | 2021-02-18 | Code |
| 27 | DDQN+Pop-Art noop | 183.9 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 28 | Rational DQN Average | 179 | No | Adaptive Rational Activations to Boost Deep Rein... | 2021-02-18 | Code |
| 29 | A3C FF hs | 156.3 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 30 | A3C LSTM hs | 144.2 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 31 | ES FF (1 hour) noop | 130.3 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 32 | DNA | 127 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 33 | Gorila | 118.5 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 34 | Best Learner | 114.3 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 35 | Prior+Duel hs | 108.6 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 36 | SARSA | 98.2 | No | - | - | - |
| 37 | DDQN (tuned) hs | 92.2 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 38 | DQN noop | 68.1 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 39 | Prior hs | 56.9 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 40 | Duel hs | 48 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 41 | DQN hs | 45.6 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 42 | A3C FF (1 day) hs | 26.1 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 43 | CGP | 0 | No | - | - | Code |
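Since the metric is "higher is better", ranking and CSV export of a leaderboard like this are straightforward to reproduce in code. The sketch below is purely illustrative: it takes a handful of rows transcribed from the table above, sorts them by descending score, and writes a small CSV similar to what an "Export CSV" button might produce. The tuple layout and column names are assumptions, not part of any site API.

```python
import csv
from io import StringIO

# A few (model, score, date) rows transcribed from the table above;
# R2D2 has no listed paper date, so we use None there.
rows = [
    ("Agent57", 2354.91, "2020-03-30"),
    ("MuZero", 491.48, "2019-11-19"),
    ("GDI-I3", 423.9, "2022-06-07"),
    ("R2D2", 395.3, None),
]

# The metric is "higher is better", so rank by descending score.
ranked = sorted(rows, key=lambda r: r[1], reverse=True)

# Write a minimal CSV export of the ranked rows.
buf = StringIO()
writer = csv.writer(buf)
writer.writerow(["rank", "model", "score", "date"])
for rank, (model, score, date) in enumerate(ranked, start=1):
    writer.writerow([rank, model, score, date or "-"])

print(buf.getvalue())
```

Sorting before assigning ranks keeps the output correct even if the transcribed rows are listed out of order.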