# Atari Games on Atari 2600 Name This Game

Metric: Score (higher is better)
## Leaderboard
Results, sorted by Score (descending):

| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | MuZero | 157177.85 | No | Mastering Atari, Go, Chess and Shogi by Planning... | 2019-11-19 | Code |
| 2 | MuZero (Res2 Adam) | 101197.71 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 3 | R2D2 | 58182.7 | No | - | - | Code |
| 4 | Agent57 | 54386.77 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 5 | GDI-H3 | 36296 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 6 | GDI-I3 | 34434 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 7 | Ape-X | 25783.3 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 8 | IQN | 22682 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 9 | QR-DQN-1 | 21890 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 10 | IMPALA (deep) | 21537.2 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 11 | DNA | 20226 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 12 | ASL DDQN | 16535.4 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 13 | DDQN+Pop-Art noop | 15851.2 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 14 | Prior+Duel noop | 15572.5 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 15 | UCT | 15410 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 16 | A2C + SIL | 14958.2 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 17 | DreamerV2 | 14649 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 18 | Prior+Duel hs | 13637.9 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 19 | C51 noop | 12542 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 20 | Prior noop | 12270.5 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 21 | NoisyNet-Dueling | 12211 | No | Noisy Networks for Exploration | 2017-06-30 | Code |
| 22 | A3C LSTM hs | 12093.7 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 23 | Duel noop | 11971.1 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 24 | Bootstrapped DQN | 11501.1 | No | Deep Exploration via Bootstrapped DQN | 2016-02-15 | Code |
| 25 | Duel hs | 11185.1 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 26 | Advantage Learning | 11025.26 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 27 | DDQN (tuned) noop | 10616 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 28 | Prior hs | 10497.6 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 29 | A3C FF hs | 10476.1 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 30 | Persistent AL | 10431.33 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 31 | Gorila | 9238.5 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 32 | DDQN (tuned) hs | 8960.3 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 33 | DQN noop | 8207.8 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 34 | Nature DQN | 7257 | No | - | - | Code |
| 35 | DQN hs | 6738.8 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 36 | POP3D | 6065.63 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 37 | A3C FF (1 day) hs | 5614 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 38 | ES FF (1 hour) noop | 4503 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 39 | CGP | 3696 | No | Evolving simple programs for playing Atari games | 2018-06-14 | Code |
| 40 | Best Learner | 2500.1 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 41 | SARSA | 2247 | No | - | - | - |
| 42 | IDVQ + DRSC + XNES | 920 | No | Playing Atari with Six Neurons | 2018-06-04 | Code |
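The ranking above follows a single rule: entries are ordered by the page's metric, Score, with higher values ranked first. A minimal sketch of that ordering, using a few rows copied from the leaderboard (the `rank` helper is illustrative, not part of the site):

```python
# A few (model, score) rows from the leaderboard above.
rows = [
    ("Agent57", 54386.77),
    ("MuZero", 157177.85),
    ("R2D2", 58182.7),
    ("MuZero (Res2 Adam)", 101197.71),
]

def rank(entries):
    """Order entries by descending score, as the leaderboard does
    (metric: Score, higher is better)."""
    return sorted(entries, key=lambda r: r[1], reverse=True)

for position, (model, score) in enumerate(rank(rows), start=1):
    print(position, model, score)
# The top entry is MuZero at 157177.85.
```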