Atari Games on Atari 2600 Centipede
Metric: Score (higher is better)
Leaderboard
| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | Go-Explore | 1422628 | No | First return, then explore | 2020-04-27 | Code |
| 2 | GDI-H3 (1B frames) | 1359533 | No | - | - | - |
| 3 | MuZero | 1159049.27 | No | Mastering Atari, Go, Chess and Shogi by Planning... | 2019-11-19 | Code |
| 4 | MuZero (Res2 Adam) | 874301.64 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 5 | R2D2 | 599140.3 | No | - | - | Code |
| 6 | Agent57 | 412847.86 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 7 | GDI-H3 | 195630 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 8 | GDI-I3 | 155830 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 9 | Full Tree | 125123 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 10 | DNA | 100194 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 11 | DDQN+Pop-Art noop | 49065.8 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 12 | CGP | 24708 | No | Evolving simple programs for playing Atari games | 2018-06-14 | Code |
| 13 | Ape-X | 12974 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 14 | QR-DQN-1 | 12447 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 15 | DreamerV2 | 11883 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 16 | IQN | 11561 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 17 | IMPALA (deep) | 11049.75 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 18 | C51 noop | 9646 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 19 | Best Learner | 8803.8 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 20 | Nature DQN | 8309 | No | - | - | Code |
| 21 | ES FF (1 hour) noop | 7783.9 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 22 | Prior+Duel noop | 7687.5 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 23 | NoisyNet-Dueling | 7596 | No | Noisy Networks for Exploration | 2017-06-30 | Code |
| 24 | Duel noop | 7561.4 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 25 | A2C + SIL | 7559.5 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 26 | Gorila | 6296.9 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 27 | Prior+Duel hs | 5570.2 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 28 | DDQN (tuned) noop | 5409.4 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 29 | Duel hs | 4881 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 30 | DQN noop | 4657.7 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 31 | SARSA | 4647 | No | - | - | - |
| 32 | Bootstrapped DQN | 4553.5 | No | Deep Exploration via Bootstrapped DQN | 2016-02-15 | Code |
| 33 | Persistent AL | 4539.55 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 34 | Prior noop | 4463.2 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 35 | Advantage Learning | 4225.18 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 36 | DQN hs | 3973.9 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 37 | ASL DDQN | 3899.8 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 38 | DDQN (tuned) hs | 3853.5 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 39 | A3C FF hs | 3755.8 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 40 | Prior hs | 3489.1 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 41 | Reactor 500M | 3422 | No | The Reactor: A fast and sample-efficient Actor-C... | 2017-04-15 | - |
| 42 | POP3D | 3315.44 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 43 | A3C FF (1 day) hs | 3306.5 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 44 | A3C LSTM hs | 1997 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |