Atari Games on Atari 2600 Berzerk
Metric: Score (higher is better)
Leaderboard
| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | Go-Explore | 197376 | No | First return, then explore | 2020-04-27 | Code |
| 2 | MuZero | 85932.6 | No | Mastering Atari, Go, Chess and Shogi by Planning... | 2019-11-19 | Code |
| 3 | Agent57 | 61507.83 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 4 | Ape-X | 57196.7 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 5 | R2D2 | 53318.7 | No | - | - | Code |
| 6 | DNA | 19789 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 7 | GDI-H3 | 14649 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 8 | FQF | 12422.2 | No | Fully Parameterized Quantile Function for Distri... | 2019-11-05 | Code |
| 9 | GDI-I3 | 7607 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 10 | Prior+Duel noop | 3409 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 11 | QR-DQN-1 | 3117 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 12 | MuZero (Res2 Adam) | 2705.82 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 13 | ASL DDQN | 2597.2 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 14 | Reactor 500M | 2303.1 | No | The Reactor: A fast and sample-efficient Actor-C... | 2017-04-15 | - |
| 15 | Prior+Duel hs | 2178.6 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 16 | NoisyNet-Dueling | 1896 | No | Noisy Networks for Exploration | 2017-06-30 | Code |
| 17 | IMPALA (deep) | 1852.7 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 18 | C51 noop | 1645 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 19 | Duel noop | 1472.6 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 20 | A3C FF (1 day) hs | 1433.4 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 21 | Persistent AL | 1328.25 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 22 | Prior noop | 1305.6 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 23 | DDQN (tuned) noop | 1225.4 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 24 | DDQN+Pop-Art noop | 1199.6 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 25 | CGP | 1138 | No | Evolving simple programs for playing Atari games | 2018-06-14 | Code |
| 26 | IQN | 1053 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 27 | DDQN (tuned) hs | 1011.1 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 28 | Duel hs | 910.6 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 29 | Prior hs | 865.9 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 30 | A3C LSTM hs | 862.2 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 31 | A3C FF hs | 817.9 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 32 | DreamerV2 | 810 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 33 | Advantage Learning | 747.26 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 34 | ES FF (1 hour) noop | 686 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 35 | Best Baseline | 670 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 36 | DQN noop | 585.6 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 37 | Best Learner | 501.3 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 38 | DQN hs | 493.4 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
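Since the metric is "Score (higher is better)", ranking is simply a descending sort on the score column. A minimal sketch of consuming these results programmatically (the `Result` dataclass is illustrative, and only the top four rows from the table are included):

```python
from dataclasses import dataclass

@dataclass
class Result:
    model: str
    score: float

# A few entries copied from the table above, deliberately out of order.
results = [
    Result("MuZero", 85932.6),
    Result("Go-Explore", 197376.0),
    Result("Agent57", 61507.83),
    Result("Ape-X", 57196.7),
]

# Higher score is better, so rank by descending score.
leaderboard = sorted(results, key=lambda r: r.score, reverse=True)
for rank, r in enumerate(leaderboard, start=1):
    print(f"{rank}. {r.model}: {r.score}")
```

The same descending sort reproduces the ordering shown in the leaderboard, with Go-Explore first.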