Video Games on Atari 2600 Bowling
Metric: Score (higher is better)
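Scores are the raw per-episode game score; "hs" and "noop" in model names denote the human-starts and no-op-starts evaluation protocols, respectively. As a point of reference, the sketch below shows one way an episode score for Bowling can be measured in the Arcade Learning Environment through Gymnasium; the `ALE/Bowling-v5` environment id, the random placeholder policy, and all settings here are illustrative assumptions, not the evaluation protocol used by any entry in the table.

```python
import gymnasium as gym
import ale_py

# Recent ale-py releases register the ALE environments with Gymnasium via this call.
gym.register_envs(ale_py)

# The leaderboard metric is the undiscounted sum of game rewards over one episode.
env = gym.make("ALE/Bowling-v5")
obs, info = env.reset(seed=0)

episode_score = 0.0
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # placeholder: a leaderboard entry would query its trained policy
    obs, reward, terminated, truncated, info = env.step(action)
    episode_score += reward

print(f"Bowling episode score: {episode_score}")
env.close()
```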
Leaderboard
| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | MuZero | 260.13 | No | Mastering Atari, Go, Chess and Shogi by Planning... | 2019-11-19 | Code |
| 2 | Go-Explore | 260 | No | First return, then explore | 2020-04-27 | Code |
| 3 | Agent57 | 251.18 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 4 | R2D2 | 219.5 | No | - | - | Code |
| 5 | GDI-H3 | 205.2 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 6 | GDI-I3 | 201.9 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 7 | DNA | 181 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 8 | RUDDER | 179 | No | RUDDER: Return Decomposition for Delayed Rewards | 2018-06-20 | Code |
| 9 | MuZero (Res2 Adam) | 131.65 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 10 | FQF | 102.3 | No | Fully Parameterized Quantile Function for Distri... | 2019-11-05 | Code |
| 11 | DDQN+Pop-Art noop | 102.1 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 12 | IQN | 86.5 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 13 | CGP | 85.8 | No | Evolving simple programs for playing Atari games | 2018-06-14 | Code |
| 14 | C51 noop | 81.8 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 15 | Reactor 500M | 81 | No | The Reactor: A fast and sample-efficient Actor-C... | 2017-04-15 | - |
| 16 | QR-DQN-1 | 77.2 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 17 | Persistent AL | 71.59 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 18 | DDQN (tuned) hs | 69.6 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 19 | DDQN (tuned) noop | 68.1 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 20 | Duel hs | 65.7 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 21 | Duel noop | 65.5 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 22 | ASL DDQN | 62.4 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 23 | Bootstrapped DQN | 60.2 | No | Deep Exploration via Bootstrapped DQN | 2016-02-15 | Code |
| 24 | IMPALA (deep) | 59.92 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 25 | Advantage Learning | 57.41 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 26 | DQN hs | 56.5 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 27 | Gorila | 54 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 28 | Prior hs | 52 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 29 | DQN noop | 50.4 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 30 | Prior+Duel hs | 50.4 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 31 | DreamerV2 | 49 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 32 | Prior noop | 47.9 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 33 | Prior+Duel noop | 46.7 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 34 | Best Learner | 43.9 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 35 | Nature DQN | 42.4 | No | - | - | Code |
| 36 | A3C LSTM hs | 41.8 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 37 | POP3D | 38.99 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 38 | SARSA | 36.4 | No | - | - | - |
| 39 | A3C FF (1 day) hs | 36.2 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 40 | A3C FF hs | 35.1 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 41 | A2C + SIL | 31.1 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 42 | ES FF (1 hour) noop | 30 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 43 | Ape-X | 17.6 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |