Video Games on Atari 2600 Robotank
Metric: Score (higher is better)
Results
| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | MuZero | 131.13 | No | Mastering Atari, Go, Chess and Shogi by Planning... | 2019-11-19 | Code |
| 2 | Agent57 | 127.32 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 3 | GDI-H3 | 113.4 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 4 | GDI-I3 | 108.2 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 5 | MuZero (Res2 Adam) | 100.59 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 6 | R2D2 | 100.4 | No | - | - | Code |
| 7 | DreamerV2 | 78 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 8 | FQF | 75.7 | No | Fully Parameterized Quantile Function for Distri... | 2019-11-05 | Code |
| 9 | Ape-X | 73.8 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 10 | Advantage Learning | 69.31 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 11 | Bootstrapped DQN | 66.6 | No | Deep Exploration via Bootstrapped DQN | 2016-02-15 | Code |
| 12 | ASL DDQN | 65.8 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 13 | Duel noop | 65.3 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 14 | DDQN (tuned) noop | 65.1 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 15 | DNA | 64.8 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 16 | DDQN+Pop-Art noop | 64.3 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 17 | NoisyNet-Dueling | 64 | No | Noisy Networks for Exploration | 2017-06-30 | Code |
| 18 | DQN noop | 63.9 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 19 | Prior noop | 62.6 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 20 | IQN | 62.5 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 21 | Duel hs | 62 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 22 | Gorila | 61.8 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 23 | QR-DQN-1 | 59.4 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 24 | DDQN (tuned) hs | 59.1 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 25 | DQN hs | 58.7 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 26 | Prior hs | 56.2 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 27 | C51 noop | 52.3 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 28 | Nature DQN | 51.6 | No | - | - | Code |
| 29 | UCT | 50.4 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 30 | A3C FF hs | 32.8 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 31 | Best Learner | 28.7 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 32 | Prior+Duel noop | 27.5 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 33 | Prior+Duel hs | 24.7 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 34 | CGP | 24.2 | No | Evolving simple programs for playing Atari games | 2018-06-14 | Code |
| 35 | IMPALA (deep) | 12.96 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 36 | SARSA | 12.4 | No | - | - | - |
| 37 | ES FF (1 hour) noop | 11.9 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 38 | A2C + SIL | 10.5 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 39 | POP3D | 4.6 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 40 | A3C LSTM hs | 2.6 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 41 | A3C FF (1 day) hs | 2.3 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |