Atari Games on Atari 2600 Kung-Fu Master
Metric: Score (higher is better)
Results
Rows are sorted by Score, descending.

| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | GDI-H3 | 1666665 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 2 | GDI-H3 (200M) | 1666000 | No | - | - | - |
| 3 | R2D2 | 233413.3 | No | - | - | Code |
| 4 | Agent57 | 206845.82 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 5 | MuZero | 204824 | No | Mastering Atari, Go, Chess and Shogi by Planning... | 2019-11-19 | Code |
| 6 | GDI-I3 | 140440 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 7 | MuZero (Res2 Adam) | 116726.96 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 8 | FQF | 111138.5 | No | Fully Parameterized Quantile Function for Distri... | 2019-11-05 | Code |
| 9 | DNA | 110962 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 10 | Ape-X | 97829.5 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 11 | ASL DDQN | 85182 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 12 | QR-DQN-1 | 76642 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 13 | IQN | 73512 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 14 | DreamerV2 | 62741 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 15 | CGP | 57400 | No | Evolving simple programs for playing Atari games | 2018-06-14 | Code |
| 16 | UCT | 48854.5 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 17 | Prior+Duel noop | 48375 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 18 | C51 noop | 48192 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 19 | IMPALA (deep) | 43375.5 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 20 | NoisyNet-Dueling | 41672 | No | Noisy Networks for Exploration | 2017-06-30 | Code |
| 21 | A3C LSTM hs | 40835 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 22 | Prior noop | 39581 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 23 | Prior+Duel hs | 37484 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 24 | Bootstrapped DQN | 36733.3 | No | Deep Exploration via Bootstrapped DQN | 2016-02-15 | Code |
| 25 | Persistent AL | 34650.91 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 26 | A2C + SIL | 34449.2 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 27 | DDQN+Pop-Art noop | 34393 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 28 | Duel noop | 34294 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 29 | POP3D | 33728 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 30 | Advantage Learning | 32182.99 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 31 | Prior hs | 31676 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 32 | DDQN (tuned) hs | 30207 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 33 | DDQN (tuned) noop | 29710 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 34 | SARSA | 29151 | No | - | - | - |
| 35 | A3C FF hs | 28819 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 36 | DQN noop | 26059 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 37 | Duel hs | 24288 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 38 | Nature DQN | 23270 | No | - | - | Code |
| 39 | DQN hs | 20882 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 40 | Gorila | 20620 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 41 | Best Learner | 19544 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 42 | CURL | 14280 | No | CURL: Contrastive Unsupervised Representations f... | 2020-04-08 | Code |
| 43 | A3C FF (1 day) hs | 3046 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |