Atari Games on Atari 2600 Montezuma's Revenge
Metric: Score (higher is better)
Results
| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | Go-Explore | 43791 | No | First return, then explore | 2020-04-27 | Code |
| 2 | Go-Explore | 43763 | No | Go-Explore: a New Approach for Hard-Exploration ... | 2019-01-30 | Code |
| 3 | SND-V | 21565 | No | Self-supervised network distillation: an effecti... | 2023-02-22 | Code |
| 4 | Agent57 | 9352.01 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 5 | RND | 8152 | No | Exploration by Random Network Distillation | 2018-10-30 | Code |
| 6 | SND-VIC | 7838 | No | Self-supervised network distillation: an effecti... | 2023-02-22 | Code |
| 7 | SND-STD | 7212 | No | Self-supervised network distillation: an effecti... | 2023-02-22 | Code |
| 8 | A2C+CoEX | 6635 | No | Contingency-Aware Exploration in Reinforcement L... | 2018-11-05 | - |
| 9 | DQN-PixelCNN | 3705.5 | No | Count-Based Exploration with Neural Density Models | 2017-03-03 | Code |
| 10 | DDQN-PC | 3459 | No | Unifying Count-Based Exploration and Intrinsic M... | 2016-06-06 | Code |
| 11 | GDI-I3 | 3000 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 12 | GDI-I3 | 3000 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 13 | Sarsa-φ-EB | 2745.4 | No | Count-Based Exploration in Feature Space for Rei... | 2017-06-25 | Code |
| 14 | Intrinsic Reward Agent | 2504.6 | No | Large-Scale Study of Curiosity-Driven Learning | 2018-08-13 | Code |
| 15 | Ape-X | 2500 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 16 | MuZero (Res2 Adam) | 2500 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 17 | GDI-H3 | 2500 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 18 | R2D2 | 2061.3 | No | - | - | Code |
| 19 | DQN+SR | 1778.8 | No | Count-Based Exploration with the Successor Repre... | 2018-07-31 | Code |
| 20 | DQNMMCe+SR | 1778.6 | No | Count-Based Exploration with the Successor Repre... | 2018-07-31 | Code |
| 21 | A2C + SIL | 1100 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 22 | Sarsa-ε | 399.5 | No | Count-Based Exploration in Feature Space for Rei... | 2017-06-25 | Code |
| 23 | A3C-CTS | 273.7 | No | Unifying Count-Based Exploration and Intrinsic M... | 2016-06-06 | Code |
| 24 | SARSA | 259 | No | - | - | - |
| 25 | MP-EB | 142 | No | Incentivizing Exploration In Reinforcement Learn... | 2015-07-03 | Code |
| 26 | Bootstrapped DQN | 100 | No | Deep Exploration via Bootstrapped DQN | 2016-02-15 | Code |
| 27 | Gorila | 84 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 28 | DreamerV2 | 81 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 29 | TRPO-hash | 75 | No | #Exploration: A Study of Count-Based Exploration... | 2016-11-15 | Code |
| 30 | A3C FF hs | 67 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 31 | NoisyNet-Dueling | 57 | No | Noisy Networks for Exploration | 2017-06-30 | Code |
| 32 | A3C FF (1 day) hs | 53 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 33 | Prior hs | 51 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 34 | DQN hs | 47 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 35 | DDQN (tuned) hs | 42 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 36 | A3C LSTM hs | 41 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 37 | Prior+Duel hs | 24 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 38 | Duel hs | 22 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 39 | Best Learner | 10.7 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 40 | Persistent AL | 1.72 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 41 | Advantage Learning | 0.42 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 42 | IQN | 0 | No | - | - | Code |
| 43 | MuZero | 0 | No | - | - | Code |
| 44 | IMPALA (deep) | 0 | No | - | - | Code |
| 45 | CGP | 0 | No | - | - | Code |
| 46 | POP3D | 0 | No | - | - | Code |
| 47 | QR-DQN-1 | 0 | No | - | - | Code |
| 48 | DNA | 0 | No | - | - | Code |
| 49 | ASL DDQN | 0 | No | - | - | Code |
| 50 | Nature DQN | 0 | No | - | - | Code |