Video Games on Atari 2600 Gravitar
Metric: Score (higher is better)
Leaderboard
Results
(sorted by Score, descending)

| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|------:|------------|-------|------|------|
| 1 | Agent57 | 19213.96 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 2 | R2D2 | 15680.7 | No | - | - | Code |
| 3 | MuZero (Res2 Adam) | 8006.93 | No | Online and Offline Reinforcement Learning by Pla... | 2021-04-13 | Code |
| 4 | Go-Explore | 7588 | No | First return, then explore | 2020-04-27 | Code |
| 5 | SND-VIC | 6712 | No | Self-supervised network distillation: an effecti... | 2023-02-22 | Code |
| 6 | MuZero | 6682.7 | No | Mastering Atari, Go, Chess and Shogi by Planning... | 2019-11-19 | Code |
| 7 | GDI-H3 | 5915 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 8 | GDI-I3 | 5905 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 9 | SND-STD | 4643 | No | Self-supervised network distillation: an effecti... | 2023-02-22 | Code |
| 10 | RND | 3906 | No | Exploration by Random Network Distillation | 2018-10-30 | Code |
| 11 | DreamerV2 | 3789 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 12 | UCT | 2850 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 13 | SND-V | 2741 | No | Self-supervised network distillation: an effecti... | 2023-02-22 | Code |
| 14 | CGP | 2350 | No | Evolving simple programs for playing Atari games | 2018-06-14 | Code |
| 15 | NoisyNet-Dueling | 2209 | No | Noisy Networks for Exploration | 2017-06-30 | Code |
| 16 | DNA | 2190 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 17 | A2C + SIL | 1874.2 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 18 | Ape-X | 1598.5 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 19 | FQF | 1406 | No | Fully Parameterized Quantile Function for Distri... | 2019-11-05 | Code |
| 20 | Intrinsic Reward Agent | 1165.1 | No | Large-Scale Study of Curiosity-Driven Learning | 2018-08-13 | Code |
| 21 | DQNMMCe | 1078.3 | No | Count-Based Exploration with the Successor Repre... | 2018-07-31 | Code |
| 22 | QR-DQN-1 | 995 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 23 | IQN | 911 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 24 | ES FF (1 hour) noop | 805 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 25 | ASL DDQN | 760 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 26 | Duel noop | 588 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 27 | POP3D | 557.17 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 28 | Prior noop | 548.5 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 29 | Gorila | 538.4 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 30 | DQN-PixelCNN | 498.3 | No | Count-Based Exploration with Neural Density Models | 2017-03-03 | Code |
| 31 | DDQN+Pop-Art noop | 483.5 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 32 | DQN noop | 473 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 33 | Persistent AL | 446.92 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 34 | C51 noop | 440 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 35 | SARSA | 429 | No | - | - | - |
| 36 | Advantage Learning | 417.65 | No | Increasing the Action Gap: New Operators for Rei... | 2015-12-15 | Code |
| 37 | DDQN (tuned) noop | 412 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 38 | Best Learner | 387.7 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 39 | IMPALA (deep) | 359.5 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 40 | A3C LSTM hs | 320 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 41 | Nature DQN | 306.7 | No | - | - | Code |
| 42 | A3C FF hs | 303.5 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 43 | DQN hs | 298 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 44 | Duel hs | 297 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 45 | Bootstrapped DQN | 286.1 | No | Deep Exploration via Bootstrapped DQN | 2016-02-15 | Code |
| 46 | Prior hs | 269.5 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 47 | A3C FF (1 day) hs | 269.5 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 48 | A3C-CTS | 238.68 | No | Unifying Count-Based Exploration and Intrinsic M... | 2016-06-06 | Code |
| 49 | Prior+Duel noop | 238 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 50 | DQN-CTS | 238 | No | Count-Based Exploration with Neural Density Models | 2017-03-03 | Code |
| 51 | DDQN (tuned) hs | 200.5 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 52 | Prior+Duel hs | 167 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
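Because the metric is a raw game score where higher is better, the leaderboard ordering is simply a descending sort on the Score column. A minimal sketch of reproducing that ordering from a CSV export of a few rows of this table; the column names and export format here are illustrative assumptions, not the site's documented schema:

```python
import csv
import io

# A handful of rows in the shape a CSV export of this leaderboard
# might take (assumed columns: model, score, date).
csv_text = """model,score,date
R2D2,15680.7,
MuZero (Res2 Adam),8006.93,2021-04-13
Agent57,19213.96,2020-03-30
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# Higher score is better, so rank by a descending sort on the
# numeric score column.
ranked = sorted(rows, key=lambda r: float(r["score"]), reverse=True)

print(ranked[0]["model"])  # prints: Agent57
```

Parsing the score as `float` matters: sorting the column as text would rank "8006.93" above "19213.96".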