Video Games on Atari 2600 River Raid
Metric: Score (higher is better)
Leaderboard
| #  | Model               | Score     | Extra Data | Paper                                                | Date       | Code |
|----|---------------------|-----------|------------|------------------------------------------------------|------------|------|
| 1  | MuZero              | 323417.18 | No         | Mastering Atari, Go, Chess and Shogi by Planning...  | 2019-11-19 | Code |
| 2  | MuZero (Res2 Adam)  | 171673.78 | No         | Online and Offline Reinforcement Learning by Pla...  | 2021-04-13 | Code |
| 3  | Ape-X               | 63864.4   | No         | Distributed Prioritized Experience Replay            | 2018-03-02 | Code |
| 4  | Agent57             | 63318.67  | No         | Agent57: Outperforming the Atari Human Benchmark     | 2020-03-30 | Code |
| 5  | R2D2                | 45632.1   | No         | -                                                    | -          | Code |
| 6  | IMPALA (deep)       | 29608.05  | No         | IMPALA: Scalable Distributed Deep-RL with Import...  | 2018-02-05 | Code |
| 7  | GDI-H3              | 28349     | No         | Generalized Data Distribution Iteration              | 2022-06-07 | -    |
| 8  | GDI-I3              | 28075     | No         | Generalized Data Distribution Iteration              | 2022-06-07 | -    |
| 9  | ASL DDQN            | 24445     | No         | Train a Real-world Local Path Planner in One Hou...  | 2023-05-07 | Code |
| 10 | FQF                 | 23560.7   | No         | Fully Parameterized Quantile Function for Distri...  | 2019-11-05 | Code |
| 11 | Duel noop           | 21162.6   | No         | Dueling Network Architectures for Deep Reinforce...  | 2015-11-20 | Code |
| 12 | Prior+Duel noop     | 20607.6   | No         | Dueling Network Architectures for Deep Reinforce...  | 2015-11-20 | Code |
| 13 | IQN                 | 17765     | No         | Implicit Quantile Networks for Distributional Re...  | 2018-06-14 | Code |
| 14 | QR-DQN-1            | 17571     | No         | Distributional Reinforcement Learning with Quant...  | 2017-10-27 | Code |
| 15 | C51 noop            | 17322     | No         | A Distributional Perspective on Reinforcement Le...  | 2017-07-21 | Code |
| 16 | DNA                 | 16789     | No         | DNA: Proximal Policy Optimization with a Dual Ne...  | 2022-06-20 | Code |
| 17 | Duel hs             | 16569.4   | No         | Dueling Network Architectures for Deep Reinforce...  | 2015-11-20 | Code |
| 18 | Prior+Duel hs       | 16496.8   | No         | Deep Reinforcement Learning with Double Q-learning   | 2015-09-22 | Code |
| 19 | DreamerV2           | 16351     | No         | Mastering Atari with Discrete World Models           | 2020-10-05 | Code |
| 20 | DDQN (tuned) noop   | 14884.5   | No         | Dueling Network Architectures for Deep Reinforce...  | 2015-11-20 | Code |
| 21 | Prior noop          | 14522.3   | No         | Prioritized Experience Replay                        | 2015-11-18 | Code |
| 22 | A2C + SIL           | 14306.1   | No         | Self-Imitation Learning                              | 2018-06-14 | Code |
| 23 | Bootstrapped DQN    | 12845     | No         | Deep Exploration via Bootstrapped DQN                | 2016-02-15 | Code |
| 24 | DDQN+Pop-Art noop   | 12530.8   | No         | Learning values across many orders of magnitude      | 2016-02-24 | -    |
| 25 | A3C FF hs           | 12201.8   | No         | Asynchronous Methods for Deep Reinforcement Lear...  | 2016-02-04 | Code |
| 26 | Prior hs            | 11807.2   | No         | Prioritized Experience Replay                        | 2015-11-18 | Code |
| 27 | DDQN (tuned) hs     | 10838.4   | No         | Deep Reinforcement Learning with Double Q-learning   | 2015-09-22 | Code |
| 28 | Advantage Learning  | 10585.12  | No         | Increasing the Action Gap: New Operators for Rei...  | 2015-12-15 | Code |
| 29 | A3C FF (1 day) hs   | 10001.2   | No         | Asynchronous Methods for Deep Reinforcement Lear...  | 2016-02-04 | Code |
| 30 | Nature DQN          | 8316      | No         | -                                                    | -          | Code |
| 31 | POP3D               | 8052.23   | No         | Policy Optimization With Penalized Point Probabi...  | 2018-07-02 | Code |
| 32 | DQN noop            | 7377.6    | No         | Deep Reinforcement Learning with Double Q-learning   | 2015-09-22 | Code |
| 33 | A3C LSTM hs         | 6591.9    | No         | Asynchronous Methods for Deep Reinforcement Lear...  | 2016-02-04 | Code |
| 34 | Gorila              | 5310.3    | No         | Massively Parallel Methods for Deep Reinforcemen...  | 2015-07-15 | Code |
| 35 | ES FF (1 hour) noop | 5009      | No         | Evolution Strategies as a Scalable Alternative t...  | 2017-03-10 | Code |
| 36 | DQN hs              | 4748.5    | No         | Deep Reinforcement Learning with Double Q-learning   | 2015-09-22 | Code |
| 37 | UCT                 | 4449      | No         | The Arcade Learning Environment: An Evaluation P...  | 2012-07-19 | Code |
| 38 | MFEC                | 3868      | No         | Model-Free Episodic Control with State Aggregation   | 2020-08-21 | -    |
| 39 | CGP                 | 2914      | No         | Evolving simple programs for playing Atari games     | 2018-06-14 | Code |
| 40 | SARSA               | 2650      | No         | -                                                    | -          | -    |
| 41 | Best Learner        | 1904.3    | No         | The Arcade Learning Environment: An Evaluation P...  | 2012-07-19 | Code |
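Since the metric is a plain score where higher is better, a CSV export of these results can be re-ranked with a few lines of Python. This is a minimal sketch: the column names (`Rank`, `Model`, `Score`, `Date`) are assumptions for illustration, not the actual export schema, and the inline data is a three-row excerpt of the table above.

```python
import csv
import io

# Hypothetical excerpt of an exported leaderboard CSV.
# Column names are assumed; a real export may use a different schema.
raw = """Rank,Model,Score,Date
1,MuZero,323417.18,2019-11-19
3,Ape-X,63864.4,2018-03-02
4,Agent57,63318.67,2020-03-30
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Re-rank by Score, descending, since higher is better on this benchmark.
rows.sort(key=lambda r: float(r["Score"]), reverse=True)

for i, r in enumerate(rows, start=1):
    print(i, r["Model"], r["Score"])
```

Parsing `Score` as `float` before sorting matters: sorting the raw strings lexicographically would rank "63864.4" above "323417.18".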