# Atari Games on Atari 2600 Tennis

Metric: Score (higher is better)
## Results
| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | GDI-I3 | 24 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 2 | GDI-I3 | 24 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 3 | GDI-H3 | 24 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 4 | Ape-X | 23.9 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 5 | Agent57 | 23.84 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 6 | IQN | 23.6 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 7 | QR-DQN-1 | 23.6 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 8 | C51 noop | 23.1 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 9 | ASL DDQN | 22.3 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 10 | Recurrent Rational DQN Average | 20.6 | No | Adaptive Rational Activations to Boost Deep Rein... | 2021-02-18 | Code |
| 11 | Rational DQN Average | 20.5 | No | Adaptive Rational Activations to Boost Deep Rein... | 2021-02-18 | Code |
| 12 | DreamerV2 | 14 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 13 | DQN noop | 12.2 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 14 | DDQN+Pop-Art noop | 12.1 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 15 | DQN hs | 11.1 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 16 | Duel noop | 5.1 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 17 | Duel hs | 4.4 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 18 | UCT | 2.8 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 19 | IMPALA (deep) | 0.55 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 20 | SARSA | 0 | No | - | - | - |
| 21 | Prior noop | 0 | No | - | - | Code |
| 22 | Prior+Duel noop | 0 | No | - | - | Code |
| 23 | Bootstrapped DQN | 0 | No | - | - | Code |
| 24 | MuZero | 0 | No | - | - | Code |
| 25 | CGP | 0 | No | - | - | Code |
| 26 | NoisyNet-Dueling | 0 | No | - | - | Code |
| 27 | Advantage Learning | 0 | No | - | - | Code |
| 28 | MuZero (Res2 Adam) | 0 | No | - | - | Code |
| 29 | R2D2 | -0.1 | No | - | - | Code |
| 30 | Best Learner | -0.1 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 31 | Gorila | -0.7 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 32 | Nature DQN | -2.5 | No | - | - | Code |
| 33 | ES FF (1 hour) noop | -4.5 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 34 | Prior hs | -5.3 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 35 | A3C FF hs | -6.3 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 36 | A3C LSTM hs | -6.4 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 37 | DDQN (tuned) hs | -7.8 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 38 | POP3D | -8.32 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 39 | A3C FF (1 day) hs | -10.2 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 40 | DNA | -10.9 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 41 | Prior+Duel hs | -13.2 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 42 | A2C + SIL | -17.3 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 43 | DDQN (tuned) noop | -22.8 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
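Since the metric is a raw episode score where higher is better, reproducing the ranking from the results is a single descending sort on the score column. A minimal sketch in Python (the rows are a hand-copied subset of the table above, not the site's CSV export schema):

```python
# A few (model, score) pairs copied from the leaderboard above.
results = [
    ("Ape-X", 23.9),
    ("GDI-I3", 24.0),
    ("DreamerV2", 14.0),
    ("DDQN (tuned) noop", -22.8),
]

# Sort descending, because the metric is "Score (higher is better)".
ranked = sorted(results, key=lambda row: row[1], reverse=True)

for rank, (model, score) in enumerate(ranked, start=1):
    print(f"{rank}. {model}: {score}")
```

Running this prints the subset in leaderboard order, with GDI-I3 first and DDQN (tuned) noop last, matching the table.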