Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Atari Games on Atari 2600 Tennis

Metric: Score (higher is better)


Results

| # | Model | Score | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | GDI-I3 | 24 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 2 | GDI-H3 | 24 | No | Generalized Data Distribution Iteration | 2022-06-07 | - |
| 3 | Ape-X | 23.9 | No | Distributed Prioritized Experience Replay | 2018-03-02 | Code |
| 4 | Agent57 | 23.84 | No | Agent57: Outperforming the Atari Human Benchmark | 2020-03-30 | Code |
| 5 | IQN | 23.6 | No | Implicit Quantile Networks for Distributional Re... | 2018-06-14 | Code |
| 6 | QR-DQN-1 | 23.6 | No | Distributional Reinforcement Learning with Quant... | 2017-10-27 | Code |
| 7 | C51 noop | 23.1 | No | A Distributional Perspective on Reinforcement Le... | 2017-07-21 | Code |
| 8 | ASL DDQN | 22.3 | No | Train a Real-world Local Path Planner in One Hou... | 2023-05-07 | Code |
| 9 | Recurrent Rational DQN Average | 20.6 | No | Adaptive Rational Activations to Boost Deep Rein... | 2021-02-18 | Code |
| 10 | Rational DQN Average | 20.5 | No | Adaptive Rational Activations to Boost Deep Rein... | 2021-02-18 | Code |
| 11 | DreamerV2 | 14 | No | Mastering Atari with Discrete World Models | 2020-10-05 | Code |
| 12 | DQN noop | 12.2 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 13 | DDQN+Pop-Art noop | 12.1 | No | Learning values across many orders of magnitude | 2016-02-24 | - |
| 14 | DQN hs | 11.1 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 15 | Duel noop | 5.1 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 16 | Duel hs | 4.4 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
| 17 | UCT | 2.8 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 18 | IMPALA (deep) | 0.55 | No | IMPALA: Scalable Distributed Deep-RL with Import... | 2018-02-05 | Code |
| 19 | SARSA | 0 | No | - | - | - |
| 20 | Prior noop | 0 | No | - | - | Code |
| 21 | Prior+Duel noop | 0 | No | - | - | Code |
| 22 | Bootstrapped DQN | 0 | No | - | - | Code |
| 23 | MuZero | 0 | No | - | - | Code |
| 24 | CGP | 0 | No | - | - | Code |
| 25 | NoisyNet-Dueling | 0 | No | - | - | Code |
| 26 | Advantage Learning | 0 | No | - | - | Code |
| 27 | MuZero (Res2 Adam) | 0 | No | - | - | Code |
| 28 | R2D2 | -0.1 | No | - | - | Code |
| 29 | Best Learner | -0.1 | No | The Arcade Learning Environment: An Evaluation P... | 2012-07-19 | Code |
| 30 | Gorila | -0.7 | No | Massively Parallel Methods for Deep Reinforcemen... | 2015-07-15 | Code |
| 31 | Nature DQN | -2.5 | No | - | - | Code |
| 32 | ES FF (1 hour) noop | -4.5 | No | Evolution Strategies as a Scalable Alternative t... | 2017-03-10 | Code |
| 33 | Prior hs | -5.3 | No | Prioritized Experience Replay | 2015-11-18 | Code |
| 34 | A3C FF hs | -6.3 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 35 | A3C LSTM hs | -6.4 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 36 | DDQN (tuned) hs | -7.8 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 37 | POP3D | -8.32 | No | Policy Optimization With Penalized Point Probabi... | 2018-07-02 | Code |
| 38 | A3C FF (1 day) hs | -10.2 | No | Asynchronous Methods for Deep Reinforcement Lear... | 2016-02-04 | Code |
| 39 | DNA | -10.9 | No | DNA: Proximal Policy Optimization with a Dual Ne... | 2022-06-20 | Code |
| 40 | Prior+Duel hs | -13.2 | No | Deep Reinforcement Learning with Double Q-learning | 2015-09-22 | Code |
| 41 | A2C + SIL | -17.3 | No | Self-Imitation Learning | 2018-06-14 | Code |
| 42 | DDQN (tuned) noop | -22.8 | No | Dueling Network Architectures for Deep Reinforce... | 2015-11-20 | Code |
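The ranking above is simply the Score column sorted in descending order, since higher is better for this metric. A minimal sketch of that ranking logic in Python (the `Entry` type is illustrative and the sample rows are a small subset of the table; this is not a Papers With Code API):

```python
from dataclasses import dataclass

@dataclass
class Entry:
    model: str
    score: float

# A few rows from the leaderboard above.
entries = [
    Entry("DQN noop", 12.2),
    Entry("GDI-I3", 24.0),
    Entry("Ape-X", 23.9),
    Entry("R2D2", -0.1),
]

# Rank: sort descending on score, matching the leaderboard's Score column.
ranked = sorted(entries, key=lambda e: e.score, reverse=True)
for rank, e in enumerate(ranked, start=1):
    print(f"{rank}. {e.model}: {e.score}")
```

With a "lower is better" metric the same code would drop `reverse=True`; the rest of the ranking is unchanged.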