Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Accumulating Eligibility Trace

Reinforcement Learning · Introduced 2000 · 15 papers

Description

An Accumulating Eligibility Trace is a type of eligibility trace in which the trace for a revisited state is incremented on top of its decayed value, rather than being reset, as it would be with a replacing trace. For the memory vector $\textbf{e}_{t} \in \mathbb{R}^{b}$, $\textbf{e}_{t} \geq \textbf{0}$:

$$\textbf{e}_{0} = \textbf{0}$$

$$\textbf{e}_{t} = \nabla\hat{v}\left(S_{t}, \boldsymbol{\theta}_{t}\right) + \gamma\lambda\textbf{e}_{t-1}$$
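The update above can be sketched in a TD($\lambda$) loop with tabular one-hot features, where the gradient of the value estimate is simply the indicator of the current state. This is a minimal illustrative sketch (the function names and the toy episode are not from the source):

```python
import numpy as np

def accumulating_trace_step(e, grad_v, gamma, lam):
    """One trace update: decay the old trace by gamma*lambda,
    then add (accumulate) the current value-function gradient."""
    return gamma * lam * e + grad_v

def td_lambda_episode(transitions, n_states, gamma=0.9, lam=0.8, alpha=0.1):
    """TD(lambda) with an accumulating trace over one episode.
    `transitions` is a list of (state, reward, next_state) tuples;
    next_state = None marks a terminal transition. With a tabular
    one-hot encoding, v_hat(s, theta) = theta[s], so the gradient
    of v_hat is the one-hot indicator of the current state."""
    theta = np.zeros(n_states)   # value-function weights
    e = np.zeros(n_states)       # e_0 = 0
    for s, r, s_next in transitions:
        grad_v = np.zeros(n_states)
        grad_v[s] = 1.0                         # nabla v_hat(S_t, theta_t)
        e = accumulating_trace_step(e, grad_v, gamma, lam)
        v_next = 0.0 if s_next is None else theta[s_next]
        delta = r + gamma * v_next - theta[s]   # TD error
        theta += alpha * delta * e              # credit all traced states
    return theta, e

# Toy episode that revisits state 0: its trace accumulates above 1,
# whereas a replacing trace would reset it to exactly 1 on revisit.
transitions = [(0, 0.0, 1), (1, 0.0, 0), (0, 1.0, None)]
theta, e = td_lambda_episode(transitions, n_states=2)
```

Because state 0 is visited twice, its trace after the episode is $(\gamma\lambda)^2 + 1 \approx 1.52 > 1$, which is exactly the accumulating behavior that distinguishes this trace from the replacing variant.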

Papers Using This Method

On-line Policy Improvement using Monte-Carlo Search (2025-01-09)
Model Predictive Control and Reinforcement Learning: A Unified Framework Based on Dynamic Programming (2024-06-02)
Large Language Models Play StarCraft II: Benchmarks and A Chain of Summarization Approach (2023-12-19)
A Robust and Opponent-Aware League Training Method for StarCraft II (2023-09-21)
AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning (2023-08-07)
On Efficient Reinforcement Learning for Full-length Game of StarCraft II (2022-09-23)
AI in Human-computer Gaming: Techniques, Challenges and Opportunities (2021-11-15)
Search in Imperfect Information Games (2021-11-10)
Rethinking of AlphaStar (2021-08-07)
An Introduction of mini-AlphaStar (2021-04-14)
Deep Reinforcement Learning with Function Properties in Mean Reversion Strategies (2021-01-09)
TStarBot-X: An Open-Sourced and Comprehensive Study for Efficient League Training in StarCraft II Full Game (2020-11-27)
Forensic Similarity for Digital Images (2019-02-13)
AlphaStar: An Evolutionary Computation Perspective (2019-02-05)
A Hierarchical Reinforcement Learning Method for Persistent Time-Sensitive Tasks (2016-06-20)