Convergence of SARSA with linear function approximation: The random horizon case
Lina Palmborg
2023-06-07 · reinforcement-learning
Abstract
The reinforcement learning algorithm SARSA combined with linear function approximation has been shown to converge for infinite horizon discounted Markov decision problems (MDPs). In this paper, we investigate the convergence of the algorithm for random horizon MDPs, for which convergence has not previously been established. Similar to earlier results for infinite horizon discounted MDPs, we show that if the behaviour policy is $\varepsilon$-soft and Lipschitz continuous with respect to the weight vector of the linear function approximation, with a small enough Lipschitz constant, then the algorithm converges with probability one for a random horizon MDP.
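To make the setting concrete, the following is a minimal sketch (not taken from the paper) of SARSA with linear function approximation on a toy random-horizon MDP. All names, the toy MDP, and the constants are assumptions for illustration. The behaviour policy mixes a softmax over the approximate Q-values with the uniform distribution, which makes it both $\varepsilon$-soft and Lipschitz continuous in the weight vector, matching the conditions in the abstract; per-step random termination plays the role that discounting plays in the infinite horizon case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random-horizon MDP (assumed for illustration only)
n_states, n_actions, n_features = 5, 2, 4
phi = rng.normal(size=(n_states, n_actions, n_features))           # feature map phi(s, a)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # transition kernel
R = rng.normal(size=(n_states, n_actions))                         # expected rewards
p_stop = 0.1   # per-step termination probability (the random horizon)
eps = 0.1      # epsilon-soft mixing weight
alpha = 0.05   # step size

def q(w, s, a):
    # Linear approximation Q_w(s, a) = phi(s, a)^T w
    return phi[s, a] @ w

def policy_probs(w, s, tau=1.0):
    # Epsilon-soft policy that is Lipschitz continuous in w:
    # a softmax over Q-values mixed with the uniform distribution.
    qs = np.array([q(w, s, a) for a in range(n_actions)])
    z = np.exp((qs - qs.max()) / tau)
    soft = z / z.sum()
    return eps / n_actions + (1 - eps) * soft

def sample_action(w, s):
    return rng.choice(n_actions, p=policy_probs(w, s))

w = np.zeros(n_features)
for episode in range(500):
    s = rng.integers(n_states)
    a = sample_action(w, s)
    while True:
        r = R[s, a]
        if rng.random() < p_stop:
            # Episode terminates: no bootstrap term in the update
            w = w + alpha * (r - q(w, s, a)) * phi[s, a]
            break
        s2 = rng.choice(n_states, p=P[s, a])
        a2 = sample_action(w, s2)
        # Undiscounted SARSA update; random termination replaces discounting
        delta = r + q(w, s2, a2) - q(w, s, a)
        w = w + alpha * delta * phi[s, a]
        s, a = s2, a2

print(w)  # learned weight vector of the linear approximation
```

This is a sketch of the algorithmic setting only; the paper's contribution is the convergence proof, not a new algorithm, and the small-Lipschitz-constant condition on the policy (controlled here by the temperature `tau`) is what the result requires.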