Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Reconciling Spatial and Temporal Abstractions for Goal Representation

Mehdi Zadem, Sergio Mover, Sao Mai Nguyen

2024-01-18 · Hierarchical Reinforcement Learning · Continuous Control

Paper · PDF · Code (official)

Abstract

Goal representation affects the performance of Hierarchical Reinforcement Learning (HRL) algorithms by decomposing the complex learning problem into easier subtasks. Recent studies show that representations that preserve temporally abstract environment dynamics are successful in solving difficult problems and provide theoretical guarantees for optimality. These methods, however, cannot scale to tasks where the environment dynamics increase in complexity, i.e., where the temporally abstract transition relations depend on a larger number of variables. Other efforts, on the other hand, have tried to use spatial abstraction to mitigate these issues, but their limitations include poor scalability to high-dimensional environments and a dependency on prior knowledge. In this paper, we propose a novel three-layer HRL algorithm that introduces both a spatial and a temporal goal abstraction at different levels of the hierarchy. We provide a theoretical study of the regret bounds of the learned policies. We evaluate the approach on complex continuous control tasks, demonstrating the effectiveness of the spatial and temporal abstractions learned by this approach. Open-source code is available at https://github.com/cosynus-lix/STAR.
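The three-layer hierarchy described in the abstract can be sketched conceptually: a top layer reasons over spatially abstract goals (regions of state space), a middle layer picks concrete subgoals that the low-level policy pursues over multiple steps, and a bottom layer emits primitive actions. The following is a minimal illustrative sketch, not the STAR implementation; all class and method names here are hypothetical, and the hand-coded policies stand in for what the paper learns.

```python
import numpy as np

class TopLayer:
    """Spatial abstraction: picks a target region of state space.
    (Hypothetical placeholder; STAR learns this abstraction.)"""
    def __init__(self, regions):
        self.regions = regions  # list of (low, high) axis-aligned boxes

    def select_region(self, state):
        # Placeholder policy: choose the region whose center is nearest.
        centers = [(lo + hi) / 2 for lo, hi in self.regions]
        dists = [np.linalg.norm(state - c) for c in centers]
        return int(np.argmin(dists))

class MidLayer:
    """Temporal abstraction: emits a subgoal held over several low-level steps."""
    def select_subgoal(self, state, region):
        lo, hi = region
        return (lo + hi) / 2  # placeholder: aim at the region's center

class LowLayer:
    """Stands in for a learned goal-conditioned low-level policy."""
    def act(self, state, subgoal, step_size=0.1):
        direction = subgoal - state
        norm = np.linalg.norm(direction)
        return step_size * direction / norm if norm > 0 else np.zeros_like(state)

# Tiny rollout step in a 2-D point environment.
regions = [(np.array([0., 0.]), np.array([1., 1.])),
           (np.array([1., 0.]), np.array([2., 1.]))]
top, mid, low = TopLayer(regions), MidLayer(), LowLayer()

state = np.array([1.8, 0.5])
r = top.select_region(state)            # spatial goal: a region index
subgoal = mid.select_subgoal(state, regions[r])  # temporal goal: a point
action = low.act(state, subgoal)        # primitive action toward subgoal
```

The key structural point the sketch illustrates is that each layer operates at a different abstraction of the goal space, so the top layer's decision problem stays small even as the raw state grows.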

Results

Task                                  Dataset      Metric   Value   Model
Hierarchical Reinforcement Learning   Ant + Maze   Return   0.85    STAR

Related Papers

- Supervised Fine Tuning on Curated Data is Reinforcement Learning (and can be improved) (2025-07-17)
- Strict Subgoal Execution: Reliable Long-Horizon Planning in Hierarchical Reinforcement Learning (2025-06-26)
- rQdia: Regularizing Q-Value Distributions With Image Augmentation (2025-06-26)
- Hierarchical Reinforcement Learning and Value Optimization for Challenging Quadruped Locomotion (2025-06-24)
- Tailored Conversations beyond LLMs: A RL-Based Dialogue Manager (2025-06-24)
- Sparse-Reg: Improving Sample Complexity in Offline Reinforcement Learning using Sparsity (2025-06-20)
- Fractional Reasoning via Latent Steering Vectors Improves Inference Time Compute (2025-06-18)
- HiLight: A Hierarchical Reinforcement Learning Framework with Global Adversarial Guidance for Large-Scale Traffic Signal Control (2025-06-17)