Description
Generative Adversarial Imitation Learning (GAIL) is a general framework for directly extracting a policy from expert demonstrations, as if the policy were obtained by reinforcement learning following inverse reinforcement learning. It trains a discriminator to distinguish expert state-action pairs from the policy's own, and uses the discriminator's output as a reward signal for the policy, so the two are optimized adversarially in the style of a GAN.
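A minimal sketch of the adversarial loop described above, using a toy logistic-regression discriminator over synthetic (state, action) features. The data, the gradient-ascent discriminator update, and the surrogate reward -log(1 - D(s, a)) are illustrative assumptions; a real GAIL implementation would feed this reward to a policy-gradient optimizer such as TRPO or PPO.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy (state, action) feature vectors: expert pairs cluster around +1,
# current-policy pairs around -1 (purely synthetic for illustration).
expert_sa = rng.normal(loc=1.0, scale=0.5, size=(256, 4))
policy_sa = rng.normal(loc=-1.0, scale=0.5, size=(256, 4))

# Logistic-regression discriminator D(s, a) = sigmoid(w . x + b).
w, b = np.zeros(4), 0.0
lr = 0.1
for _ in range(200):
    # Gradient ascent on log D(expert) + log(1 - D(policy)).
    d_exp = sigmoid(expert_sa @ w + b)
    d_pol = sigmoid(policy_sa @ w + b)
    grad_w = expert_sa.T @ (1.0 - d_exp) / len(expert_sa) \
             - policy_sa.T @ d_pol / len(policy_sa)
    grad_b = np.mean(1.0 - d_exp) - np.mean(d_pol)
    w += lr * grad_w
    b += lr * grad_b

# Surrogate reward handed to the policy optimizer: high where the
# discriminator believes the pair came from the expert.
reward = -np.log(1.0 - sigmoid(policy_sa @ w + b) + 1e-8)
```

In full GAIL the two updates alternate: the discriminator step above, then a policy-gradient step on `reward`, which pushes the policy's occupancy measure toward the expert's.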
Papers Using This Method
Imitation Learning of Correlated Policies in Stackelberg Games (2025-03-11)
Quality Diversity Imitation Learning (2024-10-08)
Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning (2024-09-27)
Adversarial Safety-Critical Scenario Generation using Naturalistic Human Driving Priors (2024-08-06)
RaCIL: Ray Tracing based Multi-UAV Obstacle Avoidance through Composite Imitation Learning (2024-06-24)
Diffusion-Reward Adversarial Imitation Learning (2024-05-25)
C-GAIL: Stabilizing Generative Adversarial Imitation Learning with Control Theory (2024-02-26)
Exploring Gradient Explosion in Generative Adversarial Imitation Learning: A Probabilistic Perspective (2023-12-18)
Hierarchical Generative Adversarial Imitation Learning with Mid-level Input Generation for Autonomous Driving on Urban Environments (2023-02-09)
Latent Policies for Adversarial Imitation Learning (2022-06-22)
Diverse Imitation Learning via Self-Organizing Generative Models (2022-05-06)
GAIL-PT: A Generic Intelligent Penetration Testing Framework with Generative Adversarial Imitation Learning (2022-04-05)
Rethinking ValueDice: Does It Really Improve Performance? (2022-02-05)
Continuous Control with Action Quantization from Demonstrations (2021-10-19)
Generative Adversarial Imitation Learning for End-to-End Autonomous Driving on Urban Environments (2021-10-16)
Stabilized Likelihood-based Imitation Learning via Denoising Continuous Normalizing Flow (2021-09-29)
Provably Efficient Generative Adversarial Imitation Learning for Online and Offline Setting with Linear Function Approximation (2021-08-19)
A Pragmatic Look at Deep Imitation Learning (2021-08-04)