Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Latent Dynamics for Planning from Pixels

Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, James Davidson

2018-11-12 · Motion Planning · Continuous Control · Variational Inference

Paper · PDF · Code (official)

Abstract

Planning has been very successful for control tasks with known environment dynamics. To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this using a latent dynamics model with both deterministic and stochastic transition components. Moreover, we propose a multi-step variational inference objective that we name latent overshooting. Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards, which exceed the difficulty of tasks that were previously solved by planning with learned models. PlaNet uses substantially fewer episodes and reaches final performance close to and sometimes higher than strong model-free algorithms.
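The "fast online planning in latent space" the abstract refers to is a cross-entropy method (CEM) search over action sequences, replanning at every step. The sketch below illustrates that loop on a toy stand-in model; the `dynamics` and `reward` callables, the 1-D toy state, and all hyperparameter values are illustrative assumptions, not PlaNet's learned RSSM or its actual settings.

```python
import numpy as np

def cem_plan(init_state, dynamics, reward, horizon=12, candidates=100,
             iters=10, top_k=10, action_dim=1, rng=None):
    """CEM planner over action sequences, in the style PlaNet uses for
    online planning in latent space. `dynamics(state, action)` and
    `reward(state)` are toy stand-ins for the learned latent model."""
    rng = np.random.default_rng() if rng is None else rng
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iters):
        # Sample candidate action sequences from the current Gaussian belief.
        actions = rng.normal(mean, std, size=(candidates, horizon, action_dim))
        returns = np.empty(candidates)
        for i, seq in enumerate(actions):
            state, total = init_state, 0.0
            for a in seq:
                state = dynamics(state, a)
                total += reward(state)
            returns[i] = total
        # Refit the belief to the top-k (elite) sequences.
        elite = actions[np.argsort(returns)[-top_k:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    # Execute only the first action, then replan (model-predictive control).
    return mean[0]

if __name__ == "__main__":
    # Toy model: 1-D state pushed by the action; reward peaks at state 0.
    dyn = lambda s, a: s + a
    rew = lambda s: -float(np.abs(s).sum())
    print(cem_plan(np.array([3.0]), dyn, rew))
```

In the actual agent, the rollout happens entirely in the compact latent state rather than in pixel space, which is what makes evaluating thousands of candidate sequences per step affordable.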

Results

Task               | Dataset                       | Metric | Value | Model
Continuous Control | DeepMind Walker Walk (Images) | Return | 890   | PlaNet
Continuous Control | DeepMind Cup Catch (Images)   | Return | 914   | PlaNet

Related Papers

Supervised Fine Tuning on Curated Data is Reinforcement Learning (and can be improved) (2025-07-17)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
Interpretable Bayesian Tensor Network Kernel Machines with Automatic Rank and Feature Selection (2025-07-15)
Epona: Autoregressive Diffusion World Model for Autonomous Driving (2025-06-30)
rQdia: Regularizing Q-Value Distributions With Image Augmentation (2025-06-26)
Scalable Bayesian Low-Rank Adaptation of Large Language Models via Stochastic Variational Subspace Inference (2025-06-26)
Ark: An Open-source Python-based Framework for Robot Learning (2025-06-24)
Drive-R1: Bridging Reasoning and Planning in VLMs for Autonomous Driving with Reinforcement Learning (2025-06-23)