Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Trajectory Dependencies for Human Motion Prediction

Wei Mao, Miaomiao Liu, Mathieu Salzmann, Hongdong Li

2019-08-15 · ICCV 2019

Tasks: Human Pose Forecasting · Human Motion Prediction · Motion Prediction · Multi-Person Pose Forecasting · Prediction

Links: Paper · PDF · Code (official) · Code (community)

Abstract

Human motion prediction, i.e., forecasting future body poses given an observed pose sequence, has typically been tackled with recurrent neural networks (RNNs). However, as evidenced by prior work, the resulting RNN models suffer from an accumulation of prediction errors, leading to undesired discontinuities in the predicted motion. In this paper, we propose a simple feed-forward deep network for motion prediction, which accounts for both temporal smoothness and spatial dependencies among human body joints. In this context, we propose to encode temporal information by working in trajectory space instead of the traditionally used pose space. This frees us from manually defining the range of temporal dependencies (or the temporal convolutional filter size, as done in previous work). Moreover, spatial dependencies of human pose are encoded by treating a human pose as a generic graph (rather than a human skeletal kinematic tree) formed by links between every pair of body joints. Instead of using a pre-defined graph structure, we design a new graph convolutional network that learns graph connectivity automatically. This allows the network to capture long-range dependencies beyond those of the human kinematic tree. We evaluate our approach on several standard benchmark datasets for motion prediction, including Human3.6M, the CMU motion capture dataset, and 3DPW. Our experiments clearly demonstrate that the proposed approach achieves state-of-the-art performance and is applicable to both angle-based and position-based pose representations. The code is available at https://github.com/wei-mao-2019/LearnTrajDep
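The two ideas in the abstract — encoding each joint's temporal trajectory with DCT coefficients (after padding the observed sequence by repeating the last pose), and a graph convolution whose adjacency is a freely learnable dense matrix rather than the skeleton — can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation; the sequence lengths, number of retained DCT coefficients, joint count, and layer width are hypothetical, and the "learnable" matrices are random stand-ins for parameters that would be trained by backpropagation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n): rows are frequency atoms."""
    m = np.zeros((n, n))
    for k in range(n):
        for t in range(n):
            m[k, t] = np.cos(np.pi * k * (2 * t + 1) / (2 * n))
    m[0] *= np.sqrt(1.0 / n)   # orthonormal scaling for the DC row
    m[1:] *= np.sqrt(2.0 / n)  # and for all other rows
    return m

# Hypothetical sizes: 10 observed frames padded to 20, 15 DCT coefficients kept.
T_obs, T_total, n_dct, n_joints = 10, 20, 15, 22
rng = np.random.default_rng(0)

# One coordinate per joint over time; pad by repeating the last observed frame,
# then encode each joint's trajectory with truncated DCT coefficients.
poses = rng.standard_normal((T_obs, n_joints))
padded = np.concatenate([poses, np.repeat(poses[-1:], T_total - T_obs, axis=0)])
D = dct_matrix(T_total)
coeffs = D[:n_dct] @ padded          # (n_dct, n_joints): trajectory-space input

# One graph-convolution layer with a fully learnable dense adjacency A
# (no fixed kinematic tree): H' = act(A H W).
hidden = 32
A = rng.standard_normal((n_joints, n_joints)) * 0.1  # learnable connectivity
W = rng.standard_normal((n_dct, hidden)) * 0.1       # learnable feature weights
H = coeffs.T                                         # (n_joints, n_dct)
H_next = np.tanh(A @ H @ W)                          # (n_joints, hidden)
print(H_next.shape)
```

Because `A` is dense and learned, every joint can attend to every other joint, which is how the network captures dependencies (e.g., between the two wrists) that a skeletal tree would place many hops apart.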

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Motion Forecasting | ExPI (common actions split) | Average MPJPE (mm) @ 200 ms | 90 | LTD |
| Motion Forecasting | ExPI (common actions split) | Average MPJPE (mm) @ 400 ms | 169 | LTD |
| Motion Forecasting | ExPI (common actions split) | Average MPJPE (mm) @ 600 ms | 226 | LTD |
| Motion Forecasting | ExPI (common actions split) | Average MPJPE (mm) @ 1000 ms | 303 | LTD |
| Motion Forecasting | ExPI (unseen actions split) | Average MPJPE (mm) @ 400 ms | 177 | LTD |
| Motion Forecasting | ExPI (unseen actions split) | Average MPJPE (mm) @ 600 ms | 233 | LTD |
| Motion Forecasting | ExPI (unseen actions split) | Average MPJPE (mm) @ 800 ms | 272 | LTD |
| Pose Estimation | Human3.6M | Average MPJPE (mm) @ 400 ms | 63.5 | LTD-GCN |
| Pose Estimation | Human3.6M | Average MPJPE (mm) @ 1000 ms | 113 | LTD-GCN |
| Pose Estimation | Human3.6M | MAR, walking, 400 ms | 0.56 | LTD-GCN |
| Pose Estimation | Human3.6M | MAR, walking, 1000 ms | 0.67 | LTD-GCN |
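The MPJPE values in the table are Mean Per-Joint Position Errors: the Euclidean distance between each predicted joint and its ground-truth position, averaged over joints and frames, reported in millimetres at a given prediction horizon. A minimal numpy sketch (the array shapes here are illustrative, not tied to any particular dataset):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error, in the same units as the inputs (here mm):
    per-joint Euclidean distance averaged over all joints and frames."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy example with hypothetical shapes (frames, joints, xyz).
gt = np.zeros((5, 17, 3))
pred = gt + np.array([3.0, 0.0, 4.0])  # every joint offset by a 3-4-5 vector
print(mpjpe(pred, gt))                  # 5.0
```

Lower is better; the @ 200 ms / @ 1000 ms horizons in the table show how the error grows as the model predicts further into the future.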

Related Papers

- Multi-Strategy Improved Snake Optimizer Accelerated CNN-LSTM-Attention-Adaboost for Trajectory Prediction (2025-07-21)
- Generative Click-through Rate Prediction with Applications to Search Advertising (2025-07-15)
- Conformation-Aware Structure Prediction of Antigen-Recognizing Immune Proteins (2025-07-11)
- Foundation models for time series forecasting: Application in conformal prediction (2025-07-09)
- Predicting Graph Structure via Adapted Flux Balance Analysis (2025-07-08)
- Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis (2025-07-08)
- A Wireless Foundation Model for Multi-Task Prediction (2025-07-08)
- High Order Collaboration-Oriented Federated Graph Neural Network for Accurate QoS Prediction (2025-07-07)