Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Action-Agnostic Human Pose Forecasting

Hsu-kuang Chiu, Ehsan Adeli, Borui Wang, De-An Huang, Juan Carlos Niebles

2018-10-23 · Human Pose Forecasting · Human Dynamics
Paper · PDF · Code (official)

Abstract

Predicting and forecasting human dynamics is an interesting but challenging task with several prospective applications in robotics, health-care, etc. Recently, several methods have been developed for human pose forecasting; however, they often introduce a number of limitations in their settings. For instance, previous work focused on either short-term or long-term predictions, sacrificing one for the other. Furthermore, they included the activity labels as part of the training process, and required them at test time. These limitations confine the usage of pose forecasting models for real-world applications, as often there are no activity-related annotations for testing scenarios. In this paper, we propose a new action-agnostic method for short- and long-term human pose forecasting. To this end, we propose a new recurrent neural network for modeling the hierarchical and multi-scale characteristics of human dynamics, denoted triangular-prism RNN (TP-RNN). Our model captures the latent hierarchical structure embedded in temporal human pose sequences by encoding the temporal dependencies at different time scales. For evaluation, we run an extensive set of experiments on the Human 3.6M and Penn Action datasets and show that our method outperforms baseline and state-of-the-art methods both quantitatively and qualitatively. Code is available at https://github.com/eddyhkchiu/pose_forecast_wacv/
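The abstract's core idea is a recurrent encoder whose states update at multiple time scales, so that both short- and long-term dependencies are captured. The toy sketch below illustrates that idea only: a fine-scale RNN ticks every frame and a coarse-scale RNN ticks every `stride` frames, fed by the fine state. All names, sizes, and the vanilla-RNN cell are assumptions for illustration; the actual TP-RNN architecture is defined in the paper and the linked repository.

```python
import numpy as np

def rnn_step(x, h, Wx, Wh, b):
    """One vanilla RNN update (the real model uses gated recurrent cells)."""
    return np.tanh(x @ Wx + h @ Wh + b)

def two_scale_encode(poses, hidden=16, stride=2, seed=0):
    """Encode a pose sequence of shape (T, D) at two time scales.

    A toy stand-in for TP-RNN's multi-scale hierarchy: the fine RNN
    consumes every frame; the coarse RNN consumes the fine hidden state
    once every `stride` frames, summarizing longer-range dynamics.
    """
    rng = np.random.default_rng(seed)          # random weights, illustration only
    T, D = poses.shape
    Wxf = rng.standard_normal((D, hidden)) * 0.1
    Whf = rng.standard_normal((hidden, hidden)) * 0.1
    bf = np.zeros(hidden)
    Wxc = rng.standard_normal((hidden, hidden)) * 0.1
    Whc = rng.standard_normal((hidden, hidden)) * 0.1
    bc = np.zeros(hidden)

    h_fine = np.zeros(hidden)
    h_coarse = np.zeros(hidden)
    for t in range(T):
        h_fine = rnn_step(poses[t], h_fine, Wxf, Whf, bf)
        if (t + 1) % stride == 0:              # coarse scale ticks less often
            h_coarse = rnn_step(h_fine, h_coarse, Wxc, Whc, bc)
    return h_fine, h_coarse
```

In a full forecasting model, a decoder conditioned on both states (e.g. their concatenation) would roll out future poses, which is what lets a multi-scale encoder serve short- and long-term horizons at once.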

Results

Task | Dataset | Metric | Value | Model
Pose Estimation | Human3.6M | MAR, walking, 1,000ms | 0.77 | TP-RNN
Pose Estimation | Human3.6M | MAR, walking, 400ms | 0.65 | TP-RNN
3D | Human3.6M | MAR, walking, 1,000ms | 0.77 | TP-RNN
3D | Human3.6M | MAR, walking, 400ms | 0.65 | TP-RNN
1 Image, 2*2 Stitchi | Human3.6M | MAR, walking, 1,000ms | 0.77 | TP-RNN
1 Image, 2*2 Stitchi | Human3.6M | MAR, walking, 400ms | 0.65 | TP-RNN

Related Papers

LLMs are Introvert (2025-07-08)
DTRT: Enhancing Human Intent Estimation and Role Allocation for Physical Human-Robot Collaboration (2025-05-23)
Progressive Inertial Poser: Progressive Real-Time Kinematic Chain Estimation for 3D Full-Body Pose from Three IMU Sensors (2025-05-08)
HA-VLN: A Benchmark for Human-Aware Navigation in Discrete-Continuous Environments with Dynamic Multi-Human Interactions, Real-World Validation, and an Open Leaderboard (2025-03-18)
A Survey on Human Interaction Motion Generation (2025-03-17)
MotionMap: Representing Multimodality in Human Pose Forecasting (2024-12-25)
A model for the dynamics of COVID-19 infection transmission in human with latent delay (2024-12-16)
Homogeneous Dynamics Space for Heterogeneous Humans (2024-12-09)