Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Convolutional Tensor-Train LSTM for Spatio-temporal Learning

Jiahao Su, Wonmin Byeon, Jean Kossaifi, Furong Huang, Jan Kautz, Animashree Anandkumar

2020-02-21 · NeurIPS 2020 · Tasks: Video Compression, Video Prediction, Activity Recognition
Paper · PDF · Code · Code (official)

Abstract

Learning from spatio-temporal data has numerous applications, such as human-behavior analysis, object tracking, video compression, and physics simulation. However, existing methods still perform poorly on challenging video tasks such as long-term forecasting. This is because these tasks require learning long-term spatio-temporal correlations in the video sequence. In this paper, we propose a higher-order convolutional LSTM model that can efficiently learn these correlations, along with a succinct representation of the history. This is accomplished through a novel tensor-train module that performs prediction by combining convolutional features across time. To make this feasible in terms of computation and memory requirements, we propose a novel convolutional tensor-train decomposition of the higher-order model. This decomposition reduces the model complexity by jointly approximating a sequence of convolutional kernels as a low-rank tensor-train factorization. As a result, our model outperforms existing approaches while using only a fraction of the parameters of the baseline models. Our results achieve state-of-the-art performance in a wide range of applications and datasets, including multi-step video prediction on the Moving-MNIST-2 and KTH action datasets, as well as early activity recognition on the Something-Something V2 dataset.
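The parameter saving described in the abstract can be illustrated with a back-of-the-envelope count. The sketch below is not the authors' code; the channel count, kernel size, history length, and TT-rank are illustrative assumptions, as is the exact core layout of the factorization.

```python
# Hedged sketch: compares the parameter count of a full higher-order
# convolutional model (one kernel per history step) against a low-rank
# tensor-train-style factorization of that kernel sequence.
# All shapes below are illustrative assumptions, not the paper's settings.

m, c, k, r = 3, 32, 5, 8  # history length, channels, kernel size, TT-rank

# Full higher-order model: one (c, c, k, k) convolutional kernel
# for each of the m previous hidden states.
full_params = m * c * c * k * k

# Tensor-train-style factorization (one hypothetical layout): a small
# (r, c, k, k) convolutional core per step, (r, r) mixing cores linking
# consecutive steps, and a final (c, r) output projection.
tt_params = m * (r * c * k * k) + (m - 1) * r * r + c * r

print(full_params, tt_params)          # 76800 19584
print(round(full_params / tt_params, 1))  # ~3.9x fewer parameters
```

The reduction grows with the channel count `c`, since the full model scales as O(m·c²·k²) while the factorized form scales as O(m·r·c·k²) for r ≪ c.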

Results

Task              Dataset  Metric  Value   Model
Video Prediction  KTH      Cond    10      Conv-TT-LSTM
Video Prediction  KTH      Pred    20      Conv-TT-LSTM
Video Prediction  KTH      PSNR    27.62   Conv-TT-LSTM
Video Prediction  KTH      SSIM    0.815   Conv-TT-LSTM
Video Prediction  KTH      LPIPS   0.196   Conv-TT-LSTM

Related Papers

ZKP-FedEval: Verifiable and Privacy-Preserving Federated Evaluation using Zero-Knowledge Proofs (2025-07-15)
GSVR: 2D Gaussian-based Video Representation for 800+ FPS with Hybrid Deformation Field (2025-07-08)
Epona: Autoregressive Diffusion World Model for Autonomous Driving (2025-06-30)
Whole-Body Conditioned Egocentric Video Prediction (2025-06-26)
SEZ-HARN: Self-Explainable Zero-shot Human Activity Recognition Network (2025-06-25)
Video Compression for Spatiotemporal Earth System Data (2025-06-24)
MinD: Unified Visual Imagination and Control via Hierarchical World Models (2025-06-23)
MSNeRV: Neural Video Representation with Multi-Scale Feature Fusion (2025-06-18)