Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Memory In Memory: A Predictive Neural Network for Learning Higher-Order Non-Stationarity from Spatiotemporal Dynamics

Yunbo Wang, Jianjin Zhang, Hongyu Zhu, Mingsheng Long, Jianmin Wang, Philip S. Yu

Published: 2018-11-19 · CVPR 2019
Tasks: Video Prediction · Time Series Forecasting · Precipitation Forecasting · Time Series Analysis
Links: Paper · PDF · Code (official)

Abstract

Natural spatiotemporal processes can be highly non-stationary in many ways, e.g., low-level non-stationarity such as spatial correlations or temporal dependencies of local pixel values, and high-level variations such as the accumulation, deformation, or dissipation of radar echoes in precipitation forecasting. By Cramér's decomposition, any non-stationary process can be decomposed into deterministic, time-variant polynomials plus a zero-mean stochastic term. By applying differencing operations appropriately, we can turn the time-variant polynomials into a constant, making the deterministic component predictable. However, most previous recurrent neural networks for spatiotemporal prediction do not use these differential signals effectively, and their relatively simple state transition functions prevent them from capturing complex variations in spacetime. We propose the Memory In Memory (MIM) networks and corresponding recurrent blocks for this purpose. The MIM blocks exploit the differential signals between adjacent recurrent states to model the non-stationary and approximately stationary properties of spatiotemporal dynamics with two cascaded, self-renewed memory modules. By stacking multiple MIM blocks, we can potentially handle higher-order non-stationarity. The MIM networks achieve state-of-the-art results on four spatiotemporal prediction tasks across both synthetic and real-world datasets. We believe the general idea of this work can be applied to other time-series forecasting tasks.
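The differencing argument can be checked numerically: a deterministic polynomial trend of degree d collapses to a constant after d differencing passes, leaving only the stochastic term to model. A quick hand-rolled illustration (not from the paper):

```python
import numpy as np

t = np.arange(10, dtype=float)
trend = 2 * t**2 + 3 * t + 1   # deterministic, time-variant polynomial (degree 2)
print(np.diff(trend, n=2))     # two differencing passes -> [4. 4. ...], a constant
```

Below is a minimal sketch of the block structure the abstract describes: the difference of adjacent hidden states is routed through two cascaded, self-renewed memory modules, one modeling the non-stationary component and one the approximately stationary residual. This is an illustration based only on the abstract; all names (DiffMemoryModule, MIMLikeBlock) and shapes are assumptions, and the official code implements the actual MIM architecture.

```python
import torch
import torch.nn as nn

class DiffMemoryModule(nn.Module):
    """A self-renewed memory module: a small ConvLSTM-style cell (assumed form)."""
    def __init__(self, channels: int):
        super().__init__()
        # Input, forget, and candidate gates computed from the module input
        # and the module's own memory state.
        self.gates = nn.Conv2d(2 * channels, 3 * channels, kernel_size=3, padding=1)

    def forward(self, x, mem):
        i, f, g = torch.chunk(self.gates(torch.cat([x, mem], dim=1)), 3, dim=1)
        mem = torch.sigmoid(f) * mem + torch.sigmoid(i) * torch.tanh(g)  # self-renewal
        return torch.tanh(mem), mem

class MIMLikeBlock(nn.Module):
    """Cascades a non-stationary module (fed h_t - h_{t-1}) into a stationary one."""
    def __init__(self, channels: int):
        super().__init__()
        self.non_stationary = DiffMemoryModule(channels)
        self.stationary = DiffMemoryModule(channels)
        self.out = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, h_t, h_prev, n_mem, s_mem):
        d, n_mem = self.non_stationary(h_t - h_prev, n_mem)  # differential signal
        s, s_mem = self.stationary(d, s_mem)                 # stationary residual
        h_new = torch.tanh(self.out(torch.cat([h_t, s], dim=1)))
        return h_new, n_mem, s_mem
```

Stacking several such blocks applies the differencing idea repeatedly, which is how the network can, in principle, absorb higher-order non-stationarity.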

Results

Task              Dataset       Metric  Value   Model
Video Prediction  Moving MNIST  MAE     101.1   MIM*
Video Prediction  Moving MNIST  MSE     44.2    MIM*
Video Prediction  Moving MNIST  SSIM    0.91    MIM*
Video Prediction  Moving MNIST  MAE     116.5   MIM
Video Prediction  Moving MNIST  MSE     52      MIM
Video Prediction  Moving MNIST  SSIM    0.874   MIM
Video Prediction  Human3.6M     MAE     1782.8  MIM
Video Prediction  Human3.6M     MSE     429.9   MIM
Video Prediction  Human3.6M     SSIM    0.79    MIM
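For context on the table above: in the video-prediction literature these metrics are computed per predicted frame and then aggregated over the sequence; on Moving MNIST, MSE and MAE are conventionally summed over pixels within a frame before averaging over frames, which is why the values run into the tens and hundreds. The sketch below shows one such convention; the paper defines its own evaluation protocol, so treat this as an assumption rather than its specification.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def sequence_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """pred, target: (T, H, W) float arrays in [0, 1]. One assumed convention:
    per-frame pixel sums for MSE/MAE, averaged over the T predicted frames."""
    mse = ((pred - target) ** 2).sum(axis=(1, 2)).mean()
    mae = np.abs(pred - target).sum(axis=(1, 2)).mean()
    ssim_val = float(np.mean([ssim(p, t, data_range=1.0) for p, t in zip(pred, target)]))
    return {"MSE": mse, "MAE": mae, "SSIM": ssim_val}
```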

Related Papers

The Power of Architecture: Deep Dive into Transformer Architectures for Long-Term Time Series Forecasting (2025-07-17)
Emergence of Functionally Differentiated Structures via Mutual Information Optimization in Recurrent Neural Networks (2025-07-17)
Data Augmentation in Time Series Forecasting through Inverted Framework (2025-07-15)
MoFE-Time: Mixture of Frequency Domain Experts for Time-Series Forecasting Models (2025-07-09)
Foundation models for time series forecasting: Application in conformal prediction (2025-07-09)
Bridging the Last Mile of Prediction: Enhancing Time Series Forecasting with Conditional Guided Flow Matching (2025-07-09)
Decomposing the Time Series Forecasting Pipeline: A Modular Approach for Time Series Representation, Information Extraction, and Projection (2025-07-08)
AI-Based Demand Forecasting and Load Balancing for Optimising Energy use in Healthcare Systems: A real case study (2025-07-08)