Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Video Saliency Prediction Using Enhanced Spatiotemporal Alignment Network

Jin Chen, Huihui Song, Kaihua Zhang, Bo Liu, Qingshan Liu

2020-01-02 · Tasks: Video Saliency Prediction · Saliency Prediction · Prediction · Video Saliency Detection

Paper · PDF · Code (official)

Abstract

Due to the variety of motions across different frames, it is highly challenging to learn an effective spatiotemporal representation for accurate video saliency prediction (VSP). To address this issue, we develop an effective spatiotemporal feature alignment network tailored to VSP, mainly comprising two key sub-networks: a multi-scale deformable convolutional alignment network (MDAN) and a bidirectional convolutional Long Short-Term Memory (Bi-ConvLSTM) network. The MDAN learns to align the features of the neighboring frames to the reference frame in a coarse-to-fine manner, which handles various motions well. Specifically, the MDAN has a pyramidal feature hierarchy that first leverages deformable convolution (Dconv) to align the lower-resolution features across frames, and then aggregates the aligned features to align the higher-resolution features, progressively enhancing the features from top to bottom. The output of the MDAN is then fed into the Bi-ConvLSTM for further enhancement, which captures useful long-term temporal information along the forward and backward timing directions to effectively guide attention-shift prediction under complex scene transformations. Finally, the enhanced features are decoded to generate the predicted saliency map. The proposed model is trained end-to-end without any intricate post-processing. Extensive evaluations on four VSP benchmark datasets demonstrate that the proposed method achieves favorable performance against state-of-the-art methods. The source code and all results will be released.
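The coarse-to-fine alignment idea in the MDAN can be sketched roughly as follows. This is a minimal NumPy toy, not the authors' implementation: `warp` is a stand-in for the learned per-pixel sampling of a deformable convolution, the integer shift offsets and the equal-weight (0.5/0.5) aggregation of adjacent pyramid levels are illustrative assumptions, and the Bi-ConvLSTM stage is omitted.

```python
import numpy as np

def warp(feat, offset):
    # Shift a feature map by an integer (dy, dx) offset. A simplified
    # stand-in for deformable convolution's learned per-pixel sampling.
    dy, dx = offset
    return np.roll(np.roll(feat, dy, axis=0), dx, axis=1)

def upsample2x(feat):
    # Nearest-neighbour 2x upsampling between pyramid levels.
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def coarse_to_fine_align(ref_pyramid, nbr_pyramid, offsets):
    """Align a neighbouring frame's feature pyramid to the reference,
    starting at the coarsest level and refining level by level
    (pyramids are ordered coarse -> fine, each level 2x the previous)."""
    aligned_coarse = None
    for ref, nbr, off in zip(ref_pyramid, nbr_pyramid, offsets):
        aligned = warp(nbr, off)
        if aligned_coarse is not None:
            # Aggregate the upsampled coarser alignment with this level,
            # so coarse motion estimates guide the finer ones.
            aligned = 0.5 * (aligned + upsample2x(aligned_coarse))
        aligned_coarse = aligned
    return aligned_coarse

# Toy usage: a 3-level pyramid (4x4 -> 8x8 -> 16x16) for one neighbour.
ref_pyr = [np.ones((4, 4)), np.ones((8, 8)), np.ones((16, 16))]
nbr_pyr = [np.ones((4, 4)), np.ones((8, 8)), np.ones((16, 16))]
out = coarse_to_fine_align(ref_pyr, nbr_pyr, [(0, 0), (1, 0), (0, 1)])
```

In the paper's full model, the aligned features for all neighbouring frames would then be fed to the Bi-ConvLSTM before decoding into a saliency map.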

Results

Task                 Dataset                         Metric   Value   Model
Saliency Detection   MSU Video Saliency Prediction   AUC-J    0.841   STRA-Net
Saliency Detection   MSU Video Saliency Prediction   CC       0.665   STRA-Net
Saliency Detection   MSU Video Saliency Prediction   FPS      3.35    STRA-Net
Saliency Detection   MSU Video Saliency Prediction   KLDiv    0.583   STRA-Net
Saliency Detection   MSU Video Saliency Prediction   NSS      1.81    STRA-Net
Saliency Detection   MSU Video Saliency Prediction   SIM      0.591   STRA-Net

Related Papers

Multi-Strategy Improved Snake Optimizer Accelerated CNN-LSTM-Attention-Adaboost for Trajectory Prediction (2025-07-21)
Generative Click-through Rate Prediction with Applications to Search Advertising (2025-07-15)
Conformation-Aware Structure Prediction of Antigen-Recognizing Immune Proteins (2025-07-11)
Foundation models for time series forecasting: Application in conformal prediction (2025-07-09)
Predicting Graph Structure via Adapted Flux Balance Analysis (2025-07-08)
Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis (2025-07-08)
A Wireless Foundation Model for Multi-Task Prediction (2025-07-08)
High Order Collaboration-Oriented Federated Graph Neural Network for Accurate QoS Prediction (2025-07-07)