Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Revisiting Video Saliency: A Large-scale Benchmark and a New Model

Wenguan Wang, Jianbing Shen, Fang Guo, Ming-Ming Cheng, Ali Borji

2018-01-23 · CVPR 2018 · Video Saliency Detection
Paper · PDF · Code (official)

Abstract

In this work, we contribute to video saliency research in two ways. First, we introduce a new benchmark for predicting human eye movements during free viewing of dynamic scenes, a resource this field has long needed. Our dataset, named DHF1K (Dynamic Human Fixation), consists of 1K high-quality, carefully selected video sequences spanning a wide range of scenes, motions, object types, and background complexity. Existing video saliency datasets lack the variety and generality of common dynamic scenes and fall short of covering challenging situations in unconstrained environments. In contrast, DHF1K makes a significant leap in scalability, diversity, and difficulty, and is expected to boost video saliency modeling. Second, we propose a novel video saliency model that augments a CNN-LSTM architecture with an attention mechanism to enable fast, end-to-end saliency learning. The attention mechanism explicitly encodes static saliency information, allowing the LSTM to focus on learning a more flexible temporal saliency representation across successive frames. This design fully leverages existing large-scale static fixation datasets, avoids overfitting, and significantly improves training efficiency and testing performance. We thoroughly compare our model against state-of-the-art saliency models on three large-scale datasets (DHF1K, Hollywood2, UCF sports). Experimental results over more than 1.2K testing videos containing 400K frames demonstrate that our model outperforms its competitors.
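
The abstract's key architectural idea (a static attention branch that gates CNN features before a convolutional LSTM) can be sketched compactly. The following is a minimal PyTorch illustration of that design; the layer sizes, module names, and the residual gating scheme are illustrative assumptions, not the authors' official ACLNet implementation.

```python
# Minimal sketch of a CNN-LSTM saliency model with a static attention
# branch, in the spirit of the abstract. All names and layer sizes are
# illustrative assumptions, not the authors' ACLNet code.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Single convolutional LSTM cell (all four gates via one 3x3 conv)."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, c

class AttentiveCNNLSTM(nn.Module):
    def __init__(self, feat_ch=64, hid_ch=64):
        super().__init__()
        # Frame encoder; in practice this would be a pretrained backbone,
        # here a toy conv stack so the sketch is self-contained.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Static attention branch: predicts a per-pixel saliency map that
        # could be supervised with static fixation data (per the abstract).
        self.attention = nn.Sequential(nn.Conv2d(feat_ch, 1, 1), nn.Sigmoid())
        self.lstm = ConvLSTMCell(feat_ch, hid_ch)
        self.readout = nn.Conv2d(hid_ch, 1, 1)

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        b, t, _, hgt, wid = frames.shape
        h = frames.new_zeros(b, self.lstm.hid_ch, hgt, wid)
        c = torch.zeros_like(h)
        sal_maps, att_maps = [], []
        for step in range(t):
            feat = self.encoder(frames[:, step])
            att = self.attention(feat)      # static saliency prior
            feat = feat * (1 + att)         # residual attention gating
            h, c = self.lstm(feat, (h, c))  # temporal saliency dynamics
            sal_maps.append(self.readout(h))
            att_maps.append(att)
        return torch.stack(sal_maps, 1), torch.stack(att_maps, 1)
```

The design point the abstract emphasizes is that the attention branch can be trained directly on abundant static fixation data, leaving the ConvLSTM to model only frame-to-frame dynamics, which is what the paper credits for reduced overfitting and faster training.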

Results

Task                Dataset                        Metric  Value  Model
Saliency Detection  MSU Video Saliency Prediction  AUC-J   0.839  ACLNet
Saliency Detection  MSU Video Saliency Prediction  CC      0.651  ACLNet
Saliency Detection  MSU Video Saliency Prediction  FPS     4.18   ACLNet
Saliency Detection  MSU Video Saliency Prediction  KLDiv   0.593  ACLNet
Saliency Detection  MSU Video Saliency Prediction  NSS     1.71   ACLNet
Saliency Detection  MSU Video Saliency Prediction  SIM     0.586  ACLNet
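
These are the standard fixation-prediction measures: higher AUC-J, CC, NSS, and SIM indicate better agreement with human fixations, lower KLDiv is better, and FPS measures inference speed rather than quality. Below is a minimal NumPy sketch of CC, KLDiv, NSS, and SIM under their usual definitions; exact conventions (smoothing, blurring, normalization) vary between benchmarks, so treat these as illustrative rather than the MSU benchmark's reference implementations.

```python
# Illustrative NumPy versions of common saliency metrics from the table.
# Conventions follow the usual saliency-literature definitions; individual
# benchmarks may normalize or smooth the maps differently.
import numpy as np

def cc(pred, gt):
    """Linear correlation coefficient between two saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def kldiv(pred, gt, eps=1e-8):
    """KL divergence from the ground-truth density to the prediction."""
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return float((g * np.log(g / (p + eps) + eps)).sum())

def nss(pred, fixations):
    """Normalized scanpath saliency: mean normalized value at fixations.
    `fixations` is a boolean map of human fixation locations."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fixations].mean())

def sim(pred, gt, eps=1e-8):
    """Histogram intersection (similarity) between normalized maps."""
    p = pred / (pred.sum() + eps)
    g = gt / (gt.sum() + eps)
    return float(np.minimum(p, g).sum())
```

For example, `cc(model_map, human_map)` returns a value in [-1, 1], and `sim` returns a value in [0, 1], with 1 meaning identical normalized distributions.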

Related Papers

AIM 2024 Challenge on Video Saliency Prediction: Methods and Results (2024-09-23)
Saliency Detection in Educational Videos: Analyzing the Performance of Current Models, Identifying Limitations and Advancement Directions (2024-08-08)
ViDSOD-100: A New Dataset and a Baseline Model for RGB-D Video Salient Object Detection (2024-06-18)
An Integrated System for Spatio-Temporal Summarization of 360-degrees Videos (2023-12-05)
Panoramic Vision Transformer for Saliency Detection in 360° Videos (2022-09-19)
A Comprehensive Survey on Video Saliency Detection with Auditory Information: the Audio-visual Consistency Perceptual is the Key! (2022-06-20)
GASP: Gated Attention For Saliency Prediction (2022-06-09)
Weakly Supervised Visual-Auditory Fixation Prediction with Multigranularity Perception (2021-12-27)