Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


RGBD Salient Object Detection via Deep Fusion

Liangqiong Qu, Shengfeng He, Jiawei Zhang, Jiandong Tian, Yandong Tang, Qingxiong Yang

Published: 2016-07-12
Tasks: Salient Object Detection · RGB-D Salient Object Detection · Object Detection · RGB Salient Object Detection · Saliency Detection
Links: Paper · PDF

Abstract

Numerous efforts have been made to design different low-level saliency cues for RGBD saliency detection, such as color or depth contrast features and background or color compactness priors. However, how these saliency cues interact with each other, and how to combine them effectively into a master saliency map, remains a challenging problem. In this paper, we design a new convolutional neural network (CNN) to fuse different low-level saliency cues into hierarchical features for automatically detecting salient objects in RGBD images. In contrast to existing works that directly feed raw image pixels to the CNN, the proposed method takes advantage of the knowledge in traditional saliency detection by adopting various meaningful and well-designed saliency feature vectors as input. This guides the training of the CNN towards detecting salient objects more effectively, due to the reduced learning ambiguity. We then integrate a Laplacian propagation framework with the learned CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three datasets demonstrate that the proposed method consistently outperforms state-of-the-art methods.
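The Laplacian propagation step mentioned in the abstract refines a per-region saliency estimate by spreading scores over an affinity graph of the image. The sketch below is a minimal, generic label-propagation solver in the Zhou et al. style (closed-form s = (I − αS)⁻¹y with normalized affinities), not the authors' exact formulation: the affinity matrix construction, the choice of α, and the final rescaling are assumptions for illustration.

```python
import numpy as np

def laplacian_propagation(W, y, alpha=0.9):
    """Propagate an initial saliency estimate y over an affinity graph.

    Solves s = (I - alpha * S)^{-1} y, where S = D^{-1/2} W D^{-1/2} is the
    symmetrically normalized affinity matrix. This is a generic graph
    propagation sketch, not the paper's exact formulation.

    W : (n, n) symmetric non-negative affinity matrix, e.g. between
        superpixels (construction of W is an assumption here).
    y : (n,) initial saliency scores, e.g. produced by the CNN.
    alpha : trade-off between the prior y and smoothness over the graph.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    n = W.shape[0]
    s = np.linalg.solve(np.eye(n) - alpha * S, y)
    # Rescale to [0, 1] so the result can be displayed as a saliency map.
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# Toy usage: a 4-node chain graph with a single confidently salient node.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0])
s = laplacian_propagation(W, y)
```

Saliency decays smoothly with graph distance from the confident node, which is the spatial-consistency effect the paper exploits.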

Results

| Task             | Dataset | Metric        | Value | Model |
|------------------|---------|---------------|-------|-------|
| Object Detection | NJU2K   | Average MAE   | 0.205 | LHM   |
| Object Detection | NJU2K   | S-Measure     | 51.4  | LHM   |
| Object Detection | NJU2K   | max E-Measure | 72.4  | LHM   |
| Object Detection | NJU2K   | max F-Measure | 63.2  | LHM   |
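The metrics in the table are standard salient-object-detection measures. A minimal sketch of how two of them (Average MAE and max F-measure) are typically computed, assuming a predicted saliency map and a binary ground-truth mask, both normalized to [0, 1]; the threshold sweep and β² = 0.3 follow common practice in the saliency literature, not this paper specifically:

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a saliency map and a binary mask."""
    return np.abs(pred.astype(float) - gt.astype(float)).mean()

def max_f_measure(pred, gt, beta2=0.3, n_thresh=255):
    """Best F-beta score over a sweep of binarization thresholds.

    beta2 = 0.3 weights precision over recall, a common saliency convention.
    """
    gt = gt.astype(bool)
    best = 0.0
    for t in np.linspace(0.0, 1.0, n_thresh, endpoint=False):
        b = pred > t
        tp = np.logical_and(b, gt).sum()
        precision = tp / (b.sum() + 1e-12)
        recall = tp / (gt.sum() + 1e-12)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-12)
        best = max(best, f)
    return best

# Toy usage: a perfect prediction scores MAE 0 and max F-measure ~1.
gt = np.array([[1, 0], [0, 1]])
pred = gt.astype(float)
err = mae(pred, gt)
f = max_f_measure(pred, gt)
```

S-measure and E-measure are structure- and alignment-aware metrics with more involved definitions, so they are omitted from this sketch.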

Related Papers

- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)
- Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection (2025-07-17)
- Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
- Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
- Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
- ECORE: Energy-Conscious Optimized Routing for Deep Learning Models at the Edge (2025-07-08)
- Beyond One Shot, Beyond One Perspective: Cross-View and Long-Horizon Distillation for Better LiDAR Representations (2025-07-07)