
A Weakly Supervised Learning Framework for Salient Object Detection via Hybrid Labels

Runmin Cong, Qi Qin, Chen Zhang, Qiuping Jiang, Shiqi Wang, Yao Zhao, Sam Kwong

Published: 2022-09-07
Tasks: Salient Object Detection · Object Detection · RGB Salient Object Detection · Saliency Detection
Links: Paper · PDF · Code (official)

Abstract

Fully-supervised salient object detection (SOD) methods have made great progress, but such methods often rely on a large number of pixel-level annotations, which are time-consuming and labour-intensive. In this paper, we focus on a new weakly-supervised SOD task under hybrid labels, where the supervision labels include a large number of coarse labels generated by a traditional unsupervised method and a small number of real labels. To address the issues of label noise and quantity imbalance in this task, we design a new pipeline framework with three sophisticated training strategies. In terms of model framework, we decouple the task into a label refinement sub-task and a salient object detection sub-task, which cooperate with each other and train alternately. Specifically, the R-Net is designed as a two-stream encoder-decoder model equipped with a Blender with Guidance and Aggregation Mechanisms (BGA), aiming to rectify the coarse labels into more reliable pseudo-labels, while the S-Net is a replaceable SOD network supervised by the pseudo-labels generated by the current R-Net. Note that only the trained S-Net is needed at test time. Moreover, to guarantee the effectiveness and efficiency of network training, we design three training strategies: an alternate iteration mechanism, a group-wise incremental mechanism, and a credibility verification mechanism. Experiments on five SOD benchmarks show that our method achieves competitive performance against weakly-supervised/unsupervised methods both qualitatively and quantitatively.
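The abstract's alternate iteration mechanism can be sketched as a loop in which a label-refinement step (the R-Net's role) rectifies coarse labels into pseudo-labels, and a detection step (the S-Net's role) is then supervised by those pseudo-labels. The sketch below is a toy illustration of that control flow only; `r_net_refine` and `s_net_predict` are hypothetical stand-ins, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def r_net_refine(coarse_label, s_net_pred):
    # Hypothetical stand-in for the R-Net: blend the noisy coarse label
    # with the current S-Net prediction and re-binarize, mimicking label
    # rectification into a pseudo-label. The real R-Net is a two-stream
    # encoder-decoder with the BGA module.
    return ((coarse_label + s_net_pred) / 2.0 > 0.5).astype(np.float32)

def s_net_predict(pseudo_label):
    # Hypothetical stand-in for one S-Net training pass: the prediction
    # drifts toward the pseudo-label, up to some noise.
    noise = rng.normal(0.0, 0.05, size=pseudo_label.shape)
    return np.clip(pseudo_label + noise, 0.0, 1.0)

def alternate_training(coarse_labels, rounds=3):
    """Alternate-iteration sketch: refine labels, then train the detector."""
    pred = coarse_labels.astype(np.float32)
    for _ in range(rounds):
        pseudo = r_net_refine(coarse_labels, pred)  # label refinement sub-task
        pred = s_net_predict(pseudo)                # SOD sub-task on pseudo-labels
    return pseudo, pred

coarse = (rng.random((8, 8)) > 0.6).astype(np.float32)  # toy noisy coarse label
pseudo, pred = alternate_training(coarse)
```

Only the detector (the S-Net analogue) would be kept for inference, matching the abstract's note that the R-Net is a training-time component.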

Results

Task: RGB Salient Object Detection · Model: HybridSOD

| Dataset  | F-Score | MAE   | S-Measure |
|----------|---------|-------|-----------|
| ECSSD    | 0.899   | 0.051 | 0.886     |
| PASCAL-S | 0.827   | 0.076 | 0.828     |
| HKU-IS   | 0.892   | 0.038 | 0.887     |
| DUTS-TE  | n/a     | 0.05  | 0.837     |
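Two of the three metrics in the table above have simple closed forms: MAE is the mean absolute difference between the predicted saliency map and the ground truth, and the F-score in the SOD literature conventionally uses beta squared = 0.3. A minimal NumPy sketch (the 0.5 binarization threshold is an illustrative assumption; S-Measure is structurally more involved and omitted here):

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error between a saliency map and ground truth, both in [0, 1]."""
    return float(np.mean(np.abs(pred - gt)))

def f_measure(pred, gt, beta_sq=0.3, thresh=0.5):
    """F-score with beta^2 = 0.3, the usual convention in SOD benchmarks."""
    binary = pred >= thresh          # binarize the predicted map
    gt_bin = gt >= 0.5               # ground truth as a binary mask
    tp = np.logical_and(binary, gt_bin).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt_bin.sum() + 1e-8)
    return float((1 + beta_sq) * precision * recall
                 / (beta_sq * precision + recall + 1e-8))

pred = np.array([0.9, 0.1, 0.8, 0.2])
gt = np.array([1.0, 0.0, 1.0, 0.0])
print(mae(pred, gt), f_measure(pred, gt))
```

A perfect prediction yields MAE 0 and an F-score of 1; lower MAE and higher F-score are better, consistent with the values reported above.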

Related Papers

- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)
- Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection (2025-07-17)
- Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
- Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
- Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
- ECORE: Energy-Conscious Optimized Routing for Deep Learning Models at the Edge (2025-07-08)
- Beyond One Shot, Beyond One Perspective: Cross-View and Long-Horizon Distillation for Better LiDAR Representations (2025-07-07)