Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels

Tao Pu, Tianshui Chen, Hefeng Wu, Liang Lin

2022-03-04 · Multi-label Image Recognition with Partial Labels
Paper · PDF · Code (official)

Abstract

Training multi-label image recognition models with partial labels, in which merely some labels are known while the others are unknown for each image, is a considerably challenging yet practical task. To address it, current algorithms mainly depend on pre-trained classification or similarity models to generate pseudo labels for the unknown labels. However, these algorithms require sufficient multi-label annotations to train those models, leading to poor performance, especially at low known-label proportions. In this work, we propose to blend category-specific representations across different images to transfer information from known labels to complement unknown labels, which dispenses with pre-trained models and thus does not depend on sufficient annotations. To this end, we design a unified semantic-aware representation blending (SARB) framework that exploits instance-level and prototype-level semantic representations to complement unknown labels via two complementary modules: 1) an instance-level representation blending (ILRB) module blends the representations of the known labels in one image into the representations of the same labels where they are unknown in another image, to complement these unknown labels; 2) a prototype-level representation blending (PLRB) module learns more stable representation prototypes for each category and blends the representations of unknown labels with the prototypes of the corresponding categories. Extensive experiments on the MS-COCO, Visual Genome, and Pascal VOC 2007 datasets show that the proposed SARB framework outperforms current leading competitors at all known-label proportion settings, e.g., with mAP improvements of 4.6%, 4.%, and 2.2% on these three datasets when the known label proportion is 10%. Code is available at https://github.com/HCPLab-SYSU/HCP-MLR-PL.
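As a rough illustration of the two blending operations the abstract describes, the sketch below mixes per-category feature vectors in plain Python. The function names, the fixed mixing coefficient `alpha`, and the list-of-vectors layout are all assumptions made for illustration; the paper's full text defines the actual features and blending scheme.

```python
# Hedged sketch of the abstract's two blending modules.
# `alpha` and all names here are illustrative assumptions, not the paper's code.

def instance_level_blend(feats_a, feats_b, known_a, known_b, alpha=0.5):
    """ILRB sketch: for each category known in image A but unknown in image B,
    mix A's category-specific representation into B's slot.

    feats_*: list of C per-category feature vectors (each a list of floats)
    known_*: list of C booleans marking which labels are annotated
    """
    blended = []
    for c in range(len(feats_b)):
        if known_a[c] and not known_b[c]:
            blended.append([alpha * b + (1 - alpha) * a
                            for a, b in zip(feats_a[c], feats_b[c])])
        else:
            blended.append(list(feats_b[c]))
    return blended


def prototype_level_blend(feats, known, prototypes, alpha=0.5):
    """PLRB sketch: blend each unknown label's representation with the
    prototype of the corresponding category; known labels pass through."""
    return [
        list(feats[c]) if known[c]
        else [alpha * f + (1 - alpha) * p
              for f, p in zip(feats[c], prototypes[c])]
        for c in range(len(feats))
    ]
```

Both functions leave known-label slots untouched, so only the unknown labels are complemented with transferred information, matching the abstract's description of the two complementary modules.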

Results

Task                             | Dataset         | Metric      | Value | Model
---------------------------------|-----------------|-------------|-------|------
Multi-Label Image Classification | MS-COCO-2014    | Average mAP | 77.9  | SARB
Multi-Label Image Classification | PASCAL VOC 2007 | Average mAP | 90.7  | SARB
Multi-Label Image Classification | Visual Genome   | Average mAP | 45.6  | SARB
Image Classification             | MS-COCO-2014    | Average mAP | 77.9  | SARB
Image Classification             | PASCAL VOC 2007 | Average mAP | 90.7  | SARB
Image Classification             | Visual Genome   | Average mAP | 45.6  | SARB
2D Classification                | MS-COCO-2014    | Average mAP | 77.9  | SARB
2D Classification                | PASCAL VOC 2007 | Average mAP | 90.7  | SARB
2D Classification                | Visual Genome   | Average mAP | 45.6  | SARB

Related Papers

Saliency Regularization for Self-Training with Partial Annotations (2023-01-01)
Texts as Images in Prompt Tuning for Multi-Label Image Recognition (2022-11-23)
DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations (2022-06-20)
Dual-Perspective Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels (2022-05-26)
Heterogeneous Semantic Transfer for Multi-label Recognition with Partial Labels (2022-05-23)
Structured Semantic Transfer for Multi-Label Recognition with Partial Labels (2021-12-21)
Learning Graph Convolutional Networks for Multi-Label Recognition and Applications (2021-03-03)
Knowledge-Guided Multi-Label Few-Shot Learning for General Image Recognition (2020-09-20)