Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Less is More: Fewer Interpretable Region via Submodular Subset Selection

Ruoyu Chen, Hua Zhang, Siyuan Liang, Jingzhi Li, Xiaochun Cao

2024-02-14 · Interpretability Techniques for Deep Learning · Error Understanding · Image Attribution

Paper · PDF · Code (official)

Abstract

Image attribution algorithms aim to identify important regions that are highly relevant to model decisions. Although existing attribution solutions can effectively assign importance to target elements, they still face the following challenges: 1) existing attribution methods generate inaccurate small regions thus misleading the direction of correct attribution, and 2) the model cannot produce good attribution results for samples with wrong predictions. To address the above challenges, this paper re-models the above image attribution problem as a submodular subset selection problem, aiming to enhance model interpretability using fewer regions. To address the lack of attention to local regions, we construct a novel submodular function to discover more accurate small interpretation regions. To enhance the attribution effect for all samples, we also impose four different constraints on the selection of sub-regions, i.e., confidence, effectiveness, consistency, and collaboration scores, to assess the importance of various subsets. Moreover, our theoretical analysis substantiates that the proposed function is in fact submodular. Extensive experiments show that the proposed method outperforms SOTA methods on two face datasets (Celeb-A and VGG-Face2) and one fine-grained dataset (CUB-200-2011). For correctly predicted samples, the proposed method improves the Deletion and Insertion scores with an average of 4.9% and 2.5% gain relative to HSIC-Attribution. For incorrectly predicted samples, our method achieves gains of 81.0% and 18.4% compared to the HSIC-Attribution algorithm in the average highest confidence and Insertion score respectively. The code is released at https://github.com/RuoyuChen10/SMDL-Attribution.
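The core idea of the abstract, selecting a small subset of image regions by greedily maximizing a submodular score, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real method scores sub-regions with four terms (confidence, effectiveness, consistency, and collaboration), while here a toy weighted-coverage function such as `coverage_value` stands in for that score.

```python
def coverage_value(selected, weights):
    """Toy monotone submodular objective: total weight of elements covered
    by the union of the selected regions (stand-in for the paper's score)."""
    covered = set()
    for region in selected:
        covered |= region
    return sum(weights[e] for e in covered)

def greedy_select(regions, weights, k):
    """Pick k regions, each step adding the one with the largest marginal gain.

    For monotone submodular objectives this greedy rule is guaranteed to
    achieve at least (1 - 1/e) of the optimal value.
    """
    selected = []
    remaining = list(regions)
    for _ in range(k):
        best = max(remaining,
                   key=lambda r: coverage_value(selected + [r], weights))
        selected.append(best)
        remaining.remove(best)
    return selected

# Illustrative example: "regions" are sets of pixel ids with importance weights.
weights = {0: 3.0, 1: 1.0, 2: 2.0, 3: 0.5}
regions = [{0, 1}, {1, 2}, {2, 3}, {0}]
picked = greedy_select(regions, weights, k=2)
```

Note how submodularity (diminishing returns) shapes the second pick: `{2, 3}` is chosen over the higher-weight but redundant overlap with the first region.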

Results

Task | Dataset | Metric | Value | Model
Image Attribution | VGGFace2 | Deletion AUC score (ArcFace ResNet-101) | 0.1304 | SMDL-Attribution (ICLR version)
Image Attribution | VGGFace2 | Insertion AUC score (ArcFace ResNet-101) | 0.6705 | SMDL-Attribution (ICLR version)
Image Attribution | CUB-200-2011 | Deletion AUC score (ResNet-101) | 0.0613 | SMDL-Attribution (ICLR version)
Image Attribution | CUB-200-2011 | Insertion AUC score (ResNet-101) | 0.7262 | SMDL-Attribution (ICLR version)
Image Attribution | CelebA | Deletion AUC score (ArcFace ResNet-101) | 0.1054 | SMDL-Attribution (ICLR version)
Image Attribution | CelebA | Insertion AUC score (ArcFace ResNet-101) | 0.5752 | SMDL-Attribution (ICLR version)
Error Understanding | CUB-200-2011 | Average highest confidence (EfficientNetV2-M) | 0.3306 | SMDL-Attribution (ICLR version)
Error Understanding | CUB-200-2011 | Average highest confidence (MobileNetV2) | 0.5367 | SMDL-Attribution (ICLR version)
Error Understanding | CUB-200-2011 | Average highest confidence (ResNet-101) | 0.4513 | SMDL-Attribution (ICLR version)
Error Understanding | CUB-200-2011 | Insertion AUC score (EfficientNetV2-M) | 0.1748 | SMDL-Attribution (ICLR version)
Error Understanding | CUB-200-2011 | Insertion AUC score (MobileNetV2) | 0.1922 | SMDL-Attribution (ICLR version)
Error Understanding | CUB-200-2011 | Insertion AUC score (ResNet-101) | 0.1772 | SMDL-Attribution (ICLR version)
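The Deletion and Insertion AUC scores in the table follow the standard causal-metric recipe: progressively remove (or reveal) pixels in order of attributed importance, record the model's confidence at each step, and take the area under that confidence curve. Lower Deletion and higher Insertion AUC indicate a better attribution. A minimal sketch of the AUC step, with made-up confidence curves for illustration:

```python
def curve_auc(confidences):
    """Area under a confidence curve sampled at evenly spaced fractions of
    pixels removed/revealed in [0, 1], via the trapezoidal rule."""
    n = len(confidences)
    dx = 1.0 / (n - 1)
    return sum((confidences[i] + confidences[i + 1]) / 2.0 * dx
               for i in range(n - 1))

# Deletion: confidence should fall quickly as the top regions are removed,
# so a lower AUC means the attribution found the truly important pixels.
deletion_curve = [0.9, 0.3, 0.1, 0.05, 0.0]

# Insertion: confidence should rise quickly as the top regions are revealed,
# so a higher AUC is better.
insertion_curve = [0.0, 0.5, 0.8, 0.9, 0.95]
```

In practice the curves are sampled over many steps (often one per fixed-size pixel batch), and scores are averaged over the test set.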

Related Papers

Sampling Matters in Explanations: Towards Trustworthy Attribution Analysis Building Block in Visual Models through Maximizing Explanation Certainty (2025-06-24)
Time series saliency maps: explaining models across multiple domains (2025-05-19)
WILD: a new in-the-Wild Image Linkage Dataset for synthetic image attribution (2025-04-28)
LoRAX: LoRA eXpandable Networks for Continual Synthetic Image Attribution (2025-04-10)
Dissecting and Mitigating Diffusion Bias via Mechanistic Interpretability (2025-03-26)
IBO: Inpainting-Based Occlusion to Enhance Explainable Artificial Intelligence Evaluation in Histopathology (2024-08-29)
Are handcrafted filters helpful for attributing AI-generated images? (2024-07-19)
I2AM: Interpreting Image-to-Image Latent Diffusion Models via Attribution Maps (2024-07-17)