Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks

Aditya Chattopadhyay, Anirban Sarkar, Prantik Howlader, Vineeth N. Balasubramanian

Published: 2017-10-30
Tasks: 3D Action Recognition, Caption Generation, Object Localization, Error Understanding, Action Recognition, Knowledge Distillation, Temporal Action Localization
Links: Paper · PDF · Code (official)

Abstract

Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision problems. However, these deep models are perceived as "black box" methods considering the lack of understanding of their internal functioning. There has been a significant recent interest in developing explainable deep learning models, and this paper is an effort in this direction. Building on a recently proposed method called Grad-CAM, we propose a generalized method called Grad-CAM++ that can provide better visual explanations of CNN model predictions, in terms of better object localization as well as explaining occurrences of multiple object instances in a single image, when compared to state-of-the-art. We provide a mathematical derivation for the proposed method, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the corresponding class label. Our extensive experiments and evaluations, both subjective and objective, on standard datasets showed that Grad-CAM++ provides promising human-interpretable visual explanations for a given CNN architecture across multiple tasks including classification, image caption generation and 3D action recognition; as well as in new settings such as knowledge distillation.
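The abstract describes the core of the method: each feature map of the last convolutional layer is weighted by a combination of the positive partial derivatives of the class score with respect to that map. A minimal sketch of that computation is below, using NumPy and assuming the activations and first-order gradients have already been extracted with a deep learning framework; the closed-form for the alpha coefficients (powers of the first derivative) follows the derivation commonly used in Grad-CAM++ implementations when the class score is taken before the softmax.

```python
import numpy as np

def grad_cam_pp(activations, grads):
    """Sketch of the Grad-CAM++ saliency computation.

    activations : (K, H, W) feature maps A^k of the last conv layer
    grads       : (K, H, W) gradients dY^c/dA^k of the class score,
                  assumed precomputed by an autodiff framework.
    Returns an (H, W) heat map normalised to [0, 1].
    """
    grads_2 = grads ** 2
    grads_3 = grads ** 3
    # Alpha coefficients: higher-order derivative terms expressed
    # via powers of the first derivative (closed form from the paper).
    denom = 2.0 * grads_2 + np.sum(activations * grads_3,
                                   axis=(1, 2), keepdims=True)
    denom = np.where(denom != 0.0, denom, 1e-8)  # avoid division by zero
    alpha = grads_2 / denom
    # Per-map weights: alpha-weighted sum over *positive* gradients only.
    weights = np.sum(alpha * np.maximum(grads, 0.0), axis=(1, 2))  # (K,)
    # Class-discriminative map: ReLU of the weighted feature-map sum.
    cam = np.maximum(np.tensordot(weights, activations, axes=1), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting (H, W) map is upsampled to the input resolution and overlaid on the image; the only framework-specific part is obtaining `grads`, e.g. via `torch.autograd.grad` on the chosen class logit.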

Results

Task                | Dataset      | Backbone          | Metric                     | Value  | Model
Error Understanding | CUB-200-2011 | EfficientNetV2-M  | Average highest confidence | 0.2659 | Grad-CAM++
Error Understanding | CUB-200-2011 | MobileNetV2       | Average highest confidence | 0.3462 | Grad-CAM++
Error Understanding | CUB-200-2011 | ResNet-101        | Average highest confidence | 0.2647 | Grad-CAM++
Error Understanding | CUB-200-2011 | EfficientNetV2-M  | Insertion AUC score        | 0.1605 | Grad-CAM++
Error Understanding | CUB-200-2011 | MobileNetV2       | Insertion AUC score        | 0.1284 | Grad-CAM++
Error Understanding | CUB-200-2011 | ResNet-101        | Insertion AUC score        | 0.1094 | Grad-CAM++

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
- DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
- HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
- Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
- KAT-V1: Kwai-AutoThink Technical Report (2025-07-11)
- Towards Collaborative Fairness in Federated Learning Under Imbalanced Covariate Shift (2025-07-11)