Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DTFD-MIL: Double-Tier Feature Distillation Multiple Instance Learning for Histopathology Whole Slide Image Classification

Hongrun Zhang, Yanda Meng, Yitian Zhao, Yihong Qiao, Xiaoyun Yang, Sarah E. Coupland, Yalin Zheng

2022-03-22 · CVPR 2022 · Image Classification · Whole Slide Images · Multiple Instance Learning

Paper · PDF · Code (official)

Abstract

Multiple instance learning (MIL) has been increasingly used in the classification of histopathology whole slide images (WSIs). However, MIL approaches for this classification problem still face unique challenges, particularly those related to small sample cohorts: the number of WSI slides (bags) is limited, while the huge resolution of a single WSI yields a large number of patches (instances) cropped from each slide. To address this issue, we propose to virtually enlarge the number of bags by introducing the concept of pseudo-bags, on which a double-tier MIL framework is built to effectively use the intrinsic features. In addition, we derive the instance probability under the framework of attention-based MIL, and use this derivation to help construct and analyze the proposed framework. The proposed method outperforms other recent methods on CAMELYON-16 by substantial margins, and also performs better on the TCGA lung cancer dataset. The proposed framework is ready to be extended to wider MIL applications. The code is available at: https://github.com/hrzhang1123/DTFD-MIL
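The pseudo-bag idea from the abstract can be illustrated with a short sketch: one WSI bag of patch features is randomly partitioned into several smaller pseudo-bags, each inheriting the parent bag's label. This is a minimal illustration only, not the authors' implementation; the function name and round-robin balancing are assumptions made for clarity.

```python
import random

def split_into_pseudo_bags(instances, num_pseudo_bags, seed=None):
    """Randomly partition one bag's instances (e.g. patch features)
    into pseudo-bags, each inheriting the parent bag's label.

    This virtually enlarges the training set: one WSI bag with N
    patches yields `num_pseudo_bags` smaller labeled bags.
    """
    rng = random.Random(seed)
    shuffled = list(instances)
    rng.shuffle(shuffled)
    # Round-robin assignment keeps pseudo-bag sizes balanced
    # (sizes differ by at most one instance).
    return [shuffled[i::num_pseudo_bags] for i in range(num_pseudo_bags)]

# Example: a bag of 10 patch identifiers split into 3 pseudo-bags.
bag = [f"patch_{i}" for i in range(10)]
for pb in split_into_pseudo_bags(bag, 3, seed=0):
    print(len(pb), pb)
```

In the paper's double-tier setting, a first-tier MIL model scores each pseudo-bag, and a second tier aggregates across pseudo-bags of the same slide; the splitting step above is only the data-side prerequisite for that pipeline.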

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Multiple Instance Learning | TCGA | ACC | 0.951 | DTFD-MIL (AFS) |
| Multiple Instance Learning | TCGA | ACC | 0.891 | DTFD-MIL (MaxMinS) |
| Multiple Instance Learning | TCGA | AUC | 0.961 | DTFD-MIL (MaxMinS) |
| Multiple Instance Learning | TCGA | ACC | 0.868 | DTFD-MIL (MaxS) |
| Multiple Instance Learning | TCGA | AUC | 0.919 | DTFD-MIL (MaxS) |
| Multiple Instance Learning | TCGA | AUC | 0.955 | DTFD-MIL (MAS) |
| Multiple Instance Learning | CAMELYON16 | ACC | 0.908 | DTFD-MIL (AFS) |
| Multiple Instance Learning | CAMELYON16 | AUC | 0.946 | DTFD-MIL (AFS) |
| Multiple Instance Learning | CAMELYON16 | ACC | 0.897 | DTFD-MIL (MAS) |
| Multiple Instance Learning | CAMELYON16 | AUC | 0.945 | DTFD-MIL (MAS) |
| Multiple Instance Learning | CAMELYON16 | ACC | 0.899 | DTFD-MIL (MaxMinS) |
| Multiple Instance Learning | CAMELYON16 | AUC | 0.941 | DTFD-MIL (MaxMinS) |
| Multiple Instance Learning | CAMELYON16 | ACC | 0.864 | DTFD-MIL (MaxS) |
| Multiple Instance Learning | CAMELYON16 | AUC | 0.907 | DTFD-MIL (MaxS) |
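The abstract's instance-probability derivation is built on attention-based MIL pooling, where each instance receives a learned attention weight and the bag embedding is the weighted sum of instance features. Below is a minimal NumPy sketch of that standard pooling step (in the style of Ilse et al., 2018), not DTFD-MIL itself; the function name and random parameters are illustrative assumptions.

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling:
    a_k = softmax_k(w^T tanh(V h_k)),  z = sum_k a_k h_k.

    H: (K, d) instance (patch) features
    V: (m, d) projection matrix, w: (m,) scoring vector
    Returns: bag embedding z of shape (d,) and weights a of shape (K,).
    """
    scores = np.tanh(H @ V.T) @ w      # one scalar score per instance
    scores = scores - scores.max()     # numerical stability for softmax
    a = np.exp(scores) / np.exp(scores).sum()
    z = a @ H                          # attention-weighted bag embedding
    return z, a

# Example: 8 instances with 16-dim features.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 16))
V = rng.standard_normal((32, 16))
w = rng.standard_normal(32)
z, a = attention_mil_pool(H, V, w)
print(z.shape, a.shape)
```

The attention weights `a` are what the paper's derivation connects to per-instance probabilities; in DTFD-MIL this pooling is applied within pseudo-bags at the first tier and again across pseudo-bag representations at the second tier.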

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)
- Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks (2025-07-14)
- FedGSCA: Medical Federated Learning with Global Sample Selector and Client Adaptive Adjuster under Label Noise (2025-07-13)