Smooth Attention for Deep Multiple Instance Learning: Application to CT
Intracranial Hemorrhage Detection
- URL: http://arxiv.org/abs/2307.09457v1
- Date: Tue, 18 Jul 2023 17:38:04 GMT
- Title: Smooth Attention for Deep Multiple Instance Learning: Application to CT
Intracranial Hemorrhage Detection
- Authors: Yunan Wu, Francisco M. Castro-Macías, Pablo Morales-Álvarez,
Rafael Molina, Aggelos K. Katsaggelos
- Abstract summary: Multiple Instance Learning (MIL) has been widely applied to medical imaging diagnosis, where bag labels are known and instance labels inside bags are unknown.
In this study, we propose a smooth attention deep MIL (SA-DMIL) model.
Smoothness is achieved by the introduction of first and second order constraints on the latent function encoding the attention paid to each instance in a bag.
- Score: 17.27358760040812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple Instance Learning (MIL) has been widely applied to medical imaging
diagnosis, where bag labels are known and instance labels inside bags are
unknown. Traditional MIL assumes that instances in each bag are independent
samples from a given distribution. However, instances are often spatially or
sequentially ordered, and one would expect similar diagnostic importance for
neighboring instances. To address this, in this study, we propose a smooth
attention deep MIL (SA-DMIL) model. Smoothness is achieved by the introduction
of first and second order constraints on the latent function encoding the
attention paid to each instance in a bag. The method is applied to the
detection of intracranial hemorrhage (ICH) on head CT scans. The results show
that this novel SA-DMIL: (a) achieves better performance than the non-smooth
attention MIL at both scan (bag) and slice (instance) levels; (b) learns
spatial dependencies between slices; and (c) outperforms current
state-of-the-art MIL methods on the same ICH test set.
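The smoothness described in the abstract can be pictured as a penalty on how sharply the attention changes between neighboring slices. The following is a minimal sketch of such first- and second-order penalties on a 1-D chain of instances; the function name and the weights `alpha` and `beta` are illustrative, not taken from the paper:

```python
import numpy as np

def smoothness_penalty(f, alpha=1.0, beta=1.0):
    """Penalize non-smooth per-slice attention values f (shape: [num_instances]).

    First-order term discourages jumps between adjacent slices;
    second-order term discourages changes in the rate of change.
    """
    d1 = np.diff(f)        # first-order differences: f[i+1] - f[i]
    d2 = np.diff(f, n=2)   # second-order differences
    return alpha * np.sum(d1 ** 2) + beta * np.sum(d2 ** 2)
```

A constant attention profile incurs zero penalty, while an attention spike on a single slice is penalized by both terms, pushing neighboring slices toward similar diagnostic importance.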
Related papers
- Attention Is Not What You Need: Revisiting Multi-Instance Learning for Whole Slide Image Classification [51.95824566163554]
We argue that synergizing the standard MIL assumption with variational inference encourages the model to focus on tumour morphology instead of spurious correlations.
Our method also achieves better classification boundaries for identifying hard instances and mitigates the effect of spurious correlations between bags and labels.
arXiv Detail & Related papers (2024-08-18T12:15:22Z)
- Reproducibility in Multiple Instance Learning: A Case For Algorithmic Unit Tests [59.623267208433255]
Multiple Instance Learning (MIL) is a sub-domain of classification problems with positive and negative labels and a "bag" of inputs.
In this work, we examine five of the most prominent deep-MIL models and find that none of them respects the standard MIL assumption.
We identify and demonstrate this problem via a proposed "algorithmic unit test", where we create synthetic datasets that can be solved by a MIL respecting model.
arXiv Detail & Related papers (2023-10-27T03:05:11Z)
- Deep Multiple Instance Learning with Distance-Aware Self-Attention [9.361964965928063]
We introduce a novel multiple instance learning (MIL) model with distance-aware self-attention (DAS-MIL).
Unlike existing relative position representations for self-attention which are discrete, our approach introduces continuous distance-dependent terms into the computation of the attention weights.
We evaluate our model on a custom MNIST-based MIL dataset and on CAMELYON16, a publicly available cancer metastasis detection dataset.
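The continuous distance-dependent term described above can be sketched as a bias subtracted from the attention logits before the softmax. The parameterization below (a single linear penalty with weight `scale`) is illustrative only, not the paper's exact formulation:

```python
import numpy as np

def distance_aware_attention(x, positions, scale=1.0):
    """Self-attention over instances x (shape: [n, d]) whose logits are
    biased by a continuous function of pairwise instance distance."""
    d = x.shape[-1]
    logits = x @ x.T / np.sqrt(d)                  # dot-product similarity
    dist = np.abs(positions[:, None] - positions[None, :])
    logits = logits - scale * dist                 # down-weight distant pairs
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)          # row-wise softmax
    return w @ x                                   # distance-aware mixing
```

Because the distance term is continuous rather than a discrete relative-position embedding, it extends naturally to irregular instance spacings.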
arXiv Detail & Related papers (2023-05-17T20:11:43Z)
- Unbiased Multiple Instance Learning for Weakly Supervised Video Anomaly Detection [74.80595632328094]
Multiple Instance Learning (MIL) is prevailing in Weakly Supervised Video Anomaly Detection (WSVAD).
We propose a new MIL framework: Unbiased MIL (UMIL), to learn unbiased anomaly features that improve WSVAD.
arXiv Detail & Related papers (2023-03-22T08:11:22Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention guiding loss, this leads to an accuracy boost of the trained models with few regions annotated for each class.
It may in the future serve as an important contribution to train MIL models in the clinically relevant context of cancer classification in histopathology.
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- Multiplex-detection Based Multiple Instance Learning Network for Whole Slide Image Classification [2.61155594652503]
Multiple instance learning (MIL) is a powerful approach to classify whole slide images (WSIs) for diagnostic pathology.
We propose a novel multiplex-detection-based multiple instance learning (MDMIL) to tackle the issues above.
Specifically, MDMIL is constructed from the internal query generation module (IQGM) and the multiplex detection module (MDM).
arXiv Detail & Related papers (2022-08-06T14:36:48Z)
- Feature Re-calibration based MIL for Whole Slide Image Classification [7.92885032436243]
Whole slide image (WSI) classification is a fundamental task for the diagnosis and treatment of diseases.
We propose to re-calibrate the distribution of a WSI bag (instances) by using the statistics of the max-instance (critical) feature.
We employ a position encoding module (PEM) to model spatial/morphological information, and perform pooling by multi-head self-attention (PSMA) with a Transformer encoder.
arXiv Detail & Related papers (2022-06-22T07:00:39Z)
- Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images [83.7047542725469]
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and machine learning algorithms development.
We present a method, named Label Cleaning Multiple Instance Learning (LC-MIL), to refine coarse annotations on a single WSI without the need of external training data.
Our experiments on a heterogeneous WSI set with breast cancer lymph node metastasis, liver cancer, and colorectal cancer samples show that LC-MIL significantly refines the coarse annotations, outperforming the state-of-the-art alternatives, even while learning from a single slide.
arXiv Detail & Related papers (2021-09-22T15:06:06Z)
- Sparse Network Inversion for Key Instance Detection in Multiple Instance Learning [24.66638752977373]
Multiple Instance Learning (MIL) involves predicting a single label for a bag of instances, given positive or negative labels at bag-level.
The attention-based deep MIL model is a recent advance in both bag-level classification and key instance detection.
We present a method to improve the attention-based deep MIL model in the task of key instance detection (KID).
arXiv Detail & Related papers (2020-09-07T07:01:59Z)
- Dual-stream Maximum Self-attention Multi-instance Learning [11.685285490589981]
Multi-instance learning (MIL) is a form of weakly supervised learning where a single class label is assigned to a bag of instances while the instance-level labels are not available.
We propose a dual-stream maximum self-attention MIL model (DSMIL) parameterized by neural networks.
Our method achieves superior performance compared to the best MIL methods and demonstrates state-of-the-art performance on benchmark MIL datasets.
arXiv Detail & Related papers (2020-06-09T22:44:58Z) - Weakly-Supervised Action Localization with Expectation-Maximization
Multi-Instance Learning [82.41415008107502]
Weakly-supervised action localization requires training a model to localize the action segments in a video given only the video-level action label.
It can be solved under the Multiple Instance Learning (MIL) framework, where a bag (video) contains multiple instances (action segments).
We show that our EM-MIL approach more accurately models both the learning objective and the MIL assumptions.
arXiv Detail & Related papers (2020-03-31T23:36:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.