Sparse Network Inversion for Key Instance Detection in Multiple Instance
Learning
- URL: http://arxiv.org/abs/2009.02909v2
- Date: Tue, 8 Sep 2020 01:09:53 GMT
- Title: Sparse Network Inversion for Key Instance Detection in Multiple Instance
Learning
- Authors: Beomjo Shin, Junsu Cho, Hwanjo Yu, Seungjin Choi
- Abstract summary: Multiple Instance Learning (MIL) involves predicting a single label for a bag of instances, given positive or negative labels at bag-level.
The attention-based deep MIL model is a recent advance in both bag-level classification and key instance detection.
We present a method to improve the attention-based deep MIL model in the task of key instance detection (KID).
- Score: 24.66638752977373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple Instance Learning (MIL) involves predicting a single label for a bag
of instances, given positive or negative labels at the bag level, without access
to instance-level labels during training. Since a positive bag contains
both positive and negative instances, it is often required to detect positive
instances (key instances) when a set of instances is categorized as a positive
bag. The attention-based deep MIL model is a recent advance in both bag-level
classification and key instance detection (KID). However, if the positive and
negative instances in a positive bag are not clearly distinguishable, the
attention-based deep MIL model has limited KID performance as the attention
scores become skewed toward only a few positive instances. In this paper, we present
a method to improve the attention-based deep MIL model in the task of KID. The main
idea is to use neural network inversion to find which instances contributed to the
bag-level prediction produced by the trained MIL model.
Moreover, we incorporate a sparseness constraint into the neural network
inversion, leading to the sparse network inversion which is solved by the
proximal gradient method. Numerical experiments on an MNIST-based image MIL
dataset and two real-world histopathology datasets verify the validity of our
method, demonstrating the KID performance is significantly improved while the
performance of bag-level prediction is maintained.
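The inversion step described above can be sketched as a proximal-gradient (ISTA) loop: take a gradient step that pushes a per-instance mask to preserve the bag-level score, then apply the L1 proximal operator (soft-thresholding) so that only a few instances keep nonzero weight. Everything below is an illustrative assumption, not the paper's implementation: the toy scorer `bag_score`, the weight vector `v`, the mask parameterization, and the hyperparameters `lam` and `lr` are hypothetical stand-ins for a trained deep MIL model.

```python
import numpy as np

# Toy stand-in for a trained attention-based MIL scorer. The weight
# vector `v` and softmax pooling are illustrative, not the paper's network.
v = np.array([1.0, -0.5])

def bag_score(X):
    """Attention-style pooling: softmax weights over per-instance logits."""
    logits = X @ v
    a = np.exp(logits - logits.max())
    a /= a.sum()
    return float(a @ logits)

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_inversion(X, lam=0.1, lr=0.05, steps=500):
    """ISTA: gradient step on -bag_score(mask * X), then the L1 prox.
    Nonzero mask entries flag candidate key instances."""
    m = np.ones(len(X))
    eps = 1e-5
    for _ in range(steps):
        g = np.zeros_like(m)
        for i in range(len(X)):  # numerical gradient of the smooth part
            mp, mn = m.copy(), m.copy()
            mp[i] += eps
            mn[i] -= eps
            g[i] = -(bag_score(mp[:, None] * X)
                     - bag_score(mn[:, None] * X)) / (2 * eps)
        m = soft_threshold(m - lr * g, lr * lam)
        m = np.clip(m, 0.0, 1.0)  # keep the mask in [0, 1] for stability
    return m

# Bag with one clearly positive instance and two near-neutral ones.
X = np.array([[2.0, 0.0], [0.1, 0.1], [0.0, 0.2]])
mask = sparse_inversion(X)
```

Under these toy assumptions, the mask retains the clearly positive first instance and shrinks the near-neutral ones to zero, which is the behavior the sparseness constraint is meant to induce.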
Related papers
- Attention Is Not What You Need: Revisiting Multi-Instance Learning for Whole Slide Image Classification [51.95824566163554]
We argue that synergizing the standard MIL assumption with variational inference encourages the model to focus on tumour morphology instead of spurious correlations.
Our method also achieves better classification boundaries for identifying hard instances and mitigates the effect of spurious correlations between bags and labels.
arXiv Detail & Related papers (2024-08-18T12:15:22Z)
- Task-oriented Embedding Counts: Heuristic Clustering-driven Feature Fine-tuning for Whole Slide Image Classification [1.292108130501585]
We propose a clustering-driven feature fine-tuning method (HC-FT) to enhance the performance of multiple instance learning.
The proposed method is evaluated on both CAMELYON16 and BRACS datasets, achieving an AUC of 97.13% and 85.85%, respectively.
arXiv Detail & Related papers (2024-06-02T08:53:45Z)
- Reproducibility in Multiple Instance Learning: A Case For Algorithmic Unit Tests [59.623267208433255]
Multiple Instance Learning (MIL) is a sub-domain of classification problems with positive and negative labels and a "bag" of inputs.
In this work, we examine five of the most prominent deep-MIL models and find that none of them respects the standard MIL assumption.
We identify and demonstrate this problem via a proposed "algorithmic unit test", in which we create synthetic datasets that can be solved by an MIL-respecting model.
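The "algorithmic unit test" idea can be illustrated with a small synthetic check, under the standard MIL assumption that a bag is positive iff at least one of its instances is positive. All names, thresholds, and the bag construction below are hypothetical illustrations, not that paper's actual test suite: a max-pooling rule that respects the assumption passes, while a mean-pooling rule that averages the key instance away fails.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_bag(positive, n=50):
    """Synthetic bag of scalar instances; a positive bag gets exactly
    one key instance with a large value (hypothetical construction)."""
    X = rng.normal(0.0, 0.2, size=n)
    if positive:
        X[rng.integers(n)] += 2.0
    return X

def max_pool_model(X, threshold=1.0):
    # Respects the standard MIL assumption: the bag is positive iff
    # at least one instance exceeds the threshold.
    return float(X.max() > threshold)

def mean_pool_model(X, threshold=0.2):
    # Violates the assumption: one key instance is washed out by averaging.
    return float(X.mean() > threshold)

def algorithmic_unit_test(model, trials=200):
    """The predicted bag label must match the OR over instance labels
    on every sampled bag; any mismatch fails the test."""
    for _ in range(trials):
        y = bool(rng.integers(2))
        if model(make_bag(y)) != float(y):
            return False
    return True
```

On these synthetic bags, `algorithmic_unit_test(max_pool_model)` passes and `algorithmic_unit_test(mean_pool_model)` fails, which is the kind of behavioral check the unit-test framing advocates.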
arXiv Detail & Related papers (2023-10-27T03:05:11Z)
- Disambiguated Attention Embedding for Multi-Instance Partial-Label Learning [68.56193228008466]
In many real-world tasks, the concerned objects can be represented as a multi-instance bag associated with a candidate label set.
Existing MIPL approaches follow the instance-space paradigm by assigning augmented candidate label sets of bags to each instance and aggregating bag-level labels from instance-level labels.
We propose an intuitive algorithm named DEMIPL, i.e., Disambiguated attention Embedding for Multi-Instance Partial-Label learning.
arXiv Detail & Related papers (2023-05-26T13:25:17Z)
- Feature Re-calibration based MIL for Whole Slide Image Classification [7.92885032436243]
Whole slide image (WSI) classification is a fundamental task for the diagnosis and treatment of diseases.
We propose to re-calibrate the distribution of a WSI bag (instances) by using the statistics of the max-instance (critical) feature.
We employ a position encoding module (PEM) to model spatial/morphological information, and perform pooling by multi-head self-attention (PSMA) with a Transformer encoder.
arXiv Detail & Related papers (2022-06-22T07:00:39Z)
- Online progressive instance-balanced sampling for weakly supervised object detection [0.0]
An online progressive instance-balanced sampling (OPIS) algorithm based on hard sampling and soft sampling is proposed in this paper.
The proposed method can significantly improve the baseline, which is also comparable to many existing state-of-the-art results.
arXiv Detail & Related papers (2022-06-21T12:48:13Z)
- Balancing Bias and Variance for Active Weakly Supervised Learning [9.145168943972067]
Modern multiple instance learning (MIL) models achieve competitive performance at the bag level.
However, instance-level prediction, which is essential for many important applications, is unsatisfactory.
We propose a novel deep subset method aimed at boosting instance-level prediction.
Experiments conducted on multiple real-world datasets clearly demonstrate state-of-the-art instance-level prediction.
arXiv Detail & Related papers (2022-06-12T07:15:35Z)
- Discovery-and-Selection: Towards Optimal Multiple Instance Learning for Weakly Supervised Object Detection [86.86602297364826]
We propose a discovery-and-selection approach fused with multiple instance learning (DS-MIL).
Our proposed DS-MIL approach can consistently improve the baselines, reporting state-of-the-art performance.
arXiv Detail & Related papers (2021-10-18T07:06:57Z)
- Dual-stream Maximum Self-attention Multi-instance Learning [11.685285490589981]
Multi-instance learning (MIL) is a form of weakly supervised learning where a single class label is assigned to a bag of instances while the instance-level labels are not available.
We propose a dual-stream maximum self-attention MIL model (DSMIL) parameterized by neural networks.
Our method achieves superior performance compared to the best MIL methods and demonstrates state-of-the-art performance on benchmark MIL datasets.
arXiv Detail & Related papers (2020-06-09T22:44:58Z)
- Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
arXiv Detail & Related papers (2020-04-14T06:18:50Z) - Weakly-Supervised Action Localization with Expectation-Maximization
Multi-Instance Learning [82.41415008107502]
Weakly-supervised action localization requires training a model to localize the action segments in the video given only video level action label.
It can be solved under the Multiple Instance Learning (MIL) framework, where a bag (video) contains multiple instances (action segments).
We show that our EM-MIL approach more accurately models both the learning objective and the MIL assumptions.
arXiv Detail & Related papers (2020-03-31T23:36:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.