Kernel Self-Attention in Deep Multiple Instance Learning
- URL: http://arxiv.org/abs/2005.12991v2
- Date: Fri, 5 Mar 2021 12:36:50 GMT
- Title: Kernel Self-Attention in Deep Multiple Instance Learning
- Authors: Dawid Rymarczyk and Adriana Borowa and Jacek Tabor and Bartosz Zieliński
- Abstract summary: We introduce the Self-Attention Attention-based MIL Pooling (SA-AbMILP) aggregation operation to account for the dependencies between instances.
We conduct several experiments on MNIST, histological, microbiological, and retinal databases to show that SA-AbMILP performs better than other models.
- Score: 11.57630563212961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Not all supervised learning problems are described by a pair of a fixed-size
input tensor and a label. In some cases, especially in medical image analysis,
a label corresponds to a bag of instances (e.g. image patches), and classifying
such a bag requires aggregating information from all of its instances. There
have been several attempts to create models that work with bags of instances;
however, they assume that there are no dependencies within the bag and that
the label is connected to at least one instance. In this work, we introduce
Self-Attention Attention-based MIL Pooling (SA-AbMILP), an aggregation
operation that accounts for the dependencies between instances. We conduct several experiments
on MNIST, histological, microbiological, and retinal databases to show that
SA-AbMILP performs better than other models. Additionally, we investigate
kernel variations of Self-Attention and their influence on the results.
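As a rough illustration of the mechanism described in the abstract, below is a minimal PyTorch sketch of an SA-AbMILP-style model: a self-attention layer whose similarity is computed with an RBF kernel (one of the kernel variants mentioned) models dependencies between instances, and gated attention-based MIL pooling aggregates the bag. The layer sizes, the residual connection, and the specific kernel are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of kernel self-attention + attention-based MIL pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelSelfAttention(nn.Module):
    """Self-attention over the instances of one bag; pairwise similarity is
    an RBF kernel of the queries/keys instead of the usual dot product."""
    def __init__(self, dim, gamma=1.0):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.gamma = gamma

    def forward(self, h):                           # h: (n_instances, dim)
        q, k, v = self.q(h), self.k(h), self.v(h)
        d2 = torch.cdist(q, k).pow(2)               # squared pairwise distances
        attn = F.softmax(-self.gamma * d2, dim=-1)  # RBF-kernel attention
        return h + attn @ v                         # residual connection

class GatedAttentionPooling(nn.Module):
    """Attention-based MIL pooling: a learned weighted average of instances."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.V, self.U = nn.Linear(dim, hidden), nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, h):
        a = self.w(torch.tanh(self.V(h)) * torch.sigmoid(self.U(h)))
        a = F.softmax(a, dim=0)                     # weights over instances
        return (a * h).sum(dim=0)                   # bag embedding: (dim,)

class SAAbMILP(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.sa = KernelSelfAttention(dim)
        self.pool = GatedAttentionPooling(dim)
        self.clf = nn.Linear(dim, 1)

    def forward(self, h):                           # one bag of embeddings
        return torch.sigmoid(self.clf(self.pool(self.sa(h))))

bag = torch.randn(20, 128)                          # 20 instances, e.g. patches
print(SAAbMILP()(bag))                              # bag-level probability
```

Using a scaled dot product in place of the RBF weights recovers standard self-attention; the "kernel variations" the abstract investigates swap in similarity functions like the RBF above.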
Related papers
- Sm: enhanced localization in Multiple Instance Learning for medical imaging classification [11.727293641333713]
Multiple Instance Learning (MIL) is widely used in medical imaging classification to reduce the labeling effort.
We propose a novel, principled, and flexible mechanism to model local dependencies.
Our module leads to state-of-the-art performance in localization while being competitive or superior in classification.
arXiv Detail & Related papers (2024-10-04T09:49:28Z)
- A General Model for Aggregating Annotations Across Simple, Complex, and Multi-Object Annotation Tasks [51.14185612418977]
A strategy to improve label quality is to ask multiple annotators to label the same item and aggregate their labels.
While a variety of bespoke models have been proposed for specific tasks, our work is the first to introduce aggregation methods that generalize across many diverse complex tasks.
This article extends our prior work with an investigation of three new research questions.
arXiv Detail & Related papers (2023-12-20T21:28:35Z)
- Disambiguated Attention Embedding for Multi-Instance Partial-Label Learning [68.56193228008466]
In many real-world tasks, the objects of interest can be represented as a multi-instance bag associated with a candidate label set.
Existing MIPL approaches follow the instance-space paradigm, assigning the bag's augmented candidate label set to each instance and aggregating instance-level labels into a bag-level label.
We propose an intuitive algorithm named DEMIPL, i.e., Disambiguated attention Embedding for Multi-Instance Partial-Label learning.
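To make the MIPL setting concrete, here is a small hypothetical sketch of a candidate-set (partial-label) loss: the model's bag-level probability mass on the candidate label set is maximized, and training disambiguates the true label within that set. This is a generic formulation, not DEMIPL's exact loss.

```python
# Hypothetical candidate-set loss for multi-instance partial-label learning.
import torch
import torch.nn.functional as F

def candidate_set_loss(bag_logits, candidates):
    """bag_logits: (n_classes,); candidates: indices of the candidate labels."""
    log_p = F.log_softmax(bag_logits, dim=-1)
    # negative log of the total probability assigned to the candidate set
    return -torch.logsumexp(log_p[candidates], dim=-1)

bag_logits = torch.randn(10)  # e.g. from an attention-pooled bag embedding
print(candidate_set_loss(bag_logits, torch.tensor([2, 5, 7])))
```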
arXiv Detail & Related papers (2023-05-26T13:25:17Z)
- Leveraging Instance Features for Label Aggregation in Programmatic Weak Supervision [75.1860418333995]
Programmatic Weak Supervision (PWS) has emerged as a widespread paradigm to synthesize training labels efficiently.
The core component of PWS is the label model, which infers true labels by aggregating the outputs of multiple noisy supervision sources as labeling functions.
Existing statistical label models typically rely only on the outputs of the LFs, ignoring the instance features when modeling the underlying generative process.
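For context, here is a minimal sketch of the feature-agnostic baseline this summary alludes to: majority voting over labeling-function outputs, which uses no instance features at all. This is a generic PWS baseline, not the paper's proposed method.

```python
# Generic majority-vote label model over labeling-function (LF) outputs.
import numpy as np

def majority_vote(lf_outputs, n_classes, abstain=-1):
    """lf_outputs: (n_items, n_lfs) votes in {0..n_classes-1}, with
    `abstain` marking labeling functions that did not fire on an item."""
    labels = np.empty(lf_outputs.shape[0], dtype=int)
    for i, row in enumerate(lf_outputs):
        votes = row[row != abstain]
        counts = (np.bincount(votes, minlength=n_classes)
                  if votes.size else np.zeros(n_classes))
        labels[i] = counts.argmax()  # ties and empty rows default to class 0
    return labels

votes = np.array([[1, 1, -1],   # two LFs agree, one abstains
                  [0, -1, 1]])  # conflict: argmax breaks the tie arbitrarily
print(majority_vote(votes, n_classes=2))  # -> [1 0]
```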
arXiv Detail & Related papers (2022-10-06T07:28:53Z)
- Model Agnostic Interpretability for Multiple Instance Learning [7.412445894287708]
In Multiple Instance Learning (MIL), models are trained using bags of instances, where only a single label is provided for each bag.
In this work, we establish the key requirements for interpreting MIL models.
We then go on to develop several model-agnostic approaches that meet these requirements.
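One simple model-agnostic probe in this spirit, shown below as a hypothetical example rather than one of the paper's exact methods, scores each instance by how much the bag-level prediction changes when that instance is dropped from the bag.

```python
# Occlusion-style instance importance for any black-box MIL model.
import torch

def instance_importance(model, bag):
    """bag: (n_instances, dim). Score = drop in the bag prediction when an
    instance is removed; works with any bag -> probability function."""
    with torch.no_grad():
        full = model(bag)
        scores = []
        for i in range(bag.shape[0]):
            reduced = torch.cat([bag[:i], bag[i + 1:]])    # drop instance i
            scores.append((full - model(reduced)).item())  # prediction change
    return scores

toy = lambda b: b.mean(dim=0).sum().sigmoid()  # stand-in for any MIL model
print(instance_importance(toy, torch.randn(5, 3)))
```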
arXiv Detail & Related papers (2022-01-27T17:55:32Z)
- Nested Multiple Instance Learning with Attention Mechanisms [2.6552823781152366]
Multiple instance learning (MIL) is a type of weakly supervised learning where multiple instances of data with unknown labels are sorted into bags.
We propose Nested MIL, in which only the outermost bag is labelled and instance labels are treated as latent variables.
Our proposed model achieves high accuracy and also identifies the relevant instances within image regions.
arXiv Detail & Related papers (2021-11-01T13:41:09Z)
- Accounting for Dependencies in Deep Learning Based Multiple Instance Learning for Whole Slide Imaging [8.712556146101953]
Multiple instance learning (MIL) is a key algorithm for classification of whole slide images (WSI).
Histology WSIs can have billions of pixels, which create enormous computational and annotation challenges.
We propose an instance-wise loss function based on instance pseudo-labels.
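A generic sketch of what such a loss can look like, under the assumption (ours, not necessarily the paper's) that pseudo-labels come from the most-attended instances of positive bags:

```python
# Hypothetical instance-wise pseudo-label loss for attention-based MIL.
import torch
import torch.nn.functional as F

def pseudo_label_loss(inst_logits, attn, bag_label, top_frac=0.1):
    """inst_logits, attn: (n_instances,) tensors; bag_label: 0 or 1."""
    pseudo = torch.zeros_like(inst_logits)
    if bag_label == 1:
        # hypothetical rule: most-attended instances of a positive bag
        # are pseudo-labeled positive; everything else stays negative
        k = max(1, int(top_frac * inst_logits.numel()))
        pseudo[attn.topk(k).indices] = 1.0
    return F.binary_cross_entropy_with_logits(inst_logits, pseudo)

# this instance-wise term would be added to the usual bag-level loss
print(pseudo_label_loss(torch.randn(100), torch.rand(100), bag_label=1))
```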
arXiv Detail & Related papers (2021-11-01T06:50:33Z)
- Non-I.I.D. Multi-Instance Learning for Predicting Instance and Bag Labels using Variational Auto-Encoder [1.52292571922932]
We propose the Multi-Instance Variational Auto-Encoder (MIVAE) algorithm which explicitly models the dependencies among the instances for predicting both bag labels and instance labels.
Experimental results on several multi-instance benchmarks and end-to-end medical imaging datasets demonstrate that MIVAE performs better than state-of-the-art algorithms for both instance-label and bag-label prediction tasks.
arXiv Detail & Related papers (2021-05-04T03:50:33Z)
- Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination [68.83098015578874]
We integrate between-instance similarity into contrastive learning, not directly by instance grouping, but by cross-level discrimination.
CLD effectively brings unsupervised learning closer to natural data and real-world applications.
CLD sets a new state-of-the-art on self-supervision, semi-supervision, and transfer learning benchmarks, and beats MoCo v2 and SimCLR on every reported benchmark.
arXiv Detail & Related papers (2020-08-09T21:13:13Z)
- Memory-Augmented Relation Network for Few-Shot Learning [114.47866281436829]
In this work, we investigate a new metric-learning method, the Memory-Augmented Relation Network (MRN).
In MRN, we choose visually similar samples from the working context and perform weighted information propagation, attentively aggregating helpful information from the chosen samples to enhance the query's representation.
We empirically demonstrate that MRN yields significant improvement over its ancestor and achieves competitive or even better performance when compared with other few-shot learning approaches.
arXiv Detail & Related papers (2020-05-09T10:09:13Z)
- Weakly-Supervised Action Localization with Expectation-Maximization Multi-Instance Learning [82.41415008107502]
Weakly-supervised action localization requires training a model to localize the action segments in a video given only the video-level action label.
It can be solved under the Multiple Instance Learning (MIL) framework, where a bag (video) contains multiple instances (action segments).
We show that our EM-MIL approach more accurately models both the learning objective and the MIL assumptions.
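A minimal, generic EM-style MIL loop in the spirit of this approach (illustrative only; the paper's E- and M-steps differ in their specifics): the E-step pseudo-labels the highest-scoring segments of positive videos using the current model, and the M-step fits segment scores to those pseudo-labels.

```python
# Generic EM-style training loop for MIL-based action localization.
import torch
import torch.nn as nn
import torch.nn.functional as F

def em_mil(model, bags, bag_labels, rounds=3, lr=1e-3, top_frac=0.2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        # E-step: pseudo-label segments with the current model (no gradients).
        pseudo = []
        with torch.no_grad():
            for bag, y in zip(bags, bag_labels):
                p = torch.zeros(bag.shape[0])
                if y == 1:  # positive video: top-scoring segments are actions
                    k = max(1, int(top_frac * bag.shape[0]))
                    p[model(bag).squeeze(-1).topk(k).indices] = 1.0
                pseudo.append(p)
        # M-step: fit per-segment scores to the current pseudo-labels.
        for bag, p in zip(bags, pseudo):
            loss = F.binary_cross_entropy_with_logits(model(bag).squeeze(-1), p)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

model = nn.Linear(16, 1)                        # scores one segment feature
bags = [torch.randn(30, 16) for _ in range(4)]  # 4 videos, 30 segments each
em_mil(model, bags, bag_labels=[1, 0, 1, 0])
```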
arXiv Detail & Related papers (2020-03-31T23:36:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.