Non-I.I.D. Multi-Instance Learning for Predicting Instance and Bag
Labels using Variational Auto-Encoder
- URL: http://arxiv.org/abs/2105.01276v1
- Date: Tue, 4 May 2021 03:50:33 GMT
- Title: Non-I.I.D. Multi-Instance Learning for Predicting Instance and Bag
Labels using Variational Auto-Encoder
- Authors: Weijia Zhang
- Abstract summary: We propose the Multi-Instance Variational Auto-Encoder (MIVAE) algorithm which explicitly models the dependencies among the instances for predicting both bag labels and instance labels.
Experimental results on several multi-instance benchmarks and end-to-end medical imaging datasets demonstrate that MIVAE performs better than state-of-the-art algorithms for both instance label and bag label prediction tasks.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-instance learning is a type of weakly supervised learning. It deals
with tasks where the data is a set of bags and each bag is a set of instances.
Only the bag labels are observed whereas the labels for the instances are
unknown. An important advantage of multi-instance learning is that by
representing objects as a bag of instances, it is able to preserve the inherent
dependencies among parts of the objects. Unfortunately, most existing
algorithms assume all instances to be identically and independently
distributed, an assumption that is violated in most real-world scenarios because
the instances within a bag are rarely independent. In this work, we propose the Multi-Instance
Variational Auto-Encoder (MIVAE) algorithm which explicitly models the
dependencies among the instances for predicting both bag labels and instance
labels. Experimental results on several multi-instance benchmarks and
end-to-end medical imaging datasets demonstrate that MIVAE performs better than
state-of-the-art algorithms for both instance label and bag label prediction
tasks.
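Since the abstract specifies only that MIVAE is a variational auto-encoder that models within-bag dependencies and predicts both instance and bag labels, the following PyTorch sketch shows one plausible way such a model could be wired. The shared bag-level latent, the max-pooled bag logit, the ELBO-plus-classification loss, and all layer sizes are assumptions made for illustration, not the paper's actual MIVAE architecture.
```python
# Hedged sketch only: a VAE-style multi-instance model with a shared bag-level
# latent plus per-instance latents. The real MIVAE architecture and objective
# are described in the paper and may differ substantially from this.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BagVAE(nn.Module):
    def __init__(self, in_dim, latent_dim=32, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.inst_mu = nn.Linear(hidden, latent_dim)        # per-instance latent
        self.inst_logvar = nn.Linear(hidden, latent_dim)
        self.bag_mu = nn.Linear(hidden, latent_dim)         # bag-level latent
        self.bag_logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(2 * latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))
        self.inst_clf = nn.Linear(2 * latent_dim, 1)         # instance-label head

    @staticmethod
    def _reparam(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    @staticmethod
    def _kl(mu, logvar):
        return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    def forward(self, bag):                                  # bag: (n_instances, in_dim)
        h = self.enc(bag)
        mu_i, lv_i = self.inst_mu(h), self.inst_logvar(h)
        h_bag = h.mean(dim=0, keepdim=True)                  # pool to a bag summary
        mu_b, lv_b = self.bag_mu(h_bag), self.bag_logvar(h_bag)
        z_i = self._reparam(mu_i, lv_i)
        z_b = self._reparam(mu_b, lv_b).expand(bag.size(0), -1)  # shared across instances
        z = torch.cat([z_i, z_b], dim=-1)
        recon = self.dec(z)
        inst_logits = self.inst_clf(z).squeeze(-1)           # instance-label predictions
        bag_logit = inst_logits.max()                        # classic MIL max assumption
        kl = self._kl(mu_i, lv_i) + self._kl(mu_b, lv_b)
        return recon, inst_logits, bag_logit, kl


def elbo_with_bag_label(model, bag, bag_label, beta=1.0):
    recon, _, bag_logit, kl = model(bag)
    rec = F.mse_loss(recon, bag)                             # reconstruction term
    clf = F.binary_cross_entropy_with_logits(bag_logit, bag_label)
    return rec + beta * kl + clf


model = BagVAE(in_dim=64)
bag = torch.randn(9, 64)                                     # one bag with 9 instances
loss = elbo_with_bag_label(model, bag, torch.tensor(1.0))
loss.backward()
```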
Related papers
- Instance Consistency Regularization for Semi-Supervised 3D Instance Segmentation [50.51125319374404]
We propose a novel self-training network InsTeacher3D to explore and exploit pure instance knowledge from unlabeled data.
Experimental results on multiple large-scale datasets show that InsTeacher3D significantly outperforms prior state-of-the-art semi-supervised approaches.
arXiv Detail & Related papers (2024-06-24T16:35:58Z)
Disambiguated Attention Embedding for Multi-Instance Partial-Label Learning [68.56193228008466]
In many real-world tasks, the concerned objects can be represented as a multi-instance bag associated with a candidate label set.
Existing MIPL approaches follow the instance-space paradigm, assigning the bag's augmented candidate label set to each instance and aggregating bag-level labels from instance-level labels; this paradigm is sketched in toy form after this list.
We propose an intuitive algorithm named DEMIPL, i.e., Disambiguated attention Embedding for Multi-Instance Partial-Label learning.
arXiv Detail & Related papers (2023-05-26T13:25:17Z)
Multi-Instance Partial-Label Learning: Towards Exploiting Dual Inexact Supervision [53.530957567507365]
In some real-world tasks, each training sample is associated with a candidate label set that contains one ground-truth label and some false positive labels.
In this paper, we formalize such problems as multi-instance partial-label learning (MIPL).
Existing multi-instance learning algorithms and partial-label learning algorithms are suboptimal for solving MIPL problems.
arXiv Detail & Related papers (2022-12-18T03:28:51Z)
Trustable Co-label Learning from Multiple Noisy Annotators [68.59187658490804]
Supervised deep learning depends on massive numbers of accurately annotated examples.
A typical alternative is learning from multiple noisy annotators.
This paper proposes a data-efficient approach called Trustable Co-label Learning (TCL).
arXiv Detail & Related papers (2022-03-08T16:57:00Z)
Nested Multiple Instance Learning with Attention Mechanisms [2.6552823781152366]
Multiple instance learning (MIL) is a type of weakly supervised learning where multiple instances of data with unknown labels are sorted into bags.
We propose Nested MIL, where only the outermost bag is labelled and instances are represented as latent labels.
Our proposed model achieves high accuracy while also identifying relevant instances within image regions.
arXiv Detail & Related papers (2021-11-01T13:41:09Z)
Fast learning from label proportions with small bags [0.0]
In learning from label proportions (LLP), the instances are grouped into bags, and the task is to learn an instance classifier given relative class proportions in training bags.
In this work, we focus on the case of small bags, which allows more efficient algorithms to be designed by explicitly considering all consistent label combinations; a brute-force version of this idea is sketched after this list.
arXiv Detail & Related papers (2021-10-07T13:11:18Z)
Active Learning in Incomplete Label Multiple Instance Multiple Label Learning [17.5720245903743]
We propose a novel bag-class pair based approach for active learning in the MIML setting.
Our approach is based on a discriminative graphical model with efficient and exact inference.
arXiv Detail & Related papers (2021-07-22T17:01:28Z)
How to trust unlabeled data? Instance Credibility Inference for Few-Shot Learning [47.21354101796544]
This paper presents a statistical approach, dubbed Instance Credibility Inference (ICI) to exploit the support of unlabeled instances for few-shot visual recognition.
We rank the credibility of pseudo-labeled instances along the regularization path of their corresponding incidental parameters, and the most trustworthy pseudo-labeled examples are preserved as the augmented labeled instances.
arXiv Detail & Related papers (2020-07-15T03:38:09Z)
Kernel Self-Attention in Deep Multiple Instance Learning [11.57630563212961]
We introduce the Self-Attention Attention-based MIL Pooling (SA-AbMILP) aggregation operation to account for the dependencies between instances; a generic sketch of this style of pooling appears after this list.
We conduct several experiments on MNIST, histological, microbiological, and retinal databases to show that SA-AbMILP performs better than other models.
arXiv Detail & Related papers (2020-05-25T14:59:13Z)
Weakly-Supervised Action Localization with Expectation-Maximization Multi-Instance Learning [82.41415008107502]
Weakly-supervised action localization requires training a model to localize the action segments in a video given only a video-level action label.
It can be solved under the Multiple Instance Learning (MIL) framework, where a bag (video) contains multiple instances (action segments); a generic hard-EM version of this framing is sketched after this list.
We show that our EM-MIL approach more accurately models both the learning objective and the MIL assumptions.
arXiv Detail & Related papers (2020-03-31T23:36:04Z)
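Sketch for the instance-space MIPL paradigm referenced in the Disambiguated Attention Embedding entry above. The scoring matrix, the mean aggregation, and the function name are illustrative assumptions; DEMIPL itself replaces this paradigm with a bag-level attention embedding.
```python
# Toy rendering of the instance-space MIPL paradigm: every instance inherits the
# bag's candidate label set, instances are scored per class, and the bag label is
# aggregated from instance-level scores. This is the paradigm DEMIPL argues
# against, not DEMIPL itself.
import numpy as np


def bag_label_instance_space(instance_scores, candidate_labels):
    """instance_scores: (n_instances, n_classes) array from any instance-level
    classifier; candidate_labels: class indices in the bag's candidate label set."""
    masked = instance_scores[:, candidate_labels]      # only candidate classes compete
    per_class = masked.mean(axis=0)                    # aggregate instances -> bag
    return candidate_labels[int(per_class.argmax())]


scores = np.array([[0.1, 0.7, 0.2],
                   [0.2, 0.2, 0.6],
                   [0.1, 0.8, 0.1]])
print(bag_label_instance_space(scores, candidate_labels=[1, 2]))   # -> 1
```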
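Sketch for the "Fast learning from label proportions with small bags" entry above: a deliberately brute-force rendering of "explicitly considering all consistent label combinations". The paper's contribution is an efficient algorithm for this computation, which this sketch does not attempt to reproduce; the loss form and names are illustrative assumptions.
```python
# Brute-force illustration of "all consistent label combinations" for one small bag
# in learning from label proportions (LLP).
from itertools import product
import math


def bag_nll(inst_probs, positive_count):
    """Negative log marginal likelihood of the bag's label proportion, summing the
    probability of every instance-label vector whose positive count matches it.
    inst_probs: list of predicted P(y_i = 1) for the instances in the bag."""
    total = 0.0
    for labels in product([0, 1], repeat=len(inst_probs)):  # tractable only for small bags
        if sum(labels) != positive_count:
            continue
        p = 1.0
        for p_i, y_i in zip(inst_probs, labels):
            p *= p_i if y_i else 1.0 - p_i
        total += p
    return -math.log(total + 1e-12)


# A bag of 3 instances whose proportion says exactly 1 instance is positive.
print(bag_nll([0.9, 0.2, 0.1], positive_count=1))            # ~0.39
```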
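Sketch for the "Kernel Self-Attention in Deep Multiple Instance Learning" entry above: a generic combination of a standard multi-head self-attention layer (to model inter-instance dependencies) with attention-based MIL pooling. The kernel self-attention variants studied in the paper are not reproduced here; layer sizes and names are assumptions.
```python
# Generic sketch: self-attention over instances followed by attention-based MIL
# pooling. Uses PyTorch's standard multi-head attention, not the paper's kernel
# self-attention.
import torch
import torch.nn as nn


class SelfAttentionMILPool(nn.Module):
    def __init__(self, in_dim, attn_dim=64, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(in_dim, heads, batch_first=True)
        self.attn_V = nn.Linear(in_dim, attn_dim)      # attention-pooling MLP
        self.attn_w = nn.Linear(attn_dim, 1)
        self.clf = nn.Linear(in_dim, 1)                # bag-level classifier head

    def forward(self, bag):                            # bag: (1, n_instances, in_dim)
        h, _ = self.self_attn(bag, bag, bag)           # instances attend to each other
        a = torch.softmax(self.attn_w(torch.tanh(self.attn_V(h))), dim=1)
        z = (a * h).sum(dim=1)                         # attention-weighted bag embedding
        return self.clf(z), a.squeeze(-1)              # bag logit, attention weights


bag = torch.randn(1, 7, 128)                           # one bag with 7 instances
bag_logit, attn = SelfAttentionMILPool(128)(bag)
```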
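Sketch for the "Weakly-Supervised Action Localization with Expectation-Maximization Multi-Instance Learning" entry above: a generic hard-EM MIL loop over videos (bags) of segment features (instances). The E-step heuristic (top quarter of segments in a positive video), the linear scorer, and all hyperparameters are assumptions, not the paper's EM-MIL objective.
```python
# Generic hard-EM MIL sketch for the video/action-segment framing above.
# E-step: pick likely action segments in positive videos under the current model;
# M-step: refit the segment scorer against those pseudo-labels.
import torch
import torch.nn as nn
import torch.nn.functional as F


def em_mil(videos, video_labels, feat_dim, n_iters=5, lr=1e-2):
    clf = nn.Linear(feat_dim, 1)                       # per-segment scorer
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(n_iters):
        # E-step: hard pseudo-labels per segment.
        targets = []
        for v, y in zip(videos, video_labels):
            with torch.no_grad():
                scores = clf(v).squeeze(-1)
            t = torch.zeros_like(scores)
            if y > 0:                                  # assume top quarter are action
                k = max(1, v.size(0) // 4)
                t[scores.topk(k).indices] = 1.0
            targets.append(t)
        # M-step: fit the scorer to the pseudo-labels.
        for v, t in zip(videos, targets):
            opt.zero_grad()
            loss = F.binary_cross_entropy_with_logits(clf(v).squeeze(-1), t)
            loss.backward()
            opt.step()
    return clf


videos = [torch.randn(12, 16), torch.randn(8, 16)]     # segments x features per video
segment_scorer = em_mil(videos, video_labels=[1.0, 0.0], feat_dim=16)
```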