Certainty Pooling for Multiple Instance Learning
- URL: http://arxiv.org/abs/2008.10548v1
- Date: Mon, 24 Aug 2020 16:38:46 GMT
- Title: Certainty Pooling for Multiple Instance Learning
- Authors: Jacob Gildenblat, Ido Ben-Shaul, Zvi Lapp, and Eldad Klaiman
- Abstract summary: We present a novel pooling operator called Certainty Pooling which incorporates the model certainty into bag predictions.
Our method outperforms other methods in both bag level and instance level prediction, especially when only small training sets are available.
- Score: 0.6299766708197883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple Instance Learning is a form of weakly supervised learning in which
the data is arranged in sets of instances called bags with one label assigned
per bag. The bag level class prediction is derived from the multiple instances
through application of a permutation invariant pooling operator on instance
predictions or embeddings. We present a novel pooling operator called
Certainty Pooling which incorporates the model certainty into bag
predictions resulting in a more robust and explainable model. We compare our
proposed method with other pooling operators in controlled experiments with low
evidence ratio bags based on MNIST, as well as on a real life histopathology
dataset - Camelyon16. Our method outperforms other methods in both bag level
and instance level prediction, especially when only small training sets are
available. We discuss the rationale behind our approach and the reasons for its
superiority for these types of datasets.
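The abstract describes certainty pooling only at a high level: instance predictions are combined by a permutation-invariant operator that weights them by model certainty. The sketch below is a hedged illustration, not the authors' exact formulation; it assumes certainty is estimated from the variance of Monte Carlo dropout forward passes and used as an inverse-variance weighting over instance predictions. The function name `certainty_pooling` and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def certainty_pooling(mc_preds: np.ndarray, eps: float = 1e-8) -> float:
    """Pool instance predictions of one bag into a bag-level score.

    mc_preds: array of shape (T, N) holding T stochastic (MC-dropout)
    forward passes over the N instances of a bag.
    """
    mean = mc_preds.mean(axis=0)           # per-instance mean prediction
    var = mc_preds.var(axis=0)             # per-instance predictive variance
    certainty = 1.0 / (var + eps)          # low variance -> high certainty
    weights = certainty / certainty.sum()  # normalize into instance weights
    # Weighted mean is symmetric in the instances, hence permutation invariant.
    return float((weights * mean).sum())

# Toy bag of two instances: instance 0 is confidently positive (low noise),
# instance 1 is noisy, so its influence on the bag score is down-weighted.
rng = np.random.default_rng(0)
bag = np.stack([
    np.full(2, 0.9) + rng.normal(0.0, [0.01, 0.3], size=2) for _ in range(50)
])
score = certainty_pooling(bag)
```

Because the noisy instance receives a much smaller weight, the bag score stays close to the confident instance's prediction, which is the intuition behind certainty-based pooling being more robust than plain mean pooling on low-evidence bags.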
Related papers
- Learning from Label Proportions and Covariate-shifted Instances [12.066922664696445]
In learning from label proportions (LLP) the aggregate label is the average of the instance-labels in a bag.
We develop methods for hybrid LLP which naturally incorporate the target bag-labels along with the source instance-labels.
arXiv Detail & Related papers (2024-11-19T08:36:34Z)
- A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels in prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z)
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where many minority categories contain only a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Good Instance Classifier is All You Need [18.832471712088353]
We propose, for the first time, an instance-level weakly supervised contrastive learning algorithm under the MIL setting.
We also propose an accurate pseudo label generation method through prototype learning.
arXiv Detail & Related papers (2023-07-05T12:44:52Z)
- Learning from Aggregated Data: Curated Bags versus Random Bags [35.394402088653415]
We explore the possibility of training machine learning models with aggregated data labels, rather than individual labels.
For the curated bag setting, we show that we can perform gradient-based learning without any degradation in performance.
In the random bag setting, our bound indicates a trade-off between the bag size and the achievable error rate.
arXiv Detail & Related papers (2023-05-16T15:53:45Z)
- Deep Active Learning with Contrastive Learning Under Realistic Data Pool Assumptions [2.578242050187029]
Active learning aims to identify the most informative data from an unlabeled data pool that enables a model to reach the desired accuracy rapidly.
Most existing active learning methods have been evaluated in an ideal setting where only samples relevant to the target task exist in an unlabeled data pool.
We introduce new active learning benchmarks that include ambiguous, task-irrelevant out-of-distribution samples as well as in-distribution samples.
arXiv Detail & Related papers (2023-03-25T10:46:10Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Fast learning from label proportions with small bags [0.0]
In learning from label proportions (LLP), the instances are grouped into bags, and the task is to learn an instance classifier given relative class proportions in training bags.
In this work, we focus on the case of small bags, which allows designing more efficient algorithms by explicitly considering all consistent label combinations.
arXiv Detail & Related papers (2021-10-07T13:11:18Z)
- An Empirical Comparison of Instance Attribution Methods for NLP [62.63504976810927]
We evaluate the degree to which different instance attribution methods agree on the importance of training samples.
We find that simple retrieval methods yield training instances that differ from those identified via gradient-based methods.
arXiv Detail & Related papers (2021-04-09T01:03:17Z)
- Few-Shot Learning with Intra-Class Knowledge Transfer [100.87659529592223]
We consider the few-shot classification task with an unbalanced dataset.
Recent works have proposed to solve this task by augmenting the training data of the few-shot classes using generative models.
We propose to leverage the intra-class knowledge from the neighbor many-shot classes with the intuition that neighbor classes share similar statistical information.
arXiv Detail & Related papers (2020-08-22T18:15:38Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information on this site and is not responsible for any consequences of its use.