Multiple Instance Learning via Iterative Self-Paced Supervised
Contrastive Learning
- URL: http://arxiv.org/abs/2210.09452v2
- Date: Tue, 11 Jul 2023 18:46:45 GMT
- Title: Multiple Instance Learning via Iterative Self-Paced Supervised
Contrastive Learning
- Authors: Kangning Liu, Weicheng Zhu, Yiqiu Shen, Sheng Liu, Narges Razavian,
Krzysztof J. Geras, Carlos Fernandez-Granda
- Abstract summary: Learning representations for individual instances when only bag-level labels are available is a challenge in multiple instance learning (MIL).
We propose a novel framework, Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR).
It improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels.
- Score: 22.07044031105496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning representations for individual instances when only bag-level labels
are available is a fundamental challenge in multiple instance learning (MIL).
Recent works have shown promising results using contrastive self-supervised
learning (CSSL), which learns to push apart representations corresponding to
two different randomly-selected instances. Unfortunately, in real-world
applications such as medical image classification, there is often class
imbalance, so randomly-selected instances mostly belong to the same majority
class, which precludes CSSL from learning inter-class differences. To address
this issue, we propose a novel framework, Iterative Self-paced Supervised
Contrastive Learning for MIL Representations (ItS2CLR), which improves the
learned representation by exploiting instance-level pseudo labels derived from
the bag-level labels. The framework employs a novel self-paced sampling
strategy to ensure the accuracy of pseudo labels. We evaluate ItS2CLR on three
medical datasets, showing that it improves the quality of instance-level pseudo
labels and representations, and outperforms existing MIL methods in terms of
both bag and instance level accuracy. Code is available at
https://github.com/Kangningthu/ItS2CLR
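The supervised contrastive objective at the core of this approach can be illustrated with a minimal NumPy sketch. This is an illustrative reimplementation of a standard supervised contrastive (SupCon-style) loss, not the authors' code: instances sharing a pseudo label act as positives for each other, so representations are pulled together within a class and pushed apart across classes, which is exactly what random negative sampling fails to do under class imbalance.

```python
import numpy as np

def supcon_loss(features, pseudo_labels, temperature=0.1):
    """Supervised contrastive loss over instance embeddings.
    Positives for each anchor are the other instances with the same
    (pseudo) label; all remaining instances serve as negatives."""
    # L2-normalize embeddings so similarities are cosine similarities
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(pseudo_labels)
    # exclude self-similarity from numerator and denominator
    logits_mask = ~np.eye(n, dtype=bool)
    # subtract the per-row max for numerical stability
    sim = sim - sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim) * logits_mask
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    labels = np.asarray(pseudo_labels)
    pos_mask = (labels[:, None] == labels[None, :]) & logits_mask
    # average log-probability over each anchor's positives
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (pos_mask * log_prob).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

With embeddings that cluster by class, assigning pseudo labels that match the clusters yields a lower loss than mismatched labels, which is what makes pseudo-label accuracy (and hence the paper's self-paced sampling) matter.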
Related papers
- Sm: enhanced localization in Multiple Instance Learning for medical imaging classification [11.727293641333713]
Multiple Instance Learning (MIL) is widely used in medical imaging classification to reduce the labeling effort.
We propose a novel, principled, and flexible mechanism to model local dependencies.
Our module leads to state-of-the-art performance in localization while being competitive or superior in classification.
arXiv Detail & Related papers (2024-10-04T09:49:28Z)
- Rethinking Multiple Instance Learning: Developing an Instance-Level Classifier via Weakly-Supervised Self-Training [14.16923025335549]
The multiple instance learning (MIL) problem is currently solved from either a bag-classification or an instance-classification perspective.
We formulate MIL as a semi-supervised instance classification problem, so that all the labeled and unlabeled instances can be fully utilized.
We propose a weakly-supervised self-training method, in which we utilize the positive bag labels to construct a global constraint.
arXiv Detail & Related papers (2024-08-09T01:53:41Z)
- SemiReward: A General Reward Model for Semi-supervised Learning [58.47299780978101]
Semi-supervised learning (SSL) has witnessed great progress with various improvements in the self-training framework with pseudo labeling.
The main challenge is distinguishing high-quality pseudo labels from those corrupted by confirmation bias.
We propose a Semi-supervised Reward framework (SemiReward) that predicts reward scores to evaluate pseudo labels and filter out low-quality ones.
arXiv Detail & Related papers (2023-10-04T17:56:41Z)
- Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Good Instance Classifier is All You Need [18.832471712088353]
We propose, for the first time under the MIL setting, an instance-level weakly supervised contrastive learning algorithm.
We also propose an accurate pseudo label generation method through prototype learning.
arXiv Detail & Related papers (2023-07-05T12:44:52Z)
- Disambiguated Attention Embedding for Multi-Instance Partial-Label Learning [68.56193228008466]
In many real-world tasks, the concerned objects can be represented as a multi-instance bag associated with a candidate label set.
Existing MIPL approaches follow the instance-space paradigm, assigning the augmented candidate label set of each bag to its instances and aggregating bag-level labels from instance-level labels.
We propose an intuitive algorithm named DEMIPL, i.e., Disambiguated attention Embedding for Multi-Instance Partial-Label learning.
arXiv Detail & Related papers (2023-05-26T13:25:17Z)
- Multi-Instance Partial-Label Learning: Towards Exploiting Dual Inexact Supervision [53.530957567507365]
In some real-world tasks, each training sample is associated with a candidate label set that contains one ground-truth label and some false positive labels.
In this paper, we formalize such problems as multi-instance partial-label learning (MIPL).
Existing multi-instance learning algorithms and partial-label learning algorithms are suboptimal for solving MIPL problems.
arXiv Detail & Related papers (2022-12-18T03:28:51Z)
- Learning with Partial Labels from Semi-supervised Perspective [28.735185883881172]
Partial Label (PL) learning refers to the task of learning from partially labeled data.
We propose a novel PL learning method, namely Partial Label learning with Semi-Supervised Perspective (PLSP).
PLSP significantly outperforms the existing PL baseline methods, especially on high ambiguity levels.
arXiv Detail & Related papers (2022-11-24T15:12:16Z)
- On Non-Random Missing Labels in Semi-Supervised Learning [114.62655062520425]
Semi-Supervised Learning (SSL) is fundamentally a missing label problem.
We explicitly incorporate "class" into SSL.
Our method not only significantly outperforms existing baselines but also surpasses other label bias removal SSL methods.
arXiv Detail & Related papers (2022-06-29T22:01:29Z)
- L2B: Learning to Bootstrap Robust Models for Combating Label Noise [52.02335367411447]
This paper introduces a simple and effective method, named Learning to Bootstrap (L2B)
It enables models to bootstrap themselves using their own predictions without being adversely affected by erroneous pseudo-labels.
It achieves this by dynamically adjusting the importance weight between real observed and generated labels, as well as between different samples through meta-learning.
arXiv Detail & Related papers (2022-02-09T05:57:08Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL).
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Fast learning from label proportions with small bags [0.0]
In learning from label proportions (LLP), the instances are grouped into bags, and the task is to learn an instance classifier given relative class proportions in training bags.
In this work, we focus on the case of small bags, which allows designing more efficient algorithms by explicitly considering all consistent label combinations.
arXiv Detail & Related papers (2021-10-07T13:11:18Z)
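The "small bags" idea in the entry above can be sketched directly: when the number of positives in a bag is known, the bag likelihood marginalizes over every instance-label assignment consistent with that proportion, which is tractable only because bags are small. The function name and interface below are illustrative, not taken from the paper.

```python
import numpy as np
from itertools import combinations

def bag_log_likelihood(instance_probs, pos_count):
    """Log-likelihood of a bag given per-instance positive probabilities
    and a known count of positive instances, summing over all label
    assignments consistent with that count (feasible for small bags)."""
    n = len(instance_probs)
    total = 0.0
    # enumerate every way to place `pos_count` positives among n instances
    for pos_idx in combinations(range(n), pos_count):
        p = 1.0
        for i in range(n):
            p *= instance_probs[i] if i in pos_idx else (1.0 - instance_probs[i])
        total += p
    return np.log(total)
```

For a bag of size 2 with instance probabilities (0.9, 0.1) and one known positive, the likelihood is 0.9*0.9 + 0.1*0.1 = 0.82; the number of terms grows combinatorially in bag size, which is why this enumeration is restricted to small bags.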
This list is automatically generated from the titles and abstracts of the papers in this site.