Contrastive Cross-Bag Augmentation for Multiple Instance Learning-based Whole Slide Image Classification
- URL: http://arxiv.org/abs/2508.03081v1
- Date: Tue, 05 Aug 2025 04:54:49 GMT
- Title: Contrastive Cross-Bag Augmentation for Multiple Instance Learning-based Whole Slide Image Classification
- Authors: Bo Zhang, Xu Xinan, Shuo Yan, Yu Bai, Zheng Zhang, Wufan Wang, Wendong Wang
- Abstract summary: We propose Contrastive Cross-Bag Augmentation ($C^2Aug$), which samples instances from all bags of the same class to increase the diversity of pseudo-bags. However, introducing new instances into a pseudo-bag increases the number of critical instances (e.g., tumor instances); this reduces the occurrence of pseudo-bags containing few critical instances, thereby limiting model performance. We therefore introduce a bag-level and group-level contrastive learning framework to enhance the discrimination of features with distinct semantic meanings.
- Score: 22.715117957704052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent pseudo-bag augmentation methods for Multiple Instance Learning (MIL)-based Whole Slide Image (WSI) classification sample instances from a limited number of bags, resulting in constrained diversity. To address this issue, we propose Contrastive Cross-Bag Augmentation ($C^2Aug$) to sample instances from all bags with the same class to increase the diversity of pseudo-bags. However, introducing new instances into the pseudo-bag increases the number of critical instances (e.g., tumor instances). This increase results in a reduced occurrence of pseudo-bags containing few critical instances, thereby limiting model performance, particularly on test slides with small tumor areas. To address this, we introduce a bag-level and group-level contrastive learning framework to enhance the discrimination of features with distinct semantic meanings, thereby improving model performance. Experimental results demonstrate that $C^2Aug$ consistently outperforms state-of-the-art approaches across multiple evaluation metrics.
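As a rough illustration of the cross-bag sampling step described in the abstract, the sketch below pools instance features from all bags of the same class and draws pseudo-bags from that pool. It assumes pre-extracted instance features; the function name `make_pseudo_bags` and all parameters are illustrative, not from the paper, and the contrastive-learning component is not shown.

```python
import numpy as np

def make_pseudo_bags(bags, labels, n_pseudo=4, bag_size=64, seed=0):
    """Cross-bag pseudo-bag sampling: instances are drawn from the pooled
    instance set of *all* bags sharing a class, instead of from a single
    source bag. Hypothetical sketch of the C^2Aug sampling idea."""
    rng = np.random.default_rng(seed)
    pseudo_bags, pseudo_labels = [], []
    for cls in np.unique(labels):
        # Pool instance features (n_i, d) from every bag with this label.
        pool = np.concatenate([b for b, y in zip(bags, labels) if y == cls])
        for _ in range(n_pseudo):
            idx = rng.choice(len(pool), size=min(bag_size, len(pool)),
                             replace=False)
            pseudo_bags.append(pool[idx])
            pseudo_labels.append(cls)
    return pseudo_bags, np.array(pseudo_labels)
```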
Related papers
- Nearly Optimal Sample Complexity for Learning with Label Proportions [54.67830198790247]
We investigate Learning from Label Proportions (LLP), a partial-information setting where examples in a training set are grouped into bags.
Despite the partial observability, the goal is still to achieve small regret at the level of individual examples.
We give results on the sample complexity of LLP under square loss, showing that our sample complexity is essentially optimal.
arXiv Detail & Related papers (2025-05-08T15:45:23Z)
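For context on the LLP setting above, here is a minimal sketch of bag-level training under square loss: the learner sees only each bag's label proportion and penalizes the gap to the bag's mean prediction. It assumes a binary classifier `model` returning logits; all names are illustrative.

```python
import torch

def llp_square_loss(model, bag_x, bag_proportion):
    """Square loss at the bag level for Learning from Label Proportions:
    only the fraction of positives per bag is observed, so we match the
    bag's mean predicted positive probability to that fraction."""
    p = torch.sigmoid(model(bag_x)).mean()
    return (p - bag_proportion) ** 2
```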
- cDP-MIL: Robust Multiple Instance Learning via Cascaded Dirichlet Process [23.266122629592807]
Multiple instance learning (MIL) has been extensively applied to whole slide histopathology image (WSI) analysis.
The existing aggregation strategy in MIL, which primarily relies on the first-order distance between instances, fails to accurately approximate the true feature distribution of each instance.
We propose a new Bayesian nonparametric framework for multiple instance learning, which adopts a cascade of Dirichlet processes (cDP) to incorporate the instance-to-bag characteristic of the WSIs.
arXiv Detail & Related papers (2024-07-16T07:28:39Z)
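As a loose, simplified illustration of Dirichlet-process-style aggregation (not the paper's cascaded construction), one can fit a truncated DP mixture over a bag's instance features and pool the cluster means by their inferred mixture weights:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dp_bag_embedding(instance_feats, max_components=8, seed=0):
    """Fit a truncated Dirichlet-process mixture over one bag's instance
    features (n_instances, d) and pool the cluster means weighted by the
    inferred mixture weights. A rough stand-in for cDP aggregation."""
    dp = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        random_state=seed,
    ).fit(instance_feats)
    return dp.weights_ @ dp.means_  # (d,) bag-level embedding
```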
- Dynamic Policy-Driven Adaptive Multi-Instance Learning for Whole Slide Image Classification [26.896926631411652]
Multi-Instance Learning (MIL) has shown impressive performance for histopathology whole slide image (WSI) analysis using bags or pseudo-bags.
Existing MIL-based technologies suffer from one or more of the following problems: 1) high storage and intensive pre-processing requirements for numerous instances (sampling); 2) potential over-fitting with limited knowledge to predict bag labels (feature representation); 3) pseudo-bag counts and prior biases that affect model robustness and generalizability (decision-making).
arXiv Detail & Related papers (2024-03-09T04:43:24Z)
- Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Good Instance Classifier is All You Need [18.832471712088353]
We propose, for the first time under the MIL setting, an instance-level weakly supervised contrastive learning algorithm.
We also propose an accurate pseudo label generation method through prototype learning.
arXiv Detail & Related papers (2023-07-05T12:44:52Z)
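A generic sketch of the prototype-based pseudo-label generation mentioned in the entry above, assuming class prototypes are already available (e.g., as means of confidently classified instance features); this is not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def prototype_pseudo_labels(feats, prototypes, proto_labels):
    """Assign each instance feature the label of its nearest prototype
    under cosine similarity. feats: (n, d); prototypes: (k, d);
    proto_labels: (k,). Generic prototype-learning sketch."""
    sims = F.normalize(feats, dim=1) @ F.normalize(prototypes, dim=1).T
    return proto_labels[sims.argmax(dim=1)]
```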
- BEL: A Bag Embedding Loss for Transformer enhances Multiple Instance Whole Slide Image Classification [39.53132774980783]
Bag Embedding Loss (BEL) forces the model to learn a discriminative bag-level representation by minimizing the distance between bag embeddings of the same class and maximizing the distance between different classes.
We show that with BEL, TransMIL outperforms the baseline models on both datasets.
arXiv Detail & Related papers (2023-03-02T16:02:55Z)
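A hedged sketch of a BEL-style objective over bag embeddings: same-class bags are pulled together, different-class bags are pushed beyond a margin. The margin formulation is an assumption; the paper's exact loss may differ.

```python
import torch

def bag_embedding_loss(bag_emb, bag_labels, margin=1.0):
    """Contrastive-style loss on bag-level embeddings (n_bags, d):
    minimize same-class pairwise distance, hinge on different-class
    distance. Illustrative BEL-like objective, not the paper's code.
    Assumes the batch contains both same-class and cross-class pairs."""
    d = torch.cdist(bag_emb, bag_emb)              # (n, n) distances
    same = bag_labels[:, None] == bag_labels[None, :]
    off_diag = ~torch.eye(len(bag_emb), dtype=torch.bool)
    pos = d[same & off_diag]                       # same-class pairs
    neg = torch.clamp(margin - d[~same], min=0)    # different-class pairs
    return pos.mean() + neg.mean()
```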
- Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning [22.07044031105496]
Learning representations for individual instances when only bag-level labels are available is a challenge in multiple instance learning (MIL).
We propose a novel framework, Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR).
It improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels.
arXiv Detail & Related papers (2022-10-17T21:43:32Z)
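A compact sketch of the two ingredients described in the ItS2CLR entry above, under assumed names: a supervised contrastive loss driven by instance pseudo-labels derived from bag labels, plus a self-paced selection of the most confident instances (the kept fraction would grow across iterations).

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, pseudo_labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized embeddings, with
    bag-derived instance pseudo-labels as supervision. Averages the
    log-probabilities over all positive pairs (a common SupCon variant)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau
    sim.fill_diagonal_(float("-inf"))             # drop self-pairs
    pos = pseudo_labels[:, None] == pseudo_labels[None, :]
    pos.fill_diagonal_(False)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -log_prob[pos].mean()

def self_paced_select(confidence, keep_frac):
    """Keep the most confident fraction of pseudo-labeled instances;
    keep_frac is scheduled upward across training iterations."""
    k = max(1, int(keep_frac * len(confidence)))
    return confidence.topk(k).indices
```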
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle the class-collision issue in instance-wise contrastive learning.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Breadcrumbs: Adversarial Class-Balanced Sampling for Long-tailed Recognition [95.93760490301395]
The problem of long-tailed recognition, where the number of examples per class is highly unbalanced, is considered.
It is hypothesized that the overfitting induced by class-balanced sampling is due to the repeated sampling of examples and can be addressed by feature space augmentation.
A new feature augmentation strategy, EMANATE, based on back-tracking of features across epochs during training, is proposed.
A new sampling procedure, Breadcrumb, is then introduced to implement adversarial class-balanced sampling without extra computation.
arXiv Detail & Related papers (2021-05-01T00:21:26Z)
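A speculative sketch of the feature back-tracking idea (EMANATE) behind Breadcrumb: cache each sample's features from recent epochs so that re-balancing tail classes can reuse distinct feature versions instead of identical re-sampled vectors. The class name and structure are assumptions, not the authors' implementation.

```python
from collections import defaultdict, deque
import numpy as np

class FeatureBacktracker:
    """Cache per-sample features across the last few epochs (EMANATE-style
    back-tracking, sketched). Tail classes can then be over-sampled in
    feature space without repeating identical vectors."""
    def __init__(self, epochs_kept=3):
        self._hist = defaultdict(lambda: deque(maxlen=epochs_kept))

    def update(self, sample_id, feat):
        self._hist[sample_id].append(np.array(feat, copy=True))

    def pool(self, sample_ids):
        """All cached feature versions for the given (tail-class) samples."""
        return [f for sid in sample_ids for f in self._hist[sid]]
```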
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses constrain the clustering results of mini-batch samples at both the sample and class levels.
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
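A sketch of the doubly contrastive idea described above: with (N, C) softmax outputs for a batch and its augmented version, rows (per-sample class distributions) form the sample view and columns (per-class sample distributions) form the class view, with an InfoNCE loss applied to each. Hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, tau=0.5):
    """InfoNCE between two aligned sets of vectors; matched rows are
    the positive pairs."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.T / tau
    return F.cross_entropy(logits, torch.arange(len(a)))

def dcdc_loss(p, p_aug, tau=0.5):
    """Doubly contrastive loss, sketched: p and p_aug are (N, C) softmax
    outputs of the original batch and its augmentation."""
    sample_view = info_nce(p, p_aug, tau)       # rows: class distributions
    class_view = info_nce(p.T, p_aug.T, tau)    # cols: sample distributions
    return sample_view + class_view
```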
- Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses them to define a new adversarial training algorithm for SSL, denoted CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z)
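A hedged sketch of the adversarial-example construction for contrastive learning (CLAE-style): one view is perturbed by a single FGSM step in the direction that increases the InfoNCE loss, producing harder positive pairs for training. The paper's exact recipe may differ.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_view(encoder, x1, x2, eps=0.03, tau=0.5):
    """Perturb view x1 with one FGSM step that *increases* the InfoNCE
    loss against view x2, yielding a harder positive pair. Sketch of
    the CLAE idea, not the authors' implementation."""
    x1 = x1.clone().detach().requires_grad_(True)
    z1 = F.normalize(encoder(x1), dim=1)
    z2 = F.normalize(encoder(x2), dim=1)
    logits = z1 @ z2.T / tau
    loss = F.cross_entropy(logits, torch.arange(len(x1)))
    grad, = torch.autograd.grad(loss, x1)
    return (x1 + eps * grad.sign()).detach()
```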
- Sparse Network Inversion for Key Instance Detection in Multiple Instance Learning [24.66638752977373]
Multiple Instance Learning (MIL) involves predicting a single label for a bag of instances, given positive or negative labels at bag-level.
The attention-based deep MIL model is a recent advance in both bag-level classification and key instance detection (KID).
We present a method to improve the attention-based deep MIL model in the task of KID.
arXiv Detail & Related papers (2020-09-07T07:01:59Z)
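For background on the attention-based deep MIL model referenced above, here is a minimal pooling module in the style of Ilse et al.: attention weights aggregate instances into a bag embedding and double as key-instance scores. The sparse-network-inversion refinement itself is not sketched.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-based MIL pooling: per-instance attention weights form
    the bag embedding and serve as key-instance-detection scores."""
    def __init__(self, d_in, d_attn=128):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(d_in, d_attn), nn.Tanh(), nn.Linear(d_attn, 1))
        self.clf = nn.Linear(d_in, 1)

    def forward(self, x):                        # x: (n_instances, d_in)
        a = torch.softmax(self.attn(x), dim=0)   # (n_instances, 1)
        bag = (a * x).sum(dim=0)                 # weighted bag embedding
        return self.clf(bag), a.squeeze(-1)      # bag logit, KID scores
```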
- K-Shot Contrastive Learning of Visual Features with Multiple Instance Augmentations [67.46036826589467]
$K$-Shot Contrastive Learning is proposed to investigate sample variations within individual instances.
It aims to combine the advantages of inter-instance discrimination, learning discriminative features that distinguish between different instances, with intra-instance variations captured by the multiple augmentations of each instance.
Experimental results demonstrate that the proposed $K$-shot contrastive learning achieves performance superior to state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2020-07-27T04:56:41Z)
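A sketch of the K-shot idea in the entry above: each instance contributes K augmented views, and every ordered cross-view pair of the same instance is treated as a positive in an InfoNCE loss. It assumes an `encoder` producing feature vectors; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def k_shot_contrastive_loss(encoder, views, tau=0.5):
    """views: list of K batches, each (N, ...), holding K different
    augmentations of the same N instances. All cross-view pairs of an
    instance act as positives. Illustrative K-shot CL sketch."""
    z = [F.normalize(encoder(v), dim=1) for v in views]
    targets = torch.arange(len(views[0]))
    loss, k = 0.0, len(views)
    for i in range(k):
        for j in range(k):
            if i != j:                     # every ordered cross-view pair
                loss = loss + F.cross_entropy(z[i] @ z[j].T / tau, targets)
    return loss / (k * (k - 1))
```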
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.