Contrastive-Based Deep Embeddings for Label Noise-Resilient Histopathology Image Classification
- URL: http://arxiv.org/abs/2404.07605v1
- Date: Thu, 11 Apr 2024 09:47:52 GMT
- Authors: Lucas Dedieu, Nicolas Nerrienet, Adrien Nivaggioli, Clara Simmat, Marceau Clavel, Arnaud Gauthier, Stéphane Sockeel, Rémy Peyret
- Abstract summary: Noisy labels represent a critical challenge in histopathology image classification.
Deep neural networks can easily overfit label noise, leading to severe degradations in model performance.
We exhibit the label noise resilience property of embeddings extracted from foundation models trained in a self-supervised contrastive manner.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in deep learning have proven highly effective in medical image classification, notably within histopathology. However, noisy labels represent a critical challenge in histopathology image classification, where accurate annotations are vital for training robust deep learning models. Indeed, deep neural networks can easily overfit label noise, leading to severe degradations in model performance. While numerous public pathology foundation models have emerged recently, none have evaluated their resilience to label noise. Through thorough empirical analyses across multiple datasets, we exhibit the label noise resilience property of embeddings extracted from foundation models trained in a self-supervised contrastive manner. We demonstrate that training with such embeddings substantially enhances label noise robustness when compared to non-contrastive-based ones as well as commonly used noise-resilient methods. Our results unequivocally underline the superiority of contrastive learning in effectively mitigating the label noise challenge. Code is publicly available at https://github.com/LucasDedieu/NoiseResilientHistopathology.
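The evaluation protocol described in the abstract lends itself to a compact sketch. The snippet below is a minimal illustration, not the authors' code: it trains a lightweight classifier on frozen embeddings under increasing symmetric label noise and reports clean test accuracy. The random features stand in for real foundation-model embeddings (e.g., from a contrastive ViT); the dataset size, dimensionality, and noise rates are illustrative assumptions.

```python
# Minimal sketch of the noise-robustness protocol (assumptions noted above):
# fit a lightweight classifier on frozen embeddings whose training labels
# are corrupted at increasing rates, then score on a clean test split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for N histopathology tiles embedded into 512-dim vectors.
X, y = make_classification(n_samples=5000, n_features=512,
                           n_informative=64, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

def corrupt(labels, rate, n_classes, rng):
    """Flip a fraction `rate` of labels uniformly at random over all
    classes (possibly to the true one), a common symmetric-noise model."""
    noisy = labels.copy()
    flip = rng.random(len(labels)) < rate
    noisy[flip] = rng.integers(0, n_classes, flip.sum())
    return noisy

for rate in (0.0, 0.2, 0.4, 0.6):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, corrupt(y_tr, rate, 4, rng))
    print(f"noise={rate:.1f}  test acc={clf.score(X_te, y_te):.3f}")
```

The paper's claim, in these terms, is that embeddings from contrastive self-supervised encoders keep the clean-test accuracy curve flatter as the noise rate grows than non-contrastive embeddings do.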
Related papers
- NoisyAG-News: A Benchmark for Addressing Instance-Dependent Noise in Text Classification (arXiv, 2024-07-09)
Existing research on learning with noisy labels predominantly focuses on synthetic noise patterns.
We constructed a benchmark dataset to better understand label noise in real-world text classification settings.
Our findings reveal that while pre-trained models are resilient to synthetic noise, they struggle against instance-dependent noise.
- Label-noise-tolerant medical image classification via self-attention and self-supervised learning (arXiv, 2023-06-16)
We propose a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification.
Specifically, we incorporate contrastive learning and intra-group attention mixup strategies into the vanilla supervised learning.
Rigorous experiments validate that our noise-robust method with contrastive learning and attention mixup can effectively handle label noise.
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training (arXiv, 2022-05-10)
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ a self-ensemble model with a noisy-label filter to efficiently separate clean from noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
- Hard Sample Aware Noise Robust Learning for Histopathology Image Classification (arXiv, 2021-12-05)
We introduce a novel hard sample aware noise robust learning method for histopathology image classification.
To distinguish the informative hard samples from the harmful noisy ones, we build an easy/hard/noisy (EHN) detection model.
We propose a noise suppressing and hard enhancing (NSHE) scheme to train the noise robust model.
- Improving Medical Image Classification with Label Noise Using Dual-uncertainty Estimation (arXiv, 2021-02-28)
We discuss and define the two common types of label noise in medical images.
We propose an uncertainty estimation-based framework to handle these two types of label noise in medical image classification.
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model (arXiv, 2021-01-14)
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
- Noisy Labels Can Induce Good Representations (arXiv, 2020-12-23)
We study how architecture affects learning with noisy labels.
We show that training with noisy labels can induce useful hidden representations, even when the model generalizes poorly.
This finding leads to a simple method to improve models trained on noisy labels.
- Attention-Aware Noisy Label Learning for Image Classification (arXiv, 2020-09-30)
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
- Learning Not to Learn in the Presence of Noisy Labels (arXiv, 2020-02-16)
We show that a new class of loss functions called the gambler's loss provides strong robustness to label noise across various levels of corruption.
We show that training with this loss function encourages the model to "abstain" from learning on the data points with noisy labels.
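For concreteness, the gambler's loss in the last entry admits a short sketch. Assuming the standard Deep Gamblers formulation this line of work builds on, the model predicts an extra abstention output whose probability mass is discounted by a payoff hyperparameter o; the code below is a minimal NumPy version under that assumption, not the authors' implementation.

```python
# Minimal sketch of the gambler's loss (assumed Deep Gamblers formulation):
# the model outputs K class scores plus one abstention score, and abstaining
# earns a guaranteed but discounted payoff controlled by o (1 < o <= K).
import numpy as np

def gamblers_loss(logits, targets, o=2.5):
    """logits: (N, K+1) scores, last column = abstention; targets: (N,) ints."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_true = p[np.arange(len(targets)), targets]
    p_abstain = p[:, -1]
    # Abstention mass counts toward the target with weight 1/o, so the model
    # can "not learn" suspicious (likely mislabeled) points by abstaining.
    return -np.log(p_true + p_abstain / o).mean()

# Toy usage: three samples, K=3 classes plus one abstention column.
logits = np.array([[4.0, 0.0, 0.0, 0.0],    # confident, correct
                   [0.0, 0.0, 0.0, 4.0],    # abstains
                   [0.0, 4.0, 0.0, 0.0]])   # confident, wrong
print(gamblers_loss(logits, np.array([0, 0, 0]), o=2.5))
```

Values of o near 1 make abstention nearly free, so the model abstains aggressively; larger o shrinks the abstention reward and pushes the model to commit to a class.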