Distilling effective supervision for robust medical image segmentation with noisy labels
- URL: http://arxiv.org/abs/2106.11099v1
- Date: Mon, 21 Jun 2021 13:33:38 GMT
- Title: Distilling effective supervision for robust medical image segmentation with noisy labels
- Authors: Jialin Shi and Ji Wu
- Abstract summary: We propose a novel framework that addresses segmentation with noisy labels by distilling effective supervision information from both the pixel and image levels.
In particular, we explicitly estimate the uncertainty of every pixel as a pixel-wise noise estimate.
We also present an image-level robust learning method that accommodates additional information as a complement to pixel-level learning.
- Score: 21.68138582276142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the success of deep learning methods in medical image segmentation
tasks, human-level performance relies on massive training data with high-quality
annotations, which are expensive and time-consuming to collect. In practice,
annotations often contain label noise, which leads to suboptimal performance of the
learned models. Two prominent directions for segmentation learning with noisy labels
are pixel-wise noise-robust training and image-level noise-robust training. In this
work, we propose a novel framework that addresses segmentation with noisy labels by
distilling effective supervision information at both the pixel and image levels. In
particular, we explicitly estimate the uncertainty of every pixel as a pixel-wise
noise estimate, and propose pixel-wise robust learning that uses both the original
labels and pseudo labels. Furthermore, we present an image-level robust learning
method that accommodates additional information as a complement to pixel-level
learning. We conduct extensive experiments on both simulated and real-world noisy
datasets. The results demonstrate that our method outperforms state-of-the-art
baselines for medical image segmentation with noisy labels.
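The abstract does not spell out the exact formulation, but the idea of pixel-wise robust learning driven by a per-pixel uncertainty estimate can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the use of normalized prediction entropy as the uncertainty measure, the argmax pseudo labels, and the convex combination of the two cross-entropy terms are all assumptions.

```python
# Minimal sketch (not the paper's exact method): pixel-wise robust learning that
# weighs the given (possibly noisy) labels against model-generated pseudo labels
# using an entropy-based per-pixel uncertainty estimate. The entropy proxy and
# the convex combination below are assumptions for illustration.
import math

import torch
import torch.nn.functional as F


def uncertainty_weighted_loss(logits, noisy_labels):
    """logits: (B, C, H, W) network outputs; noisy_labels: (B, H, W) int64 labels."""
    probs = torch.softmax(logits, dim=1)                                  # (B, C, H, W)

    # Pixel-wise uncertainty: predictive entropy, normalized to [0, 1].
    num_classes = logits.shape[1]
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1)      # (B, H, W)
    uncertainty = entropy / math.log(num_classes)

    # Pseudo labels from the current prediction (argmax carries no gradient).
    pseudo_labels = probs.argmax(dim=1)                                   # (B, H, W)

    loss_orig = F.cross_entropy(logits, noisy_labels, reduction='none')   # (B, H, W)
    loss_pseudo = F.cross_entropy(logits, pseudo_labels, reduction='none')

    # Low-uncertainty pixels trust the given label; high-uncertainty pixels
    # lean on the pseudo label instead.
    loss = (1.0 - uncertainty) * loss_orig + uncertainty * loss_pseudo
    return loss.mean()
```

In practice the uncertainty estimate could also come from Monte Carlo dropout or an ensemble rather than a single softmax pass, but the weighting logic stays the same.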
Related papers
- Learning Camouflaged Object Detection from Noisy Pseudo Label [60.9005578956798]
This paper introduces the first weakly semi-supervised Camouflaged Object Detection (COD) method.
It aims for budget-efficient and high-precision camouflaged object segmentation with an extremely limited number of fully labeled images.
We propose a noise correction loss that helps the model learn correct pixels during the early stage of training.
When using only 20% of the fully labeled data, our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-07-18T04:53:51Z)
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ a self-ensemble model with a noisy-label filter to efficiently separate clean samples from noisy ones.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
- Semi-supervised Contrastive Learning for Label-efficient Medical Image Segmentation [11.935891325600952]
We propose a supervised local contrastive loss that leverages limited pixel-wise annotations to pull pixels with the same label together in the embedding space (a rough sketch of this type of loss appears after this list).
With different amounts of labeled data, our method consistently outperforms state-of-the-art contrast-based methods and other semi-supervised learning techniques.
arXiv Detail & Related papers (2021-09-15T16:23:48Z)
- Superpixel-guided Iterative Learning from Noisy Labels for Medical Image Segmentation [24.557755528031453]
We develop a robust iterative learning strategy that combines noise-aware training of the segmentation network with noisy-label refinement.
Experiments on two benchmarks show that our method outperforms recent state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-21T14:27:36Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for ultra-high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Self-Ensembling Contrastive Learning for Semi-Supervised Medical Image Segmentation [6.889911520730388]
We aim to boost the performance of semi-supervised learning for medical image segmentation with limited labels.
We learn latent representations directly at the feature level by imposing a contrastive loss on unlabeled images.
We conduct experiments on an MRI and a CT segmentation dataset and demonstrate that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-05-27T03:27:58Z)
- Annotation-Efficient Learning for Medical Image Segmentation based on Noisy Pseudo Labels and Adversarial Learning [12.781598229608983]
We propose an annotation-efficient learning framework for medical image segmentation.
We use an improved Cycle-Consistent Generative Adversarial Network (GAN) to learn from a set of unpaired medical images and auxiliary masks.
We validated our framework in two situations: objects with a simple shape, such as the optic disc in fundus images and the fetal head in ultrasound images, and complex structures, such as the lung in X-ray images and the liver in CT images.
arXiv Detail & Related papers (2020-12-29T03:22:41Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) trained on large-scale labeled data have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Data-driven Meta-set Based Fine-Grained Visual Classification [61.083706396575295]
We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small amount of clean meta-set, we train a selection net in a meta-learning manner to distinguish in- and out-of-distribution noisy images.
arXiv Detail & Related papers (2020-08-06T03:04:16Z)
- Learning Not to Learn in the Presence of Noisy Labels [104.7655376309784]
We show that a new class of loss functions, the gambler's loss, provides strong robustness to label noise across various levels of corruption.
Training with this loss encourages the model to "abstain" from learning on data points with noisy labels (a minimal sketch of this loss appears after this list).
arXiv Detail & Related papers (2020-02-16T09:12:27Z)
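As a reference point for the last entry, here is a minimal sketch of a gambler's-loss-style objective, assuming the standard deep-gamblers formulation with an extra abstention output and a payoff hyperparameter; it illustrates the idea rather than reproducing that paper's code, and the default payoff value is an assumption.

```python
# Sketch of a gambler's-loss-style objective (assumed standard formulation):
# the network predicts C class probabilities plus one abstention probability,
# and the loss rewards abstaining instead of fitting a suspect label.
import torch


def gamblers_loss(logits, targets, payoff=2.5):
    """logits: (N, C + 1), where the last channel is the abstention score.
    targets: (N,) int64 class indices in [0, C). Payoff o is assumed to satisfy 1 < o <= C."""
    probs = torch.softmax(logits, dim=1)
    class_probs = probs[:, :-1]                                   # (N, C)
    abstain_prob = probs[:, -1]                                   # (N,)
    p_target = class_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # -log(p_y + p_abstain / o): abstaining lowers the loss on hard or noisy points.
    return -torch.log((p_target + abstain_prob / payoff).clamp_min(1e-8)).mean()
```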
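And, as referenced in the Semi-supervised Contrastive Learning entry above, a rough sketch of a supervised pixel-level contrastive (SupCon-style) loss that pulls same-label pixel embeddings together. The sampling of labeled pixels into a flat batch, the temperature value, and the SupCon form itself are assumptions for illustration, not that paper's exact formulation.

```python
# Sketch of a supervised pixel-level contrastive loss: embeddings of sampled
# labeled pixels with the same label are pulled together, others pushed apart.
import torch
import torch.nn.functional as F


def pixel_supcon_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, D) features of N sampled labeled pixels; labels: (N,) int64."""
    z = F.normalize(embeddings, dim=1)                            # unit-norm features
    sim = torch.matmul(z, z.t()) / temperature                    # (N, N) similarities

    # Mask out self-similarity from both numerator and denominator.
    n = z.shape[0]
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)    # row-wise log-softmax

    # Positive pairs: pixels sharing a label (excluding the anchor itself).
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp_min(1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts)
    # Only anchors that have at least one positive contribute to the loss.
    return loss[pos_mask.any(dim=1)].mean()
```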
This list is automatically generated from the titles and abstracts of the papers in this site.