Learning to Segment from Noisy Annotations: A Spatial Correction
Approach
- URL: http://arxiv.org/abs/2308.02498v1
- Date: Fri, 21 Jul 2023 00:27:40 GMT
- Title: Learning to Segment from Noisy Annotations: A Spatial Correction
Approach
- Authors: Jiachen Yao, Yikai Zhang, Songzhu Zheng, Mayank Goswami, Prateek
Prasanna, Chao Chen
- Abstract summary: Noisy labels can significantly affect the performance of deep neural networks (DNNs).
We propose a novel Markov model for noisy segmentation annotations that encodes both spatial correlation and bias.
Our approach outperforms current state-of-the-art methods on both synthetic and real-world noisy annotations.
- Score: 12.604673584405385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Noisy labels can significantly affect the performance of deep neural networks
(DNNs). In medical image segmentation tasks, annotations are error-prone due to
the high demands on annotation time and annotator expertise. Existing
methods mostly assume that the label noise at different pixels is i.i.d.
However, segmentation label noise usually has strong spatial correlation and
a prominent bias in its distribution. In this paper, we propose a novel Markov
model for noisy segmentation annotations that encodes both spatial correlation
and bias. Further, to mitigate such label noise, we propose a label correction
method that progressively recovers the true labels. We provide theoretical guarantees
of the correctness of the proposed method. Experiments show that our approach
outperforms current state-of-the-art methods on both synthetic and real-world
noisy annotations.
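The paper's Markov noise model itself is not detailed in this summary. As a minimal, hypothetical sketch of what "spatially correlated" label noise means (the function and parameters below are illustrative, not from the paper), the following toy corruption flips only pixels adjacent to the true object boundary of a binary mask, so errors cluster along the contour rather than being i.i.d. across pixels:

```python
import random

def neighbors(i, j, h, w):
    """4-connected neighbors of (i, j) inside an h x w grid."""
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < h and 0 <= nj < w:
            yield ni, nj

def corrupt_boundary(mask, p_flip=0.5, seed=0):
    """Corrupt a binary mask with spatially correlated noise:
    only pixels touching the true foreground/background boundary
    may flip, so errors concentrate along the object contour."""
    rng = random.Random(seed)
    h, w = len(mask), len(mask[0])
    noisy = [row[:] for row in mask]
    for i in range(h):
        for j in range(w):
            on_boundary = any(mask[ni][nj] != mask[i][j]
                              for ni, nj in neighbors(i, j, h, w))
            if on_boundary and rng.random() < p_flip:
                noisy[i][j] = 1 - mask[i][j]
    return noisy

# A 6x6 mask with a 4x4 foreground square; with p_flip=1.0 every
# boundary-adjacent pixel flips while interior pixels stay intact.
clean = [[1 if 1 <= i <= 4 and 1 <= j <= 4 else 0 for j in range(6)]
         for i in range(6)]
noisy = corrupt_boundary(clean, p_flip=1.0)
```

Under a noise model like this, a per-pixel i.i.d. assumption is clearly violated, which is the motivation the abstract gives for modeling spatial correlation explicitly.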
Related papers
- Extracting Clean and Balanced Subset for Noisy Long-tailed Classification [66.47809135771698]
We develop a novel pseudo labeling method using class prototypes from the perspective of distribution matching.
By setting a manually-specified probability measure, we can reduce the side-effects of noisy and long-tailed data simultaneously.
Our method can extract this class-balanced subset with clean labels, which brings effective performance gains for long-tailed classification with label noise.
arXiv Detail & Related papers (2024-04-10T07:34:37Z)
- Clean Label Disentangling for Medical Image Segmentation with Noisy Labels [25.180056839942345]
Current medical image segmentation methods suffer from incorrect annotations, which is known as the noisy label issue.
We propose a class-balanced sampling strategy to tackle the class-imbalanced problem.
We extend our clean label disentangling framework to a new noisy feature-aided clean label disentangling framework.
arXiv Detail & Related papers (2023-11-28T07:54:27Z)
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label models with partial positive labels (MLR-PPL) is attracting increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- Adaptive Early-Learning Correction for Segmentation from Noisy Annotations [13.962891776039369]
We study the learning dynamics of deep segmentation networks trained on inaccurately-annotated data.
We propose a new method for segmentation from noisy annotations with two key elements.
arXiv Detail & Related papers (2021-10-07T18:46:23Z)
- Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting [45.58217741522973]
We show that label noise exists in adversarial training.
Such label noise is due to the mismatch between the true label distribution of adversarial examples and the label inherited from clean examples.
We propose a method to automatically calibrate the label to address the label noise and robust overfitting.
arXiv Detail & Related papers (2021-10-07T01:15:06Z)
- Instance-dependent Label-noise Learning under a Structural Causal Model [92.76400590283448]
Label noise degrades the performance of deep learning algorithms.
By leveraging a structural causal model, we propose a novel generative approach for instance-dependent label-noise learning.
arXiv Detail & Related papers (2021-09-07T10:42:54Z)
- Superpixel-guided Iterative Learning from Noisy Labels for Medical Image Segmentation [24.557755528031453]
We develop a robust iterative learning strategy that combines noise-aware training of segmentation network and noisy label refinement.
Experiments on two benchmarks show that our method outperforms recent state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-21T14:27:36Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
- A Second-Order Approach to Learning with Instance-Dependent Label Noise [58.555527517928596]
The presence of label noise often misleads the training of deep neural networks.
We show that the errors in human-annotated labels are more likely to be dependent on the difficulty levels of tasks.
arXiv Detail & Related papers (2020-12-22T06:36:58Z)
- Error-Bounded Correction of Noisy Labels [17.510654621245656]
We show that the prediction of a noisy classifier can indeed be a good indicator of whether a training example's label is clean.
Based on the theoretical result, we propose a novel algorithm that corrects the labels based on the noisy classifier prediction.
We incorporate our label correction algorithm into the training of deep neural networks and train models that achieve superior testing performance on multiple public datasets.
arXiv Detail & Related papers (2020-11-19T19:23:23Z)
- Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels [98.13491369929798]
We propose a framework called Class2Simi, which transforms data points with noisy class labels to data pairs with noisy similarity labels.
Class2Simi is computationally efficient: the transformation is performed on-the-fly within mini-batches, and it only changes the loss on top of the model's predictions into a pairwise form.
arXiv Detail & Related papers (2020-06-14T07:55:32Z)
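The summary above describes Class2Simi as a transform from per-example class labels to pairwise similarity labels. A minimal sketch of that transformation under the stated description (the in-batch, loss-level machinery of the actual method is not reproduced; the function name is illustrative):

```python
from itertools import combinations

def class_to_simi(labels):
    """Map per-example class labels to pairwise similarity labels:
    a pair of examples gets label 1 if they share a class, else 0.
    Noisy class labels induce noisy similarity labels, which the
    method then learns from with a pairwise loss."""
    return [((i, j), 1 if yi == yj else 0)
            for (i, yi), (j, yj) in combinations(enumerate(labels), 2)]

# Three examples with (possibly noisy) class labels 0, 0, 1 yield
# three index pairs with similar/dissimilar labels.
pairs = class_to_simi([0, 0, 1])
```

Because the pairs are formed within a mini-batch, the transformation adds no preprocessing pass over the dataset, matching the on-the-fly efficiency claim in the summary.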
This list is automatically generated from the titles and abstracts of the papers in this site.