Adaptive Early-Learning Correction for Segmentation from Noisy
Annotations
- URL: http://arxiv.org/abs/2110.03740v1
- Date: Thu, 7 Oct 2021 18:46:23 GMT
- Title: Adaptive Early-Learning Correction for Segmentation from Noisy
Annotations
- Authors: Sheng Liu, Kangning Liu, Weicheng Zhu, Yiqiu Shen, Carlos
Fernandez-Granda
- Abstract summary: We study the learning dynamics of deep segmentation networks trained on inaccurately-annotated data.
We propose a new method for segmentation from noisy annotations with two key elements.
- Score: 13.962891776039369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning in the presence of noisy annotations has been studied
extensively in classification, but much less in segmentation tasks. In this
work, we study the learning dynamics of deep segmentation networks trained on
inaccurately-annotated data. We discover a phenomenon that has been previously
reported in the context of classification: the networks tend to first fit the
clean pixel-level labels during an "early-learning" phase, before eventually
memorizing the false annotations. However, in contrast to classification,
memorization in segmentation does not arise simultaneously for all semantic
categories. Inspired by these findings, we propose a new method for
segmentation from noisy annotations with two key elements. First, we detect the
beginning of the memorization phase separately for each category during
training. This allows us to adaptively correct the noisy annotations in order
to exploit early learning. Second, we incorporate a regularization term that
enforces consistency across scales to boost robustness against annotation
noise. Our method outperforms standard approaches on a medical-imaging
segmentation task where noise is synthesized to mimic human annotation
errors. It also provides robustness to realistic noisy annotations present in
weakly-supervised semantic segmentation, achieving state-of-the-art results on
PASCAL VOC 2012.
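A minimal PyTorch-style sketch of the two elements above follows. The detection heuristic, confidence threshold, and scale pair are illustrative assumptions, not the paper's exact procedure.
```python
# Sketch: (1) per-category detection of the memorization onset, used to
# trigger adaptive label correction, and (2) a cross-scale consistency
# term. All thresholds and the detection heuristic are assumptions.
import torch
import torch.nn.functional as F

def detect_memorization_onset(iou_history, patience=3):
    """Flag a category once its training IoU stops improving.

    iou_history: per-epoch IoU values for one semantic category.
    A sustained drop after the early-learning peak is treated as the
    onset of memorization (a simple proxy for the paper's criterion).
    """
    if len(iou_history) <= patience:
        return False
    best = max(iou_history[:-patience])
    return all(v < best for v in iou_history[-patience:])

def correct_labels(logits, labels, category, conf_thresh=0.8):
    """Overwrite annotations of one category with confident predictions."""
    probs = F.softmax(logits, dim=1)   # (B, K, H, W)
    conf, pred = probs.max(dim=1)      # both (B, H, W)
    replace = (labels == category) & (conf > conf_thresh)
    labels = labels.clone()
    labels[replace] = pred[replace]
    return labels

def scale_consistency_loss(model, images, scale=0.5):
    """Penalize disagreement between predictions at two input scales."""
    logits = model(images)
    small = F.interpolate(images, scale_factor=scale,
                          mode="bilinear", align_corners=False)
    logits_small = F.interpolate(model(small), size=logits.shape[-2:],
                                 mode="bilinear", align_corners=False)
    return F.kl_div(F.log_softmax(logits_small, dim=1),
                    F.softmax(logits, dim=1), reduction="batchmean")
```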
Related papers
- Revisiting speech segmentation and lexicon learning with better features [29.268728666438495]
We revisit a self-supervised method that segments unlabelled speech into word-like segments.
We start from the two-stage duration-penalised dynamic programming method.
In the first acoustic unit discovery stage, we replace contrastive predictive coding features with HuBERT.
After word segmentation in the second stage, we get an acoustic word embedding for each segment by averaging HuBERT features.
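A minimal sketch of that pooling step, assuming word boundaries from the first stage are given as HuBERT frame indices; the use of torchaudio's pretrained HuBERT and of last-layer features are assumptions.
```python
# Sketch: one acoustic word embedding per hypothesised segment,
# obtained by mean-pooling HuBERT frame features over the segment.
import torch
import torchaudio

bundle = torchaudio.pipelines.HUBERT_BASE
model = bundle.get_model().eval()

def acoustic_word_embeddings(waveform, segments):
    """waveform: (1, num_samples) at 16 kHz;
    segments: list of (start, end) in HuBERT frames (one per ~20 ms)."""
    with torch.inference_mode():
        feats, _ = model.extract_features(waveform)
    frames = feats[-1].squeeze(0)  # (num_frames, dim); layer choice assumed
    return torch.stack([frames[s:e].mean(dim=0) for s, e in segments])
```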
arXiv Detail & Related papers (2024-01-31T15:06:34Z) - Learning to Segment from Noisy Annotations: A Spatial Correction
Approach [12.604673584405385]
Noisy labels can significantly affect the performance of deep neural networks (DNNs).
We propose a novel Markov model for noisy segmentation annotations that encodes both spatial correlation and bias.
Our approach outperforms current state-of-the-art methods on both synthetic and real-world noisy annotations.
arXiv Detail & Related papers (2023-07-21T00:27:40Z) - Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426]
In this paper, contextual hints are exploited via learning a context-aware classifier.
Our method is model-agnostic and can be easily applied to generic segmentation models.
With only negligible additional parameters and a 2% increase in inference time, decent performance gains are achieved on both small and large models.
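A generic sketch of one plausible rendition of such a context-aware classifier: per-image class centers pooled from a coarse prediction adjust a static classifier's weights. This is an assumption-laden illustration, not the paper's exact design.
```python
# Sketch: classifier weights conditioned on the input via soft class
# centers; adds only a small linear projection over a static classifier.
import torch
import torch.nn as nn

class ContextAwareClassifier(nn.Module):
    def __init__(self, dim, num_classes):
        super().__init__()
        self.static = nn.Conv2d(dim, num_classes, 1)  # ordinary classifier
        self.proj = nn.Linear(dim, dim)               # negligible extra params

    def forward(self, feats):                         # feats: (B, C, H, W)
        coarse = self.static(feats)                   # (B, K, H, W)
        attn = coarse.flatten(2).softmax(dim=-1)      # (B, K, HW)
        centers = attn @ feats.flatten(2).transpose(1, 2)        # (B, K, C)
        w = self.static.weight.flatten(1) + self.proj(centers)   # (B, K, C)
        return torch.einsum("bkc,bchw->bkhw", w, feats)
```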
arXiv Detail & Related papers (2023-03-21T07:00:35Z) - Learning Confident Classifiers in the Presence of Label Noise [5.829762367794509]
This paper proposes a probabilistic model for noisy observations that allows us to build confident classification and segmentation models.
Our experiments show that our algorithm outperforms state-of-the-art solutions for the considered classification and segmentation problems.
arXiv Detail & Related papers (2023-01-02T04:27:25Z) - Flip Learning: Erase to Segment [65.84901344260277]
Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which only needs the box annotation.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
arXiv Detail & Related papers (2021-08-02T09:56:10Z) - Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets a new state of the art in all these settings, demonstrating its efficacy and generalizability.
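A generic co-attention sketch between the feature maps of two related images, in the spirit of the description above; the bilinear affinity and all shapes are illustrative assumptions.
```python
# Sketch: an affinity matrix between two images' spatial features lets
# each image attend to semantically similar positions in the other.
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)  # bilinear affinity weight

    def forward(self, fa, fb):
        """fa, fb: (B, C, H, W) feature maps of two related images."""
        B, C, H, W = fa.shape
        a = fa.flatten(2).transpose(1, 2)         # (B, HW, C)
        b = fb.flatten(2).transpose(1, 2)         # (B, HW, C)
        affinity = self.W(a) @ b.transpose(1, 2)  # (B, HW, HW)
        a_ctx = affinity.softmax(dim=-1) @ b      # context for image a
        b_ctx = affinity.softmax(dim=1).transpose(1, 2) @ a

        def to_map(x):
            return x.transpose(1, 2).reshape(B, C, H, W)

        return to_map(a_ctx), to_map(b_ctx)
```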
arXiv Detail & Related papers (2020-07-03T21:53:46Z) - Weakly Supervised Temporal Action Localization with Segment-Level Labels [140.68096218667162]
Temporal action localization presents a trade-off between test performance and annotation-time cost.
We introduce a new segment-level supervision setting: segments are labeled when annotators observe an action happening within them.
We devise a partial segment loss, which can be regarded as a form of loss sampling, to learn integral action parts from labeled segments.
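A minimal sketch of such loss sampling, assuming frame-level logits and a boolean mask marking annotator-labeled segments (both names are hypothetical).
```python
# Sketch: the classification loss is computed only on frames inside
# labeled segments; unlabeled frames contribute nothing.
import torch
import torch.nn.functional as F

def partial_segment_loss(frame_logits, frame_labels, labeled_mask):
    """frame_logits: (T, num_classes); frame_labels: (T,) int;
    labeled_mask: (T,) bool, True inside labeled segments.
    Assumes at least one labeled frame per clip."""
    losses = F.cross_entropy(frame_logits, frame_labels, reduction="none")
    return losses[labeled_mask].mean()
```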
arXiv Detail & Related papers (2020-07-03T10:32:19Z) - Early-Learning Regularization Prevents Memorization of Noisy Labels [29.04549895470588]
We propose a novel framework to perform classification via deep learning in the presence of noisy annotations.
Deep neural networks have been observed to first fit the training data with clean labels during an "early learning" phase.
We design a regularization term that steers the model towards these targets, implicitly preventing memorization of the false labels.
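A minimal sketch of that regularizer: running-average ("temporal ensembling") targets built from the model's own predictions, plus a log(1 - <p, t>) term that keeps predictions near them, as in the ELR paper; beta and lam are illustrative values.
```python
# Sketch of early-learning regularization; targets are kept on CPU
# for simplicity, and beta/lam are illustrative hyperparameters.
import torch
import torch.nn.functional as F

class EarlyLearningRegularization:
    def __init__(self, num_samples, num_classes, beta=0.7, lam=3.0):
        self.targets = torch.zeros(num_samples, num_classes)
        self.beta, self.lam = beta, lam

    def loss(self, logits, labels, idx):
        """idx: indices of this batch's samples in the full dataset."""
        probs = F.softmax(logits, dim=1)
        # Temporal ensembling: momentum average of past predictions.
        t = self.targets[idx].to(probs.device)
        t = self.beta * t + (1 - self.beta) * probs.detach()
        self.targets[idx] = t.cpu()
        ce = F.cross_entropy(logits, labels)
        reg = torch.log(1.0 - (t * probs).sum(dim=1)).mean()
        return ce + self.lam * reg
```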
arXiv Detail & Related papers (2020-06-30T23:46:33Z) - DenoiSeg: Joint Denoising and Segmentation [75.91760529986958]
We propose DenoiSeg, a new method that can be trained end-to-end on only a few annotated ground truth segmentations.
We achieve this by extending Noise2Void, a self-supervised denoising scheme that can be trained on noisy images alone, to also predict dense 3-class segmentations.
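A minimal sketch of the resulting joint objective, assuming a single network with four output channels (one denoised image plus three segmentation classes); the argument names and masking details are hypothetical.
```python
# Sketch: blend a Noise2Void-style blind-spot denoising loss with a
# 3-class segmentation loss applied only where ground truth exists.
import torch
import torch.nn.functional as F

def denoiseg_loss(out, noisy_in, blind_spots, seg_target, have_gt, alpha=0.5):
    """out: (B, 4, H, W) network output; noisy_in: (B, 1, H, W);
    blind_spots: (B, H, W) bool, masked pixels as in Noise2Void;
    seg_target: (B, H, W) int; have_gt: (B,) bool, True for the few
    images that carry a ground-truth segmentation."""
    denoised, seg_logits = out[:, :1], out[:, 1:]
    # Denoising is scored only at blind-spot pixels, against the
    # original noisy values that were hidden from the network input.
    dn = ((denoised - noisy_in) ** 2)[blind_spots.unsqueeze(1)].mean()
    if have_gt.any():
        seg = F.cross_entropy(seg_logits[have_gt], seg_target[have_gt])
    else:
        seg = torch.zeros((), device=out.device)
    return alpha * dn + (1 - alpha) * seg
```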
arXiv Detail & Related papers (2020-05-06T17:42:54Z) - Self-Supervised Tuning for Few-Shot Segmentation [82.32143982269892]
Few-shot segmentation aims at assigning a category label to each image pixel with few annotated samples.
Existing meta-learning methods tend to fail to generate category-specific discriminative descriptors when the visual features extracted from support images are marginalized in the embedding space.
This paper presents an adaptive tuning framework in which the distribution of latent features across different episodes is dynamically adjusted based on a self-segmentation scheme.
arXiv Detail & Related papers (2020-04-12T03:53:53Z) - Discovering Latent Classes for Semi-Supervised Semantic Segmentation [18.5909667833129]
This paper studies the problem of semi-supervised semantic segmentation.
We learn latent classes consistent with semantic classes on labeled images.
We show that the proposed method achieves state-of-the-art results for semi-supervised semantic segmentation.
arXiv Detail & Related papers (2019-12-30T14:16:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.