Compensation Learning in Semantic Segmentation
- URL: http://arxiv.org/abs/2304.13428v1
- Date: Wed, 26 Apr 2023 10:26:11 GMT
- Title: Compensation Learning in Semantic Segmentation
- Authors: Timo Kaiser, Christoph Reinders, Bodo Rosenhahn
- Abstract summary: We propose Compensation Learning in Semantic Segmentation, a framework to identify and compensate for ambiguities as well as label noise.
We introduce a novel uncertainty branch for neural networks that restricts the compensation bias to relevant regions.
Our method is integrated into state-of-the-art segmentation frameworks, and several experiments demonstrate that the proposed compensation learns inter-class relations.
- Score: 22.105356244579745
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Label noise and ambiguities between similar classes are challenging problems
in developing new models and annotating new data for semantic segmentation. In
this paper, we propose Compensation Learning in Semantic Segmentation, a
framework to identify and compensate for ambiguities as well as label noise. More
specifically, we add a ground-truth-dependent, globally learned bias to the
classification logits and introduce a novel uncertainty branch for neural
networks that applies the compensation bias only to relevant regions. Our method
is integrated into state-of-the-art segmentation frameworks, and several
experiments demonstrate that the proposed compensation learns inter-class
relations that allow global identification of challenging ambiguities as well
as the exact localization of subsequent label noise. Additionally, it increases
robustness against label noise during training and allows target-oriented
manipulation during inference. We evaluate the proposed method on the widely
used datasets Cityscapes, KITTI-STEP, ADE20k, and COCO-stuff10k.
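
The abstract gives only the high-level mechanism, but it is concrete enough to sketch. Below is a minimal PyTorch-style sketch of how a ground-truth-dependent logit bias gated by an uncertainty branch could look; every name, shape, and the exact gating scheme are assumptions read from the abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompensatedSegmentationLoss(nn.Module):
    """Sketch of a compensation mechanism (all details assumed, see note above).

    A globally learned bias matrix, indexed by the ground-truth class of each
    pixel, is added to the classification logits. A per-pixel uncertainty map
    gates the bias so compensation acts only on relevant regions such as
    ambiguous boundaries or noisy labels.
    """

    def __init__(self, num_classes: int):
        super().__init__()
        # bias[g] is a learned logit offset over all classes for ground truth g.
        self.bias = nn.Parameter(torch.zeros(num_classes, num_classes))

    def forward(self, logits, uncertainty, target):
        # logits:      (B, C, H, W) raw scores from the segmentation head
        # uncertainty: (B, 1, H, W) output of the uncertainty branch, in [0, 1]
        # target:      (B, H, W)    ground-truth class indices
        comp = self.bias[target].permute(0, 3, 1, 2)   # (B, C, H, W)
        compensated = logits + uncertainty * comp      # gated, GT-dependent bias
        return F.cross_entropy(compensated, target)
```

In such a setup the bias would act only during training; at inference the raw logits are kept, and the learned matrix itself can be inspected as a map of inter-class ambiguities, which matches the behavior the abstract describes.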
Related papers
- Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
arXiv Detail & Related papers (2023-11-27T02:59:17Z)
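
To illustrate the negative-unlabeled (NU) view mentioned above: for each class k, instances complementary-labeled k are known negatives, and the whole dataset serves as the unlabeled set, which permits a classic unbiased NU risk per class. The sketch below uses a logistic loss and assumes known class priors; it illustrates the general idea, not the paper's estimator.

```python
import torch
import torch.nn.functional as F

def nu_risk_per_class(scores, comp_labels, priors):
    """Illustrative unbiased NU risk, one binary problem per class.

    scores:      (B, C) raw scores; class k is 'positive' for problem k
    comp_labels: (B,)   index of the class each instance does NOT belong to
    priors:      (C,)   assumed class priors pi_k = P(y = k)
    """
    loss_pos = F.softplus(-scores)   # logistic loss for label +1
    loss_neg = F.softplus(scores)    # logistic loss for label -1
    total = scores.new_zeros(())
    for k in range(scores.shape[1]):
        neg = comp_labels == k       # known negatives for class k
        if not neg.any():
            continue
        pi = priors[k]
        # Classic unbiased NU decomposition of the binary risk.
        risk = (loss_pos[:, k].mean()
                - (1 - pi) * loss_pos[neg, k].mean()
                + (1 - pi) * loss_neg[neg, k].mean())
        total = total + risk
    return total / scores.shape[1]
```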
- Semi-Supervised Semantic Segmentation With Region Relevance [28.92449538610617]
Semi-supervised semantic segmentation aims to learn from a small amount of labeled data and a large amount of unlabeled data.
The most common approach is to generate pseudo-labels for unlabeled images to augment the training data.
This paper proposes a Region Relevance Network (RRN) to alleviate the problem mentioned above.
arXiv Detail & Related papers (2023-04-23T04:51:27Z)
- Class Enhancement Losses with Pseudo Labels for Zero-shot Semantic Segmentation [40.09476732999614]
Mask proposal models have significantly improved the performance of zero-shot semantic segmentation.
The use of a 'background' embedding during training in these methods is problematic, as the resulting model tends to over-learn and assign all unseen classes to the background class instead of their correct labels.
This paper proposes novel class enhancement losses to bypass the use of the background embedding during training, and simultaneously exploits the semantic relationship between text embeddings and mask proposals by ranking the similarity scores.
arXiv Detail & Related papers (2023-01-18T06:55:02Z)
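
As a rough illustration of the scoring step above, one can rate mask proposals against class text embeddings directly, so no background embedding is learned; the ranking losses themselves are not reproduced here, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def proposal_class_scores(mask_embs, text_embs, temp=0.07):
    """Cosine similarity between mask-proposal and class text embeddings.

    mask_embs: (num_masks, D)   embeddings pooled from mask proposals
    text_embs: (num_classes, D) class name embeddings from a text encoder
    Returns (num_masks, num_classes) logits; a ranking or cross-entropy
    loss over these replaces a learned background embedding.
    """
    m = F.normalize(mask_embs, dim=1)
    t = F.normalize(text_embs, dim=1)
    return m @ t.t() / temp
```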
- Learning Confident Classifiers in the Presence of Label Noise [5.829762367794509]
This paper proposes a probabilistic model for noisy observations that allows us to build confident classification and segmentation models.
Our experiments show that our algorithm outperforms state-of-the-art solutions for the considered classification and segmentation problems.
arXiv Detail & Related papers (2023-01-02T04:27:25Z)
- Multi-class Label Noise Learning via Loss Decomposition and Centroid Estimation [25.098485298561155]
We propose a novel multi-class robust learning method, Loss Decomposition and Centroid Estimation (LDCE).
Specifically, we decompose the commonly adopted loss function into a label-dependent part and a label-independent part.
By defining a new form of data centroid, we transform the recovery problem of a label-dependent part to a centroid estimation problem.
arXiv Detail & Related papers (2022-03-21T10:28:50Z)
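
To make the decomposition concrete: with a linear scorer and the binary unhinged loss (a stand-in chosen here for simplicity; the paper targets multi-class losses), the label-dependent part of the empirical risk reduces to a single data centroid, so noise-robust learning becomes centroid estimation. A hedged sketch:

```python
import torch

def unhinged_risk_via_centroid(w, X, y):
    """Binary illustration: mean_i(1 - y_i * <w, x_i>) = 1 - <w, mean(y * x)>.

    w: (D,) linear scorer, X: (N, D) features, y: (N,) labels in {-1, +1}.
    The constant 1 is label-independent; the only label-dependent quantity
    is the centroid, so robustness hinges on estimating it well.
    """
    centroid = (y.unsqueeze(1) * X).mean(dim=0)   # label-dependent statistic
    return 1.0 - w @ centroid
```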
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) label noise.
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
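
A minimal sketch of a neighbor-consistency regularizer in the spirit of the above: each example's prediction is pulled toward a similarity-weighted average of its nearest neighbors' predictions in feature space. The k-NN selection, temperature, and loss form are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(features, logits, k=5, temp=0.1):
    """Pull each prediction toward its feature-space neighbors' predictions.

    features: (B, D) penultimate-layer features, logits: (B, C).
    """
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()                    # (B, B) cosine similarities
    sim.fill_diagonal_(float("-inf"))          # exclude self-matches
    vals, idx = sim.topk(k, dim=1)             # k nearest neighbors
    weights = F.softmax(vals / temp, dim=1)    # (B, k) similarity weights
    probs = F.softmax(logits, dim=1)
    neighbor_probs = (weights.unsqueeze(-1) * probs[idx]).sum(dim=1)  # (B, C)
    # Cross-entropy to the (detached) neighbor distribution.
    log_p = F.log_softmax(logits, dim=1)
    return -(neighbor_probs.detach() * log_p).sum(dim=1).mean()
```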
- Adaptive Affinity Loss and Erroneous Pseudo-Label Refinement for Weakly Supervised Semantic Segmentation [48.294903659573585]
In this paper, we propose to embed affinity learning of multi-stage approaches in a single-stage model.
A deep neural network is used to deliver comprehensive semantic information in the training phase.
Experiments are conducted on the PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2021-08-03T07:48:33Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements in robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
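
One common way to explicitly relate noisy labels to their instances is an instance-dependent transition matrix; the sketch below shows only that general idea as an illustration and is not taken from the paper.

```python
import torch

def noisy_label_probs(clean_probs, transition):
    """Marginalize the clean label through an instance-specific T(x).

    clean_probs: (B, C)    classifier posterior p(y | x)
    transition:  (B, C, C) T(x)[i, j] = p(noisy = j | clean = i, x)
    Returns p(noisy | x) of shape (B, C), usable in a likelihood loss.
    """
    return torch.bmm(clean_probs.unsqueeze(1), transition).squeeze(1)
```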
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data to improve the accuracy of instance segmentation via knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
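
For readers unfamiliar with the mean-teacher component of such frameworks, here is a minimal sketch of the standard EMA teacher update; the mask guidance and perturbation-sensitive sample mining from the paper are not reproduced, and the momentum value is illustrative.

```python
import copy
import torch

@torch.no_grad()
def update_teacher(student, teacher, momentum=0.999):
    """EMA update: teacher weights track a slow average of the student's."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1 - momentum)

# teacher = copy.deepcopy(student)   # teacher starts as a copy of the student;
# its pseudo-labels on unlabeled images then supervise the student.
```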
- Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel framework that is flexible in the choice of model and optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
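
A minimal sketch of the progressive-identification idea for PLL: candidate labels are re-weighted by the model's own renormalized confidence, so weight gradually concentrates on the likely true label. Function and argument names are illustrative, not the paper's API.

```python
import torch
import torch.nn.functional as F

def progressive_pll_loss(logits, candidate_mask):
    """Re-weight candidate labels by the model's own confidence.

    logits:         (B, C)
    candidate_mask: (B, C) boolean, True for each candidate label
    Assumes every row has at least one candidate. As training progresses,
    the weights concentrate on the (likely) true label.
    """
    probs = F.softmax(logits, dim=1) * candidate_mask
    weights = (probs / probs.sum(dim=1, keepdim=True)).detach()
    return -(weights * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```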
This list is automatically generated from the titles and abstracts of the papers in this site.