That Label's Got Style: Handling Label Style Bias for Uncertain Image
Segmentation
- URL: http://arxiv.org/abs/2303.15850v1
- Date: Tue, 28 Mar 2023 09:43:16 GMT
- Title: That Label's Got Style: Handling Label Style Bias for Uncertain Image
Segmentation
- Authors: Kilian Zepf, Eike Petersen, Jes Frellsen, Aasa Feragen
- Abstract summary: We show that applying state-of-the-art segmentation uncertainty models to datasets with differing label styles can lead to model bias caused by those styles.
We present an updated modelling objective that conditions on labeling style for aleatoric uncertainty estimation, and modify two state-of-the-art architectures for segmentation uncertainty accordingly.
- Score: 8.363593384698138
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmentation uncertainty models predict a distribution over plausible
segmentations for a given input, which they learn from the annotator variation
in the training set. However, in practice these annotations can differ
systematically in the way they are generated, for example through the use of
different labeling tools. This results in datasets that contain both data
variability and differing label styles. In this paper, we demonstrate that
applying state-of-the-art segmentation uncertainty models on such datasets can
lead to model bias caused by the different label styles. We present an updated
modelling objective that conditions on labeling style for aleatoric uncertainty
estimation, and modify two state-of-the-art architectures for segmentation
uncertainty accordingly. With extensive experiments, we show that this method
reduces label style bias while improving segmentation performance, thereby
increasing the applicability of segmentation uncertainty models in the wild. We curate two
datasets, with annotations in different label styles, which we will make
publicly available along with our code upon publication.
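As a rough illustration (not the authors' released code), conditioning on label style can be as simple as feeding the network a one-hot style indicator alongside the image, so that the per-pixel likelihood, and hence the aleatoric uncertainty, is learned per style; the toy architecture and all names below are placeholder assumptions.

```python
# Minimal sketch (not the paper's code): condition a segmentation network on
# label style by appending a one-hot style indicator as extra input channels.
# Sizes and the toy architecture are illustrative assumptions.
import torch
import torch.nn as nn


class StyleConditionedSegNet(nn.Module):
    def __init__(self, in_channels=1, num_styles=2, num_classes=2):
        super().__init__()
        self.num_styles = num_styles
        self.body = nn.Sequential(
            nn.Conv2d(in_channels + num_styles, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, image, style_idx):
        # image: (B, C, H, W); style_idx: (B,) integer id of the label style
        b, _, h, w = image.shape
        style = torch.zeros(b, self.num_styles, h, w, device=image.device)
        style[torch.arange(b), style_idx] = 1.0              # one-hot style planes
        return self.body(torch.cat([image, style], dim=1))   # per-pixel logits


# During training, each annotation is paired with the style it was created in,
# so the per-pixel likelihood is fitted per labeling style.
net = StyleConditionedSegNet()
x = torch.randn(4, 1, 64, 64)
styles = torch.tensor([0, 1, 0, 1])
logits = net(x, styles)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (4, 64, 64)))
```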
Related papers
- Bridging the Gap between Model Explanations in Partially Annotated
Multi-label Classification [85.76130799062379]
We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z) - Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly
Supervised Video Anomaly Detection [149.23913018423022]
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels.
Two-stage self-training methods have achieved significant improvements by self-generating pseudo labels.
We propose an enhancement framework by exploiting completeness and uncertainty properties for effective self-training.
arXiv Detail & Related papers (2022-12-08T05:53:53Z) - Dist-PU: Positive-Unlabeled Learning from a Label Distribution
Perspective [89.5370481649529]
We propose a label distribution perspective for positive-unlabeled (PU) learning.
Motivated by this perspective, we pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
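One plausible reading of such a consistency term (a hedged guess at the general idea, not Dist-PU's actual objective) is to push the mean predicted positive probability on unlabeled data towards an assumed class prior:

```python
# Hedged sketch of a label-distribution consistency term for PU learning
# (illustrative only; `pi` is an assumed class prior, not a value from the paper).
import torch

def label_distribution_loss(unlabeled_logits, pi=0.3):
    p_pos = torch.sigmoid(unlabeled_logits).mean()   # predicted positive fraction
    return (p_pos - pi) ** 2                         # match the assumed prior

loss = label_distribution_loss(torch.randn(128), pi=0.3)
```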
arXiv Detail & Related papers (2022-12-06T07:38:29Z) - Rethinking Generalization: The Impact of Annotation Style on Medical
Image Segmentation [9.056814157662965]
We show that modeling annotation biases, rather than ignoring them, poses a promising way of accounting for differences in annotation style across datasets.
Next, we present an image-conditioning approach to model annotation styles that correlate with specific image features, potentially enabling detection biases to be more easily identified.
arXiv Detail & Related papers (2022-10-31T15:28:49Z) - Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings [81.09026586111811]
We propose an approach to semantic segmentation that achieves state-of-the-art supervised performance when applied in a zero-shot setting.
This is achieved by replacing each class label with a vector-valued embedding of a short paragraph that describes the class.
The resulting merged semantic segmentation dataset of over 2 million images enables training a model that matches state-of-the-art supervised methods on 7 benchmark datasets.
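A minimal sketch of the label-replacement idea, with the visual encoder and embedding dimensions left as placeholder assumptions: score each pixel feature against the sentence embedding of every class description and pick the closest class.

```python
# Minimal sketch of replacing class labels with sentence embeddings
# (illustrative only; encoders and dimensions are placeholder assumptions).
import torch
import torch.nn.functional as F

def zero_shot_segment(pixel_features, class_embeddings):
    """pixel_features: (B, D, H, W) features from any visual encoder.
    class_embeddings: (K, D) sentence embeddings of K class descriptions."""
    feats = F.normalize(pixel_features, dim=1)
    embs = F.normalize(class_embeddings, dim=1)
    # Cosine similarity between every pixel feature and every class description.
    logits = torch.einsum("bdhw,kd->bkhw", feats, embs)
    return logits.argmax(dim=1)                      # (B, H, W) predicted class map

pred = zero_shot_segment(torch.randn(2, 512, 32, 32), torch.randn(5, 512))
```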
arXiv Detail & Related papers (2022-02-04T07:19:09Z) - Multi-label Classification with Partial Annotations using Class-aware
Selective Loss [14.3159150577502]
Large-scale multi-label classification datasets are commonly partially annotated.
We analyze the partial labeling problem, then propose a solution based on two key ideas.
With our novel approach, we achieve state-of-the-art results on OpenImages dataset.
arXiv Detail & Related papers (2021-10-21T08:10:55Z) - Learning with Noisy Labels by Targeted Relabeling [52.0329205268734]
Crowdsourcing platforms are often used to collect datasets for training deep neural networks.
We propose an approach which reserves a fraction of annotations to explicitly relabel highly probable labeling errors.
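One simple way to flag "highly probable labeling errors" (an assumption for illustration, not necessarily the paper's criterion) is to rank items by how unlikely their current label is under the model and spend the reserved annotation budget on the top fraction:

```python
# Hedged sketch of targeted relabeling: pick the items whose current label the
# model finds least probable (the selection rule is an illustrative assumption).
import torch

def select_for_relabeling(probs, labels, budget_fraction=0.1):
    """probs: (N, K) predicted class probabilities; labels: (N,) current labels."""
    label_prob = probs[torch.arange(len(labels)), labels]   # p(current label | x)
    k = max(1, int(budget_fraction * len(labels)))
    return torch.topk(-label_prob, k).indices               # most suspicious items

suspects = select_for_relabeling(torch.rand(100, 5).softmax(dim=1),
                                 torch.randint(0, 5, (100,)))
```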
arXiv Detail & Related papers (2021-10-15T20:37:29Z) - A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
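A minimal sketch of that intersection step, assuming two augmented views whose hard pseudo-labels have already been mapped back to a common pixel grid; the ignore index is an illustrative convention.

```python
# Sketch of the pseudo-label intersection described above: keep only the pixels
# on which the two augmented views agree, mark the rest as "ignore".
import torch

def intersect_pseudo_labels(labels_view1, labels_view2, ignore_index=255):
    """labels_view*: (B, H, W) hard pseudo-labels from two augmentations."""
    agree = labels_view1 == labels_view2
    fused = labels_view1.clone()
    fused[~agree] = ignore_index        # drop pixels the views disagree on
    return fused

fused = intersect_pseudo_labels(torch.randint(0, 4, (1, 8, 8)),
                                torch.randint(0, 4, (1, 8, 8)))
```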
arXiv Detail & Related papers (2021-04-21T14:34:33Z) - Comparing the Value of Labeled and Unlabeled Data in Method-of-Moments
Latent Variable Estimation [17.212805760360954]
We use a framework centered on model misspecification in method-of-moments latent variable estimation.
We then introduce a correction that provably removes this bias in certain cases.
We observe theoretically and with synthetic experiments that for well-specified models, labeled points are worth a constant factor more than unlabeled points.
arXiv Detail & Related papers (2021-03-03T23:52:38Z) - Label Confusion Learning to Enhance Text Classification Models [3.0251266104313643]
Label Confusion Model (LCM) learns label confusion to capture semantic overlap among labels.
LCM can generate a better label distribution to replace the original one-hot label vector.
Experiments on five text classification benchmark datasets demonstrate the effectiveness of LCM for several widely used deep learning classification models.
arXiv Detail & Related papers (2020-12-09T11:34:35Z)
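As a rough sketch of the label-distribution idea (only loosely imitating LCM, whose distribution is produced by a learned label encoder), one can soften the one-hot target with label-similarity scores and train against it with a KL divergence:

```python
# Rough sketch: replace one-hot targets with a softened label distribution and
# train with KL divergence. The similarity-based construction is an
# illustrative assumption, not LCM's exact mechanism.
import torch
import torch.nn.functional as F

def soft_label_targets(one_hot, label_similarity, alpha=0.9):
    """one_hot: (B, K); label_similarity: (B, K) scores of how much each
    label overlaps with the true one (e.g. from a label encoder)."""
    sim = F.softmax(label_similarity, dim=1)
    return alpha * one_hot + (1 - alpha) * sim       # simulated label distribution

def lcm_style_loss(logits, one_hot, label_similarity):
    target = soft_label_targets(one_hot, label_similarity)
    return F.kl_div(F.log_softmax(logits, dim=1), target, reduction="batchmean")

loss = lcm_style_loss(torch.randn(8, 4),
                      F.one_hot(torch.randint(0, 4, (8,)), 4).float(),
                      torch.randn(8, 4))
```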
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.