On the pitfalls of entropy-based uncertainty for multi-class
semi-supervised segmentation
- URL: http://arxiv.org/abs/2203.03587v1
- Date: Mon, 7 Mar 2022 18:35:17 GMT
- Title: On the pitfalls of entropy-based uncertainty for multi-class
semi-supervised segmentation
- Authors: Martin Van Waerebeke, Gregory Lodygensky and Jose Dolz
- Abstract summary: Semi-supervised learning has emerged as an appealing strategy to train deep models with limited supervision.
We demonstrate in this work that this strategy leads to suboptimal results in a multi-class context.
We propose an alternative solution to compute the uncertainty in a multi-class setting, based on divergence distances, which accounts for inter-class overlap.
- Score: 8.464487190628395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning has emerged as an appealing strategy to train deep
models with limited supervision. Most prior literature under this learning
paradigm resorts to dual-based architectures, typically composed of a
teacher-student duple. To drive the learning of the student, many of these
models leverage the aleatoric uncertainty derived from the entropy of the
predictions. While this has been shown to work well in a binary scenario, we
demonstrate in this work that this strategy leads to suboptimal results in a
multi-class context, a more realistic and challenging setting. We argue,
indeed, that these approaches underperform due to the erroneous uncertainty
approximations in the presence of inter-class overlap. Furthermore, we propose
an alternative solution to compute the uncertainty in a multi-class setting,
based on divergence distances, which accounts for inter-class overlap. We
evaluate the proposed solution on a challenging multi-class segmentation
dataset and with two well-known uncertainty-based segmentation methods. The
reported results demonstrate that, by simply replacing the mechanism used to
compute the uncertainty, our proposed solution brings substantial improvements
in the tested setups.
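The contrast between the two uncertainty mechanisms can be illustrated with a minimal sketch. The divergence used below (KL divergence from the one-hot vector of the dominant class to the prediction) is an illustrative assumption, not the authors' exact formulation:

```python
import numpy as np

def entropy_uncertainty(p, eps=1e-12):
    """Entropy-based uncertainty: Shannon entropy of the per-pixel
    class probabilities p, shape (..., C)."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def divergence_uncertainty(p, eps=1e-12):
    """Illustrative divergence-based score: KL divergence from the
    one-hot vector of the dominant class to the prediction. It collapses
    to -log(max probability), so a pixel split between two overlapping
    classes is scored by its peak confidence alone, not by how the
    remaining mass is spread across the other classes."""
    one_hot = np.eye(p.shape[-1])[np.argmax(p, axis=-1)]
    return np.sum(one_hot * np.log((one_hot + eps) / (p + eps)), axis=-1)

# Pixel whose mass is split between two overlapping foreground classes:
p_overlap = np.array([0.45, 0.45, 0.05, 0.05])
# Pixel that is genuinely ambiguous over all four classes:
p_uniform = np.array([0.25, 0.25, 0.25, 0.25])

# Entropy rates the overlap pixel almost as uncertain (~1.02 nats) as the
# fully ambiguous one (~1.39 nats), the kind of conflation the paper
# argues against in the multi-class setting.
```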
Related papers
- A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation [0.0]
Advancements in image segmentation play an integral role within the greater scope of Deep Learning-based computer vision.
Uncertainty quantification has been extensively studied within this context, enabling expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision making.
This work provides a comprehensive overview of probabilistic segmentation by discussing fundamental concepts in uncertainty that govern advancements in the field and the application to various tasks.
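The epistemic/aleatoric split mentioned above is commonly estimated from stochastic forward passes (e.g. Monte Carlo dropout). The sketch below uses random Dirichlet samples as stand-ins for those passes; this decomposition is a standard one, not taken from the review itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p, eps=1e-12):
    """Shannon entropy along the last (class) axis."""
    return -np.sum(p * np.log(p + eps), axis=-1)

# Stand-in for T stochastic forward passes over one pixel with C classes;
# in a real model these would come from repeated dropout-enabled passes.
T, C = 20, 4
probs = rng.dirichlet(np.ones(C), size=T)  # shape (T, C)

total = entropy(probs.mean(axis=0))   # predictive (total) uncertainty
aleatoric = entropy(probs).mean()     # expected entropy: data ambiguity
epistemic = total - aleatoric         # mutual information: model ignorance
# By concavity of entropy, epistemic >= 0.
```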
arXiv Detail & Related papers (2024-11-25T13:26:09Z)
- Inter- and intra-uncertainty based feature aggregation model for semi-supervised histopathology image segmentation [21.973620376753594]
Hierarchical prediction uncertainty within the student model (intra-uncertainty) and image prediction uncertainty (inter-uncertainty) have not been fully utilized by existing methods.
We propose a novel inter- and intra-uncertainty regularization method to measure and constrain both inter- and intra-inconsistencies in the teacher-student architecture.
We also propose a new two-stage network with pseudo-mask guided feature aggregation (PG-FANet) as the segmentation model.
arXiv Detail & Related papers (2024-03-19T14:32:21Z)
- On the Complexity of Adversarial Decision Making [101.14158787665252]
We show that the Decision-Estimation Coefficient is necessary and sufficient to obtain low regret for adversarial decision making.
We provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures.
arXiv Detail & Related papers (2022-06-27T06:20:37Z)
- Risk Consistent Multi-Class Learning from Label Proportions [64.0125322353281]
This study addresses a multiclass learning from label proportions (MCLLP) setting in which training instances are provided in bags.
Most existing MCLLP methods impose bag-wise constraints on the prediction of instances or assign them pseudo-labels.
A risk-consistent method is proposed for instance classification using the empirical risk minimization framework.
arXiv Detail & Related papers (2022-03-24T03:49:04Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
- An Effective Baseline for Robustness to Distributional Shift [5.627346969563955]
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems.
We present a simple, but highly effective approach to deal with out-of-distribution detection that uses the principle of abstention.
arXiv Detail & Related papers (2021-05-15T00:46:11Z)
- Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment Restriction [39.51144507601913]
We focus on the proximal causal learning setting, but our methods can be used to solve a wider class of inverse problems characterised by a Fredholm integral equation.
We provide consistency guarantees for each algorithm, and we demonstrate these approaches achieve competitive results on synthetic data and data simulating a real-world task.
arXiv Detail & Related papers (2021-05-10T17:52:48Z)
- Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z)
- Contradictory Structure Learning for Semi-supervised Domain Adaptation [67.89665267469053]
Current adversarial adaptation methods attempt to align the cross-domain features.
Two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.
We propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures.
arXiv Detail & Related papers (2020-02-06T22:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.