Marginal Thresholding in Noisy Image Segmentation
- URL: http://arxiv.org/abs/2304.04116v3
- Date: Sat, 8 Jul 2023 12:38:15 GMT
- Title: Marginal Thresholding in Noisy Image Segmentation
- Authors: Marcus Nordström, Henrik Hult, Atsuto Maki
- Abstract summary: It is shown that optimal solutions to the loss functions soft-Dice and cross-entropy diverge as the level of noise increases.
This raises the question of whether the decrease in performance seen when using cross-entropy as compared to soft-Dice is caused by using the wrong threshold.
- Score: 3.609538870261841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents a study on label noise in medical image segmentation by
considering a noise model based on Gaussian field deformations. Such noise is
of interest because it yields realistic looking segmentations and because it is
unbiased in the sense that the expected deformation is the identity mapping.
Efficient methods for sampling and closed-form solutions for the marginal
probabilities are provided. Moreover, theoretically optimal solutions to the
loss functions cross-entropy and soft-Dice are studied and it is shown how they
diverge as the level of noise increases. Based on recent work on loss function
characterization, it is shown that optimal solutions to soft-Dice can be
recovered by thresholding solutions to cross-entropy with a particular, a priori
unknown threshold that can be computed efficiently. This raises the question of
whether the decrease in performance seen when using cross-entropy as compared
to soft-Dice is caused by using the wrong threshold. The hypothesis is
validated in 5-fold studies on three organ segmentation problems from the
TotalSegmentor data set, using 4 different strengths of noise. The results show
that changing the threshold takes the performance of cross-entropy from
systematically worse than soft-Dice to similar to or better than soft-Dice. A
minimal sketch of this threshold-retuning idea follows.
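To make the central idea concrete, here is a minimal sketch of threshold retuning: instead of hardening cross-entropy probabilities at the default 0.5, a threshold is chosen to maximize the Dice metric. Note that the paper derives the threshold in closed form from the marginal probabilities; this sketch substitutes a simple grid sweep on held-out data, and all names (`best_threshold`, `probs`, `labels`) are illustrative, not from the paper's code.

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Hard Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

def best_threshold(probs, labels, grid=np.linspace(0.05, 0.95, 19)):
    """Pick the hardening threshold that maximizes mean Dice on held-out
    data, instead of defaulting to 0.5. `probs` are per-pixel foreground
    probabilities from a cross-entropy-trained model; `labels` are the
    corresponding binary masks (lists of arrays)."""
    scores = [np.mean([dice(p > t, y > 0.5) for p, y in zip(probs, labels)])
              for t in grid]
    return grid[int(np.argmax(scores))]

# Hypothetical usage:
#   t_star = best_threshold(val_probs, val_labels)
#   test_masks = [p > t_star for p in test_probs]
```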
Related papers
- Robust Estimation of Causal Heteroscedastic Noise Models [7.568978862189266]
Student's $t$-distribution is known for its robustness in accounting for sampling variability with smaller sample sizes and extreme values without significantly altering the overall distribution shape.
Our empirical evaluations demonstrate that our estimators are more robust and achieve better overall performance across synthetic and real benchmarks.
arXiv Detail & Related papers (2023-12-15T02:26:35Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed, formulating the objective as the logistic loss between the real data and the artificial noise.
In this paper, we study a direct approach to optimizing the negative log-likelihood of unnormalized models. (A minimal sketch of the plain NCE objective follows this entry.)
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
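As a gloss on the entry above, the following sketches the classical NCE objective: a logistic loss that discriminates data from artificial noise via the log-ratio of an unnormalized model density to a known noise density. This illustrates the baseline objective only, not the compositional optimization method the paper develops; `log_model` and `log_noise` are hypothetical callables.

```python
import numpy as np

def log_sigmoid(z):
    # numerically stable log(sigmoid(z)) = -log(1 + exp(-z))
    return -np.logaddexp(0.0, -z)

def nce_loss(log_model, log_noise, x_data, x_noise, nu=1.0):
    """Logistic loss separating data from noise. `log_model` is the
    unnormalized model log-density, `log_noise` the known noise
    log-density, and `nu` the noise-to-data sample ratio. Under the NCE
    mixture, P(real | x) = sigmoid(g) with
    g(x) = log p_model(x) - log p_noise(x) - log(nu)."""
    g_data = log_model(x_data) - log_noise(x_data) - np.log(nu)
    g_noise = log_model(x_noise) - log_noise(x_noise) - np.log(nu)
    # log(1 - sigmoid(g)) == log_sigmoid(-g)
    return -(np.mean(log_sigmoid(g_data)) + nu * np.mean(log_sigmoid(-g_noise)))
```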
- Noisy Image Segmentation With Soft-Dice [3.2116198597240846]
It is shown that a sequence of soft segmentations converging to optimal soft-Dice also converges to optimal Dice when converted to hard segmentations using thresholding.
This is an important result because soft-Dice is often used as a proxy for maximizing the Dice metric. (A minimal soft-Dice sketch follows this entry.)
arXiv Detail & Related papers (2023-04-03T08:46:56Z)
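The result in the entry above relates the soft-Dice loss used at training time to the hard Dice metric obtained after thresholding. Below is a minimal sketch of the two quantities, assuming binary masks and per-pixel probabilities; the names are illustrative.

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-8):
    """Soft-Dice loss on a per-pixel probability map (no thresholding)."""
    inter = (probs * target).sum()
    return 1.0 - 2.0 * inter / (probs.sum() + target.sum() + eps)

def hard_dice(probs, target, threshold=0.5, eps=1e-8):
    """Dice metric after hardening the soft map with a threshold."""
    pred = (probs > threshold).astype(float)
    inter = (pred * target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```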
- Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z)
- Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate the binary information of "existence of noise" as a treatment into image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z)
- Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z)
- Analyzing and Improving the Optimization Landscape of Noise-Contrastive Estimation [50.85788484752612]
Noise-contrastive estimation (NCE) is a statistically consistent method for learning unnormalized probabilistic models.
It has been empirically observed that the choice of the noise distribution is crucial for NCE's performance.
In this work, we formally pinpoint reasons for NCE's poor performance when an inappropriate noise distribution is used.
arXiv Detail & Related papers (2021-10-21T16:57:45Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation. (A toy AIS sketch follows this entry.)
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
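To illustrate the entry above, here is a toy AIS estimator of a 1-D marginal likelihood that uses unadjusted Langevin transitions instead of Metropolis-Hastings corrections, in the spirit of the differentiable variant. The target, schedule, and step size are arbitrary choices for the sketch, and dropping the MH step means the transitions leave the bridge densities only approximately invariant.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p0(x):                 # normalized N(0, 1) prior
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

def log_f1(x):                 # unnormalized target; true log Z = 0.5*log(2*pi)
    return -0.5 * (x - 2.0) ** 2

def log_fb(x, b):              # geometric path between prior and target
    return (1.0 - b) * log_p0(x) + b * log_f1(x)

def grad_log_fb(x, b):         # gradient of log_fb, drives the Langevin moves
    return (1.0 - b) * (-x) + b * (2.0 - x)

def ais(n_chains=2000, n_steps=200, step=0.05):
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.standard_normal(n_chains)      # exact prior samples at beta = 0
    log_w = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += log_fb(x, b) - log_fb(x, b_prev)
        # Unadjusted Langevin move, i.e. no Metropolis-Hastings correction.
        noise = rng.standard_normal(n_chains)
        x = x + 0.5 * step * grad_log_fb(x, b) + np.sqrt(step) * noise
    # log of the mean importance weight estimates log Z of the target
    return np.logaddexp.reduce(log_w) - np.log(n_chains)

print(ais())   # roughly 0.5*np.log(2*np.pi) ~ 0.919
```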
- On the Role of Entropy-based Loss for Learning Causal Structures with Continuous Optimization [27.613220411996025]
A method with a non-combinatorial directed acyclicity constraint, called NOTEARS, formulates the causal structure learning problem as a continuous optimization problem using a least-squares loss.
We show that the violation of the Gaussian noise assumption will hinder the causal direction identification.
We propose a more general entropy-based loss that is theoretically consistent with the likelihood score under any noise distribution. (A sketch of the NOTEARS objective follows this entry.)
arXiv Detail & Related papers (2021-06-05T08:29:51Z)
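For context on the entry above, the NOTEARS baseline it builds on scores a weighted adjacency matrix with a least-squares reconstruction term and enforces acyclicity through a smooth matrix-exponential penalty. A minimal sketch of those two ingredients follows; the cited paper's contribution, replacing least-squares with an entropy-based loss, is not shown.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential for the acyclicity term

def notears_score(W, X, lam=0.1):
    """Least-squares score for a linear SEM X ~ X @ W plus L1 sparsity,
    the objective NOTEARS minimizes subject to acyclicity."""
    n = X.shape[0]
    resid = X - X @ W
    return 0.5 / n * (resid ** 2).sum() + lam * np.abs(W).sum()

def acyclicity(W):
    """Smooth acyclicity measure h(W) = tr(exp(W * W)) - d, where * is
    elementwise; h(W) == 0 exactly when W encodes a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d
```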
- Denoising Score Matching with Random Fourier Features [11.60130641443281]
We derive an analytical expression for Denoising Score Matching using the Kernel Exponential Family as the model distribution.
The obtained expression depends explicitly on the noise variance, so the validation loss can be used directly to tune the noise level. (A minimal DSM sketch follows this entry.)
arXiv Detail & Related papers (2021-01-13T18:02:39Z)
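To make the entry above concrete, here is a minimal sketch of the generic denoising score matching objective, which regresses a model score at Gaussian-perturbed points onto the score of the corruption kernel; the explicit `sigma` dependence is what lets a validation loss drive noise-level tuning. The paper's kernel-exponential-family and random-Fourier-feature machinery is not reproduced, and `score_fn` is a hypothetical callable.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsm_loss(score_fn, x, sigma):
    """Denoising score matching: perturb data with Gaussian noise and
    regress the model score at the noisy points onto the score of the
    corruption kernel, -(x_noisy - x) / sigma**2. Note the explicit
    dependence on sigma."""
    noise = sigma * rng.standard_normal(x.shape)
    diff = score_fn(x + noise) + noise / sigma ** 2
    return 0.5 * np.mean(np.sum(diff ** 2, axis=-1))

# Rough sanity check: for N(0, 1) data and small sigma, the true data
# score s(x) = -x should give a loss near its minimum.
x = rng.standard_normal((10000, 1))
print(dsm_loss(lambda z: -z, x, sigma=0.1))
```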
- Semantic Neighborhood-Aware Deep Facial Expression Recognition [14.219890078312536]
A novel method is proposed to formulate semantic perturbation and select unreliable samples during training.
Experiments show the effectiveness of the proposed method and state-of-the-art results are reported.
arXiv Detail & Related papers (2020-04-27T11:48:17Z)