On Image Segmentation With Noisy Labels: Characterization and Volume
Properties of the Optimal Solutions to Accuracy and Dice
- URL: http://arxiv.org/abs/2206.06484v4
- Date: Fri, 31 Mar 2023 13:20:56 GMT
- Title: On Image Segmentation With Noisy Labels: Characterization and Volume
Properties of the Optimal Solutions to Accuracy and Dice
- Authors: Marcus Nordström, Henrik Hult, Jonas Söderberg, Fredrik Löfman
- Abstract summary: We study two of the most popular performance metrics in medical image segmentation, Accuracy and Dice, when the target labels are noisy.
For both metrics, several statements related to characterization and volume properties of the set of optimal segmentations are proved.
Our main insights are: (i) the volume of the solutions to both metrics may deviate significantly from the expected volume of the target, (ii) the volume of a solution to Accuracy is always less than or equal to the volume of a solution to Dice and (iii) the optimal solutions to both of these metrics coincide when the set of feasible segmentations is constrained to the set of segmentations with the volume equal to the expected volume of the target.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study two of the most popular performance metrics in medical image
segmentation, Accuracy and Dice, when the target labels are noisy. For both
metrics, several statements related to characterization and volume properties
of the set of optimal segmentations are proved, and associated experiments are
provided. Our main insights are: (i) the volume of the solutions to both
metrics may deviate significantly from the expected volume of the target, (ii)
the volume of a solution to Accuracy is always less than or equal to the volume
of a solution to Dice and (iii) the optimal solutions to both of these metrics
coincide when the set of feasible segmentations is constrained to the set of
segmentations with the volume equal to the expected volume of the target.
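The claims above can be checked directly on toy problems. The following is a minimal brute-force sketch (illustrative only, not code from the paper) that treats each pixel's noisy label as an independent Bernoulli variable with made-up probabilities, enumerates all segmentations of a handful of pixels, and compares the volumes of the maximizers of expected Accuracy and expected Dice with the expected target volume.

```python
# Brute-force toy example (not from the paper): compare the volumes of the
# segmentations that maximize expected Accuracy vs. expected Dice when each
# pixel's target label is an independent Bernoulli draw.
from itertools import product

p = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]  # hypothetical per-pixel P(label = 1)
n = len(p)

def prob(y):
    """Probability of a particular noisy target configuration y."""
    out = 1.0
    for yi, pi in zip(y, p):
        out *= pi if yi else 1.0 - pi
    return out

def accuracy(s, y):
    return sum(si == yi for si, yi in zip(s, y)) / n

def dice(s, y):
    inter = sum(si and yi for si, yi in zip(s, y))
    denom = sum(s) + sum(y)
    return 1.0 if denom == 0 else 2.0 * inter / denom

def expected(metric, s):
    return sum(prob(y) * metric(s, y) for y in product([0, 1], repeat=n))

candidates = list(product([0, 1], repeat=n))
best_acc = max(candidates, key=lambda s: expected(accuracy, s))
best_dice = max(candidates, key=lambda s: expected(dice, s))

# Insight (ii) of the abstract: volume(best_acc) <= volume(best_dice).
print("Accuracy-optimal volume:", sum(best_acc))
print("Dice-optimal volume:    ", sum(best_dice))
print("Expected target volume: ", sum(p))
```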
Related papers
- Transformer-based end-to-end classification of variable-length
volumetric data [4.053910482393197]
We propose an end-to-end Transformer-based framework that allows data of variable length to be classified in an efficient fashion.
We evaluate the proposed approach in retinal OCT volume classification and achieve a 21.96% average improvement on a 9-class diagnostic task.
arXiv Detail & Related papers (2023-07-13T10:19:04Z)
- Marginal Thresholding in Noisy Image Segmentation [3.609538870261841]
It is shown that optimal solutions to the loss functions soft-Dice and cross-entropy diverge as the level of noise increases.
This raises the question of whether the decrease in performance seen when using cross-entropy, as compared to soft-Dice, is caused by using the wrong threshold.
arXiv Detail & Related papers (2023-04-08T22:27:36Z)
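To make the threshold question above concrete, here is a small hypothetical sketch (not the paper's experiment) that sweeps thresholds over a synthetic soft prediction and compares the Dice score at the default 0.5 cutoff with the Dice-maximizing cutoff; all arrays are made up for illustration.

```python
# Hypothetical threshold sweep: default 0.5 cutoff vs. Dice-maximizing cutoff.
import numpy as np

rng = np.random.default_rng(0)
target = (rng.random(10_000) < 0.3).astype(int)                   # made-up binary reference
soft = np.clip(target * 0.4 + rng.random(10_000) * 0.6, 0.0, 1.0)  # made-up soft scores

def dice(pred, ref):
    inter = np.sum(pred * ref)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom

thresholds = np.linspace(0.05, 0.95, 19)
scores = [dice((soft >= t).astype(int), target) for t in thresholds]
best_t = thresholds[int(np.argmax(scores))]

print("Dice at threshold 0.5:", dice((soft >= 0.5).astype(int), target))
print("Best threshold:", best_t, "with Dice:", max(scores))
```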
- Noisy Image Segmentation With Soft-Dice [3.2116198597240846]
It is shown that a sequence of soft segmentations converging to optimal soft-Dice also converges to optimal Dice when converted to hard segmentations using thresholding.
This is an important result because soft-Dice is often used as a proxy for maximizing the Dice metric.
arXiv Detail & Related papers (2023-04-03T08:46:56Z)
- Variable Importance Matching for Causal Inference [73.25504313552516]
We describe a general framework called Model-to-Match that achieves these goals.
Model-to-Match uses variable importance measurements to construct a distance metric.
We operationalize the Model-to-Match framework with LASSO.
arXiv Detail & Related papers (2023-02-23T00:43:03Z)
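As a rough, hedged illustration of building a matching distance from variable importance (one plausible reading of the summary above, not necessarily the authors' exact procedure), the sketch below fits a LASSO on synthetic data and uses the absolute coefficients as feature weights when matching treated units to controls.

```python
# Hedged sketch of importance-weighted matching; all data are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                               # made-up covariates
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=200)    # made-up outcome
treated = rng.random(200) < 0.5                             # made-up treatment flag

weights = np.abs(Lasso(alpha=0.1).fit(X, y).coef_)          # variable importance

def weighted_dist(a, b):
    return np.sqrt(np.sum(weights * (a - b) ** 2))

# Match each treated unit to its nearest control under the weighted distance.
controls = X[~treated]
matches = [int(np.argmin([weighted_dist(x, c) for c in controls]))
           for x in X[treated]]
print("Learned feature weights:", np.round(weights, 2))
print("First five matched control indices:", matches[:5])
```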
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
- Theoretical analysis and experimental validation of volume bias of soft Dice
optimized segmentation maps in the context of inherent uncertainty [6.692460499366963]
Recent segmentation methods use a differentiable surrogate metric, such as soft Dice, as part of the loss function during the learning phase.
We first briefly describe how to derive volume estimates from a segmentation that is, potentially, inherently uncertain or ambiguous.
We find that, even though soft Dice optimization leads to an improved performance with respect to the Dice score and other measures, it may introduce a volume bias for tasks with high inherent uncertainty.
arXiv Detail & Related papers (2022-11-08T11:04:52Z)
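To make the notion of volume bias concrete, the sketch below computes the standard soft Dice loss on a synthetic probability map and contrasts two common volume estimates: the sum of probabilities and the volume of the thresholded map. This is a generic illustration of the loss and the quantities involved, not the paper's code or data.

```python
# Standard soft Dice loss on a probability map, plus two volume estimates.
import numpy as np

def soft_dice_loss(prob, target, eps=1e-6):
    """1 - soft Dice between a probability map and a binary target."""
    inter = np.sum(prob * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(prob) + np.sum(target) + eps)

rng = np.random.default_rng(0)
target = (rng.random((64, 64)) < 0.2).astype(float)               # made-up label
prob = np.clip(target * 0.7 + rng.random((64, 64)) * 0.3, 0, 1)   # made-up prediction

print("soft Dice loss:      ", soft_dice_loss(prob, target))
print("expected-volume est.:", prob.sum())           # sum of probabilities
print("thresholded volume:  ", (prob >= 0.5).sum())  # hard-segmentation volume
print("target volume:       ", target.sum())
```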
- Assessing Data Efficiency in Task-Oriented Semantic Parsing [54.87705549021248]
We introduce a four-stage protocol which gives an approximate measure of how much in-domain "target" data a parser requires to achieve a certain quality bar.
We apply our protocol in two real-world case studies illustrating its flexibility and applicability to practitioners in task-oriented semantic parsing.
arXiv Detail & Related papers (2021-07-10T02:43:16Z)
- Random Embeddings with Optimal Accuracy [0.0]
This work constructs Johnson-Lindenstrauss embeddings with best accuracy, as measured by variance, mean-squared error and exponential length distortion.
arXiv Detail & Related papers (2020-12-31T19:00:31Z)
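For context, the classical Gaussian Johnson-Lindenstrauss construction (not the optimized embeddings proposed in the paper above) can be sketched in a few lines: pairwise distances are approximately preserved after a scaled random projection.

```python
# Classical Gaussian Johnson-Lindenstrauss projection; dimensions are made up.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1000, 128                    # points, original dim, target dim
X = rng.normal(size=(n, d))
R = rng.normal(size=(d, k)) / np.sqrt(k)   # scaled Gaussian projection matrix
Y = X @ R

i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
print("original distance  :", orig)
print("projected distance :", proj)
print("relative distortion:", abs(proj - orig) / orig)
```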
- Full Matching on Low Resolution for Disparity Estimation [84.45201205560431]
A Multistage Full Matching disparity estimation scheme (MFM) is proposed in this work.
We decouple all similarity scores directly from the low-resolution 4D volume step by step, instead of estimating a low-resolution 3D cost volume.
Experiment results demonstrate that the proposed method achieves more accurate disparity estimation results and outperforms state-of-the-art methods on Scene Flow, KITTI 2012 and KITTI 2015 datasets.
arXiv Detail & Related papers (2020-12-10T11:11:23Z)
- Bayesian Bits: Unifying Quantization and Pruning [73.27732135853243]
We introduce Bayesian Bits, a practical method for joint mixed precision quantization and pruning through gradient based optimization.
We experimentally validate our proposed method on several benchmark datasets and show that we can learn pruned, mixed precision networks.
arXiv Detail & Related papers (2020-05-14T16:00:34Z)
- Compositional ADAM: An Adaptive Compositional Solver [69.31447856853833]
C-ADAM is the first adaptive solver for compositional problems involving a non-linear functional nesting of expected values.
We prove that C-ADAM converges to a stationary point in $\mathcal{O}(\delta^{-2.25})$ with $\delta$ being a precision parameter.
arXiv Detail & Related papers (2020-02-10T14:00:45Z)