A Generalized Surface Loss for Reducing the Hausdorff Distance in Medical Imaging Segmentation
- URL: http://arxiv.org/abs/2302.03868v3
- Date: Wed, 24 Jan 2024 02:47:34 GMT
- Title: A Generalized Surface Loss for Reducing the Hausdorff Distance in Medical Imaging Segmentation
- Authors: Adrian Celaya, Beatrice Riviere, and David Fuentes
- Abstract summary: We propose a novel loss function to minimize Hausdorff-based metrics with more desirable numerical properties than current methods.
Our loss function outperforms other losses when tested on the LiTS and BraTS datasets using the state-of-the-art nnUNet architecture.
- Score: 1.2289361708127877
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Within medical imaging segmentation, the Dice coefficient and Hausdorff-based
metrics are standard measures of success for deep learning models. However,
modern loss functions for medical image segmentation often only consider the
Dice coefficient or similar region-based metrics during training. As a result,
segmentation architectures trained with such loss functions risk achieving
high accuracy on the Dice coefficient but low accuracy on
Hausdorff-based metrics. Low accuracy on Hausdorff-based metrics can be
problematic for applications such as tumor segmentation, where such benchmarks
are crucial. For example, high Dice scores accompanied by significant Hausdorff
errors could indicate that the predictions fail to detect small tumors. We
propose the Generalized Surface Loss function, a novel loss function to
minimize Hausdorff-based metrics with more desirable numerical properties than
current methods and with weighting terms for class imbalance. Our loss function
outperforms other losses when tested on the LiTS and BraTS datasets using the
state-of-the-art nnUNet architecture. These results suggest we can improve
medical imaging segmentation accuracy with our novel loss function.
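The exact formulation of the Generalized Surface Loss is given in the paper; as a hedged illustration of the underlying idea only, the sketch below weights voxel-wise errors by a precomputed signed distance transform of the ground-truth boundary, so mistakes far from the true surface (exactly the errors that inflate Hausdorff-based metrics) are penalized most, with per-class weights for imbalance. The normalization and weighting details here are assumptions, not the authors' reference implementation.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance to the label boundary: positive outside
    the object, negative inside. mask: binary array for one class."""
    mask = mask.astype(bool)
    if not mask.any() or mask.all():
        return np.zeros(mask.shape, dtype=np.float32)  # no boundary exists
    return (distance_transform_edt(~mask)
            - distance_transform_edt(mask)).astype(np.float32)

def generalized_surface_loss_sketch(pred, target, dist, class_weights):
    """Illustrative distance-weighted surface loss (assumed form).
    pred/target: (B, C, *spatial) soft predictions and one-hot labels,
    dist:        matching precomputed signed distance maps,
    class_weights: (C,) weights for class imbalance.
    The loss is 0 for a perfect prediction and grows as errors occur
    farther from the true boundary, where |dist| is largest."""
    w = class_weights.view(1, -1, *([1] * (pred.dim() - 2)))
    num = (w * dist * (1.0 - (target + pred))).pow(2).sum()
    den = (w * dist).pow(2).sum().clamp_min(1e-8)
    return 1.0 - num / den
```

Because the distance maps depend only on the labels, they can be computed once before training, so a loss of this form adds little overhead per optimization step.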
Related papers
- Dimensionality Reduction and Nearest Neighbors for Improving Out-of-Distribution Detection in Medical Image Segmentation [1.2873975765521795]
This work applied the Mahalanobis distance (MD) post hoc to the bottleneck features of four Swin UNETR and nnU-Net models that segmented the liver.
Images on which the models failed were detected with high performance and minimal computational load.
arXiv Detail & Related papers (2024-08-05T18:24:48Z)
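As an illustration of the post-hoc Mahalanobis-distance screening described in the entry above, a minimal sketch follows; the function names are ours, and the paper additionally explores dimensionality reduction (e.g. PCA) and nearest-neighbor variants on the features before scoring.

```python
import numpy as np

def fit_gaussian(train_feats: np.ndarray):
    """train_feats: (N, D) in-distribution bottleneck features, e.g. pooled
    encoder activations (optionally dimensionality-reduced first)."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False)
    prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized
    return mu, prec

def mahalanobis_score(feat: np.ndarray, mu: np.ndarray, prec: np.ndarray):
    """Larger score => the image is more likely out-of-distribution,
    i.e. one the segmentation model may fail on."""
    d = feat - mu
    return float(np.sqrt(d @ prec @ d))
```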
- Reduced Jeffries-Matusita distance: A Novel Loss Function to Improve Generalization Performance of Deep Classification Models [0.0]
We introduce a distance called Reduced Jeffries-Matusita as a loss function for training deep classification models to reduce overfitting.
The results show that the new distance measure significantly stabilizes the training process, enhances generalization, and improves model performance on accuracy and F1-score.
arXiv Detail & Related papers (2024-03-13T10:51:38Z)
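For context, the classical Jeffries-Matusita distance between a softmax prediction and a one-hot target can itself serve as a loss; the paper's "Reduced" variant modifies this form, so the sketch below shows only the classical version.

```python
import torch
import torch.nn.functional as F

def jm_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Classical Jeffries-Matusita distance between prediction p and
    one-hot target q: JM = sqrt(2 * (1 - sum_i sqrt(p_i * q_i))).
    logits: (B, C); target: (B,) integer class indices."""
    p = F.softmax(logits, dim=1)
    q = F.one_hot(target, num_classes=p.shape[1]).float()
    bc = (p * q).sqrt().sum(dim=1)  # Bhattacharyya coefficient
    return torch.sqrt(2.0 * (1.0 - bc) + 1e-8).mean()
```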
- Revisiting Evaluation Metrics for Semantic Segmentation: Optimization and Evaluation of Fine-grained Intersection over Union [113.20223082664681]
We propose the use of fine-grained mIoUs along with corresponding worst-case metrics.
These fine-grained metrics offer less bias towards large objects, richer statistical information, and valuable insights into model and dataset auditing.
Our benchmark study highlights the necessity of not basing evaluations on a single metric and confirms that fine-grained mIoUs reduce the bias towards large objects.
arXiv Detail & Related papers (2023-10-30T03:45:15Z)
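One simple way to see the large-object bias that fine-grained mIoUs address is to compare pooling intersections and unions over the whole dataset against averaging per image. The per-image variant below is purely illustrative; the paper defines its own family of fine-grained mIoUs.

```python
import numpy as np

def dataset_miou(preds, gts, num_classes):
    """Standard mIoU: pool intersections/unions over the dataset, so a few
    large objects dominate the score."""
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for p, g in zip(preds, gts):
        for c in range(num_classes):
            inter[c] += np.sum((p == c) & (g == c))
            union[c] += np.sum((p == c) | (g == c))
    valid = union > 0
    return float((inter[valid] / union[valid]).mean())

def per_image_miou(preds, gts, num_classes):
    """Illustrative fine-grained variant: mIoU per image, then averaged,
    so each image (and its small objects) counts equally."""
    scores = []
    for p, g in zip(preds, gts):
        ious = [np.sum((p == c) & (g == c)) / u
                for c in range(num_classes)
                if (u := np.sum((p == c) | (g == c))) > 0]
        scores.append(np.mean(ious))
    return float(np.mean(scores))
```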
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
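A conceptual sketch of the entry above: normalize activations with a K-component Gaussian mixture rather than a single per-channel Gaussian. The paper's compound batch normalization estimates and updates the mixture parameters inside the network; parameter fitting is omitted here and the soft-assignment scheme is an assumption.

```python
import torch

def mixture_normalize(x, means, vars_, weights, eps=1e-5):
    """Soft-assign each activation to one of K Gaussian components, then
    standardize it with that component's statistics.
    x: tensor of any shape; means, vars_, weights: (K,) mixture params."""
    x_ = x.unsqueeze(-1)                                # (..., 1)
    log_p = -0.5 * ((x_ - means) ** 2 / (vars_ + eps)
                    + torch.log(vars_ + eps))           # (..., K)
    r = torch.softmax(log_p + torch.log(weights + eps), dim=-1)
    mu = (r * means).sum(-1)                            # soft per-element mean
    sd = torch.sqrt((r * vars_).sum(-1) + eps)
    return (x - mu) / sd
```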
- Impact of loss function in Deep Learning methods for accurate retinal vessel segmentation [1.1470070927586016]
We compare Binary Cross Entropy, Dice, Tversky, and Combo losses across three deep learning architectures (U-Net, Attention U-Net, and Nested UNet) on the DRIVE dataset.
The results show that the choice of loss function has a significant effect on segmentation performance.
arXiv Detail & Related papers (2022-06-01T14:47:18Z)
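Of the losses compared above, Tversky is the one that explicitly trades off false positives against false negatives. A minimal sketch using the common parameterization of Salehi et al. follows; the alpha/beta defaults are conventional choices, not necessarily those used in the paper.

```python
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Soft Tversky loss: TI = TP / (TP + alpha*FP + beta*FN).
    beta > alpha penalizes false negatives more, which suits thin,
    sparse structures such as retinal vessels.
    pred: sigmoid probabilities; target: binary mask of the same shape."""
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```

With alpha = beta = 0.5 this reduces to the soft Dice loss, which is why the two are often compared head to head.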
- blob loss: instance imbalance aware loss functions for semantic segmentation [6.2334511723202]
We propose a novel family of loss functions, blob loss, aimed at maximizing instance-level detection metrics.
We extensively evaluate a DSC-based blob loss in five complex 3D semantic segmentation tasks.
arXiv Detail & Related papers (2022-05-17T10:13:27Z)
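A hedged sketch of the instance-level idea behind blob loss: score each connected component of the ground truth separately, masking out the other components, so that small instances weigh as much as large ones. The masking scheme below follows the general idea rather than the paper's exact recipe.

```python
import numpy as np
import torch
from scipy.ndimage import label

def blob_dice_loss(pred: torch.Tensor, target: np.ndarray, eps=1e-6):
    """pred: sigmoid probabilities (CPU tensor); target: binary numpy mask
    of the same shape. One soft Dice term per ground-truth connected
    component, averaged so every instance counts equally."""
    comps, n = label(target)                 # connected components
    if n == 0:
        return pred.sum() * 0.0              # nothing to segment
    losses = []
    for k in range(1, n + 1):
        blob = torch.from_numpy((comps == k).astype(np.float32))
        other = torch.from_numpy(((comps > 0) & (comps != k)).astype(np.float32))
        p = pred * (1.0 - other)             # ignore the other blobs
        dice = (2 * (p * blob).sum() + eps) / (p.sum() + blob.sum() + eps)
        losses.append(1.0 - dice)
    return torch.stack(losses).mean()
```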
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
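A minimal sketch of a Gaussian-mixture classification loss in the spirit described above, with identity covariances and the classification margin omitted for brevity (both are part of the paper's full formulation):

```python
import torch
import torch.nn as nn

class GMLossSketch(nn.Module):
    """Each class k gets a learnable mean mu_k in feature space. Logits are
    negative squared distances to the means; training combines cross
    entropy with a likelihood term pulling features to their class mean."""
    def __init__(self, feat_dim: int, num_classes: int, lam: float = 0.1):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.1)
        self.lam = lam

    def forward(self, feats: torch.Tensor, labels: torch.Tensor):
        # (B, K) squared distances to each class mean
        d2 = ((feats[:, None, :] - self.means[None]) ** 2).sum(-1)
        ce = nn.functional.cross_entropy(-d2, labels)
        lk = d2.gather(1, labels[:, None]).mean()   # likelihood regularizer
        return ce + self.lam * lk
```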
- Optimization for Medical Image Segmentation: Theory and Practice when evaluating with Dice Score or Jaccard Index [25.04858968806884]
We investigate the relation within the group of metric-sensitive loss functions.
We find that the Dice score and Jaccard index approximate each other both relatively and absolutely.
We verify these results empirically in an extensive validation on six medical segmentation tasks.
arXiv Detail & Related papers (2020-10-26T11:45:55Z)
- Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation [56.343646789922545]
We propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric.
Experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently.
arXiv Detail & Related papers (2020-10-15T17:59:08Z)
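The searchable-surrogate idea can be shown with a toy example: smooth the non-differentiable parts of a metric with a parameterized function and treat those parameters as the search space. This is a deliberate simplification; Auto Seg-Loss searches much richer parameterizations than the two scalars used here.

```python
import torch

def soft_iou_surrogate(logits, target, theta):
    """logits/target: same-shape tensors (target binary).
    theta = (scale, shift): the searchable smoothing parameters."""
    scale, shift = theta
    p = torch.sigmoid(scale * (logits - shift))   # smoothed indicator
    inter = (p * target).sum()
    union = p.sum() + target.sum() - inter
    return 1.0 - inter / union.clamp_min(1e-6)

# Outer loop (schematic): sample theta, train briefly with the surrogate,
# evaluate the true IoU on validation data, and keep the best theta.
```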
- Collaborative Boundary-aware Context Encoding Networks for Error Map Prediction [65.44752447868626]
We propose collaborative boundary-aware context encoding networks, called AEP-Net, for the error-map prediction task.
Specifically, we propose a collaborative feature transformation branch for better feature fusion between images and masks and for precise localization of error regions.
The AEP-Net achieves average DSCs of 0.8358 and 0.8164 on the error prediction task and shows a high Pearson correlation coefficient of 0.9873.
arXiv Detail & Related papers (2020-06-25T12:42:01Z)
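To make the error-map prediction task concrete, here is a toy setup; this is not the AEP-Net architecture, just the task framing: a small CNN takes the image and a model's predicted mask and is trained to predict where that mask disagrees with the ground truth.

```python
import torch
import torch.nn as nn

class ErrorMapNet(nn.Module):
    """Takes a grayscale image and a predicted mask, returns a logit map of
    where the prediction is likely wrong. The training target is the
    disagreement mask (prediction XOR ground truth), e.g. trained with
    nn.BCEWithLogitsLoss."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 1),
        )

    def forward(self, image: torch.Tensor, pred_mask: torch.Tensor):
        return self.net(torch.cat([image, pred_mask], dim=1))
```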
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.