Mitigating False Predictions In Unreasonable Body Regions
- URL: http://arxiv.org/abs/2404.15718v1
- Date: Wed, 24 Apr 2024 08:11:18 GMT
- Title: Mitigating False Predictions In Unreasonable Body Regions
- Authors: Constantin Ulrich, Catherine Knobloch, Julius C. Holzschuh, Tassilo Wald, Maximilian R. Rokuss, Maximilian Zenk, Maximilian Fischer, Michael Baumgartner, Fabian Isensee, Klaus H. Maier-Hein
- Abstract summary: We propose a novel loss function that penalizes predictions in implausible body regions.
It is realized with a Body Part Regression model that generates axial slice positional scores.
It effectively mitigates false positive tumor predictions by up to 85% and significantly enhances overall segmentation performance.
- Score: 0.921264855324451
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite considerable strides in developing deep learning models for 3D medical image segmentation, the challenge of effectively generalizing across diverse image distributions persists. While domain generalization is acknowledged as vital for robust application in clinical settings, the challenges stemming from training with a limited Field of View (FOV) remain unaddressed. This limitation leads to false predictions when models are applied to body regions beyond the FOV of the training data. In response to this problem, we propose a novel loss function that penalizes predictions in implausible body regions, applicable in both single-dataset and multi-dataset training schemes. It is realized with a Body Part Regression model that generates axial slice positional scores. Through comprehensive evaluation using a test set featuring varying FOVs, our approach demonstrates remarkable improvements in generalization capabilities. It effectively mitigates false positive tumor predictions by up to 85% and significantly enhances overall segmentation performance.
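The abstract does not give the loss in code, but the idea can be sketched. Assuming (as the paper describes) a Body Part Regression model that assigns each axial slice a positional score, and a plausible score interval per target structure, a minimal penalty on foreground predictions in implausible slices might look like the following. The function name, array shapes, and interval convention are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def implausible_region_loss(pred_probs, slice_scores, valid_range):
    """Penalize foreground probability on axial slices whose body-part
    score falls outside the plausible interval for the target structure.

    pred_probs:   (S, H, W) predicted foreground probabilities,
                  one 2D map per axial slice.
    slice_scores: (S,) positional score per slice, e.g. from a Body
                  Part Regression model (lower = caudal, higher = cranial).
    valid_range:  (lo, hi) plausible score interval for the structure.
    """
    lo, hi = valid_range
    # Boolean mask of slices that lie outside the plausible body region.
    implausible = (slice_scores < lo) | (slice_scores > hi)
    if not implausible.any():
        return 0.0
    # Any foreground probability on an implausible slice is, by
    # assumption, a false positive; penalize its mean magnitude.
    return float(pred_probs[implausible].mean())

# In training, such a term would typically be added to a standard
# segmentation loss, e.g. total = dice_ce_loss + lam * implausible_region_loss(...).
```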
Related papers
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation [47.95611203419802]
Foundation models for vision and language, pre-trained on extensive sets of natural image and text data, have emerged as a promising approach.
We compare the generalization performance to unseen domains of various pre-trained models after being fine-tuned on the same in-distribution dataset.
We further developed a new Bayesian uncertainty estimation for frozen models and used it as an indicator to characterize the model's performance on out-of-distribution data.
arXiv Detail & Related papers (2023-11-18T14:52:10Z) - Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves the network generalization ability on multiple vision tasks.
Our methods are simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z) - Anatomy-guided domain adaptation for 3D in-bed human pose estimation [62.3463429269385]
3D human pose estimation is a key component of clinical monitoring systems.
We present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain.
Our method consistently outperforms various state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-11-22T11:34:51Z) - Segmentation Consistency Training: Out-of-Distribution Generalization for Medical Image Segmentation [2.0978389798793873]
Generalizability is seen as one of the major challenges in deep learning, in particular in the domain of medical imaging.
We introduce Consistency Training, a training procedure and alternative to data augmentation.
We demonstrate that Consistency Training outperforms conventional data augmentation on several out-of-distribution datasets.
arXiv Detail & Related papers (2022-05-30T20:57:15Z) - Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z) - How Reliable Are Out-of-Distribution Generalization Methods for Medical
Image Segmentation? [0.46023882211671957]
We evaluate OoD Generalization solutions for the problem of hippocampus segmentation in MR data using both fully- and semi-supervised training.
Only the V-REx loss stands out: it remains easy to tune and outperforms a standard U-Net in most cases.
arXiv Detail & Related papers (2021-09-03T10:15:44Z) - Weakly-Supervised Universal Lesion Segmentation with Regional Level Set
Loss [16.80758525711538]
We present a novel weakly-supervised universal lesion segmentation method based on the High-Resolution Network (HRNet).
AHRNet provides advanced high-resolution deep image features by involving a decoder, dual-attention and scale attention mechanisms.
Our method achieves the best performance on the publicly large-scale DeepLesion dataset and a hold-out test set.
arXiv Detail & Related papers (2021-05-03T23:33:37Z) - Shape-aware Meta-learning for Generalizing Prostate MRI Segmentation to
Unseen Domains [68.73614619875814]
We present a novel shape-aware meta-learning scheme to improve the model generalization in prostate MRI segmentation.
Experimental results show that our approach outperforms many state-of-the-art generalization methods consistently across all six settings of unseen domains.
arXiv Detail & Related papers (2020-07-04T07:56:02Z) - Progressive Adversarial Semantic Segmentation [11.323677925193438]
Deep convolutional neural networks can perform exceedingly well given full supervision.
The success of such fully-supervised models for various image analysis tasks is limited by the availability of massive amounts of labeled data.
We propose a novel end-to-end medical image segmentation model, namely Progressive Adversarial Semantic Segmentation (PASS).
arXiv Detail & Related papers (2020-05-08T22:48:00Z)