Certification of Deep Learning Models for Medical Image Segmentation
- URL: http://arxiv.org/abs/2310.03664v1
- Date: Thu, 5 Oct 2023 16:40:33 GMT
- Title: Certification of Deep Learning Models for Medical Image Segmentation
- Authors: Othmane Laousy, Alexandre Araujo, Guillaume Chassagnon, Nikos
Paragios, Marie-Pierre Revel, Maria Vakalopoulou
- Abstract summary: We present for the first time a certified segmentation baseline for medical imaging based on randomized smoothing and diffusion models.
Our results show that leveraging the power of denoising diffusion probabilistic models helps us overcome the limits of randomized smoothing.
- Score: 44.177565298565966
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In medical imaging, segmentation models have improved significantly
over the past decade and are now used daily in clinical practice. However,
similar to classification models, segmentation models are affected by
adversarial attacks. In a safety-critical field like healthcare, certifying
model predictions is of the utmost importance. Randomized smoothing has recently
been introduced and provides a framework to certify models and obtain
theoretical guarantees. In this paper, we present for the first time a
certified segmentation baseline for medical imaging based on randomized
smoothing and diffusion models. Our results show that leveraging the power of
denoising diffusion probabilistic models helps us overcome the limits of
randomized smoothing. We conduct extensive experiments on five public datasets
of chest X-rays, skin lesions, and colonoscopies, and empirically show that we
are able to maintain high certified Dice scores even for highly perturbed
images. Our work represents the first attempt to certify medical image
segmentation models, and we aspire for it to set a foundation for future
benchmarks in this crucial and largely uncharted area.
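The abstract describes the overall recipe but not its mechanics, so the sketch below illustrates per-pixel randomized smoothing for segmentation: perturb the input with Gaussian noise, optionally denoise each copy (the role the diffusion model plays here), run the segmentation network, and take a per-pixel majority vote with a confidence bound. The names (`smoothed_segmentation`, `segment_fn`, `denoise_fn`) and the normal-approximation bound are illustrative assumptions, not the authors' implementation.

```python
# Minimal conceptual sketch of randomized smoothing for segmentation.
# `segment_fn` and `denoise_fn` are placeholders for a trained segmentation
# network and a diffusion-based denoiser; they are NOT the paper's code.
import numpy as np
from scipy.stats import norm


def smoothed_segmentation(image, segment_fn, denoise_fn=None,
                          sigma=0.25, n_samples=100, alpha=0.05, seed=0):
    """Per-pixel majority vote over Gaussian-perturbed copies of `image`.

    Returns the smoothed binary mask and a per-pixel radius: pixels whose
    lower confidence bound on the vote rate stays above 0.5 get radius
    sigma * Phi^{-1}(p_lower), following the standard randomized-smoothing
    argument; other pixels get radius 0 (abstain).
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros(image.shape, dtype=np.int64)
    for _ in range(n_samples):
        noisy = image + rng.normal(0.0, sigma, size=image.shape)
        if denoise_fn is not None:            # role of the diffusion model:
            noisy = denoise_fn(noisy, sigma)  # map the noisy copy back toward clean data
        votes += (segment_fn(noisy) > 0.5).astype(np.int64)

    p_hat = votes / n_samples
    # Normal-approximation lower confidence bound on the vote rate; a formal
    # certificate would use an exact (e.g. Clopper-Pearson) binomial bound.
    p_lower = p_hat - norm.ppf(1.0 - alpha) * np.sqrt(p_hat * (1.0 - p_hat) / n_samples)
    mask = p_hat > 0.5
    radius = np.where(p_lower > 0.5,
                      sigma * norm.ppf(np.clip(p_lower, 0.5, 1.0 - 1e-9)),
                      0.0)
    return mask, radius


if __name__ == "__main__":
    # Toy demo: "segment" the bright square of a synthetic image, using an
    # identity placeholder in place of a trained network.
    toy = np.zeros((32, 32))
    toy[8:24, 8:24] = 1.0
    mask, radius = smoothed_segmentation(toy, segment_fn=lambda x: x,
                                         sigma=0.25, n_samples=200)
    print("foreground pixels:", int(mask.sum()),
          "max certified radius:", float(radius.max()))
```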
Related papers
- On the Out of Distribution Robustness of Foundation Models in Medical
Image Segmentation [47.95611203419802]
Foundation models for vision and language, pre-trained on extensive sets of natural image and text data, have emerged as a promising approach.
We compare the generalization performance to unseen domains of various pre-trained models after being fine-tuned on the same in-distribution dataset.
We further developed a new Bayesian uncertainty estimation for frozen models and used them as an indicator to characterize the model's performance on out-of-distribution data.
arXiv Detail & Related papers (2023-11-18T14:52:10Z)
- From CNN to Transformer: A Review of Medical Image Segmentation Models [7.3150850275578145]
Deep learning for medical image segmentation has become a prevalent trend.
In this paper, we conduct a survey of the most representative four medical image segmentation models in recent years.
We theoretically analyze the characteristics of these models and quantitatively evaluate their performance on two benchmark datasets.
arXiv Detail & Related papers (2023-08-10T02:48:57Z)
- Empirical Analysis of a Segmentation Foundation Model in Prostate
Imaging [9.99042549094606]
We consider a recently developed foundation model for medical image segmentation, UniverSeg.
We conduct an empirical evaluation study in the context of prostate imaging and compare it against the conventional approach of training a task-specific segmentation model.
arXiv Detail & Related papers (2023-07-06T20:00:52Z)
- Towards Better Certified Segmentation via Diffusion Models [62.21617614504225]
Segmentation models can be vulnerable to adversarial perturbations, which hinders their use in safety-critical decision systems such as healthcare or autonomous driving.
Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees.
In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models.
arXiv Detail & Related papers (2023-06-16T16:30:39Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion (a minimal sampling sketch follows this list).
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Analysing the effectiveness of a generative model for semi-supervised
medical image segmentation [23.898954721893855]
State-of-the-art in automated segmentation remains supervised learning, employing discriminative models such as U-Net.
Semi-supervised learning (SSL) attempts to leverage the abundance of unlabelled data to obtain more robust and reliable models.
Deep generative models such as SemanticGAN are viable alternatives for tackling challenging medical image segmentation problems.
arXiv Detail & Related papers (2022-11-03T15:19:59Z)
- Adapting Pretrained Vision-Language Foundational Models to Medical
Imaging Domains [3.8137985834223502]
Building generative models for medical images that faithfully depict clinical context may help alleviate the paucity of healthcare datasets.
We explore the sub-components of the Stable Diffusion pipeline to fine-tune the model to generate medical images.
Our best-performing model improves upon the Stable Diffusion baseline and can be conditioned to insert a realistic-looking abnormality on a synthetic radiology image.
arXiv Detail & Related papers (2022-10-09T01:43:08Z)
- Malignancy Prediction and Lesion Identification from Clinical
Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
The model first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates the likelihood of malignancy for each lesion, and through aggregation also generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
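For the "Ambiguous Medical Image Segmentation using Diffusion Models" entry above, the sketch below illustrates the general idea of drawing several masks from a stochastic segmenter and summarising them as a soft consensus plus a per-pixel uncertainty map. The `sample_mask_fn` placeholder and the jittered-threshold toy sampler are assumptions for illustration, not the paper's model or code.

```python
# Minimal sketch: sample several masks from a stochastic segmenter and
# summarise their spread. The sampler here is a placeholder, not the
# diffusion model described in the paper.
import numpy as np


def mask_distribution(image, sample_mask_fn, n_samples=8, seed=0):
    """Draw `n_samples` stochastic masks and summarise their spread.

    Returns the pixel-wise mean mask (soft consensus) and the pixel-wise
    variance, which highlights regions the sampler is ambiguous about.
    """
    rng = np.random.default_rng(seed)
    masks = np.stack([sample_mask_fn(image, rng) for _ in range(n_samples)])
    return masks.mean(axis=0), masks.var(axis=0)


if __name__ == "__main__":
    # Toy image whose intensity ramps from 0 to 1 left to right, so the
    # "true" boundary is genuinely ambiguous.
    toy = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
    # Placeholder sampler: thresholds at a randomly jittered level, standing
    # in for the mask-to-mask variability a diffusion sampler would produce.
    sampler = lambda img, rng: (img > rng.uniform(0.3, 0.7)).astype(float)
    mean_mask, uncertainty = mask_distribution(toy, sampler, n_samples=16)
    print("mean foreground fraction:", float(mean_mask.mean()),
          "max pixel variance:", float(uncertainty.max()))
```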
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.