Joint Modeling of Image and Label Statistics for Enhancing Model
Generalizability of Medical Image Segmentation
- URL: http://arxiv.org/abs/2206.04336v1
- Date: Thu, 9 Jun 2022 08:31:14 GMT
- Authors: Shangqi Gao, Hangqi Zhou, Yibo Gao, and Xiahai Zhuang
- Abstract summary: We propose a deep learning-based Bayesian framework, which jointly models image and label statistics.
We develop a variational Bayesian framework to infer the posterior distributions of these variables, including the contour, the basis, and the label.
Results on the task of cross-sequence cardiac MRI segmentation show that our method sets a new state of the art in model generalizability.
- Score: 14.106339318764372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although supervised deep-learning has achieved promising performance in
medical image segmentation, many methods cannot generalize well on unseen data,
limiting their real-world applicability. To address this problem, we propose a
deep learning-based Bayesian framework, which jointly models image and label
statistics, utilizing the domain-irrelevant contour of a medical image for
segmentation. Specifically, we first decompose an image into components of
contour and basis. Then, we model the expected label as a variable only related
to the contour. Finally, we develop a variational Bayesian framework to infer
the posterior distributions of these variables, including the contour, the
basis, and the label. The framework is implemented with neural networks, thus
is referred to as deep Bayesian segmentation. Results on the task of
cross-sequence cardiac MRI segmentation show that our method sets a new state of
the art in model generalizability. In particular, the BayeSeg model trained
on LGE MRI generalized well to T2 images and outperformed other models by
large margins, i.e., by over 0.47 in average Dice. Our code is available
at https://zmiclab.github.io/projects.html.
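The decompose-then-infer pipeline described above can be illustrated with a toy sketch. This is not the authors' implementation: the function `decompose`, the box-filter kernel size `k`, and the use of a local mean as the smooth component are illustrative assumptions; it merely shows the idea of splitting an image into a smooth "basis" (appearance-like) part and a residual "contour" (shape-like) part that a downstream segmenter could consume.

```python
import numpy as np

def decompose(image, k=5):
    """Toy split of a 2D image into a smooth 'basis' and a residual
    'contour', loosely analogous to the paper's separation of
    domain-specific appearance from domain-stable shape information.

    The basis is a k-by-k box-filter (local mean) of the image with
    edge padding; the contour is whatever the smoothing removed.
    """
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    basis = np.empty(image.shape, dtype=float)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            basis[i, j] = padded[i:i + k, j:j + k].mean()
    contour = image - basis  # residual: edges and fine structure
    return contour, basis
```

By construction the two components sum back to the input, so no information is lost by the split; the actual method instead places hierarchical Bayesian priors on the components and infers them variationally with neural networks.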
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- ProMISe: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models [13.08275555017179]
We propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt.
We evaluate our model on two public datasets for colon and pancreas tumor segmentations.
arXiv Detail & Related papers (2023-10-30T16:49:03Z)
- BayeSeg: Bayesian Modeling for Medical Image Segmentation with Interpretable Generalizability [15.410162313242958]
We propose an interpretable Bayesian framework (BayeSeg) to enhance model generalizability for medical image segmentation.
Specifically, we first decompose an image into a spatial-correlated variable and a spatial-variant variable, assigning hierarchical Bayesian priors to explicitly force them to model the domain-stable shape and domain-specific appearance information respectively.
Finally, we develop a variational Bayesian framework to infer the posterior distributions of these explainable variables.
arXiv Detail & Related papers (2023-03-03T04:48:37Z)
- Unsupervised Deep Learning Meets Chan-Vese Model [77.24463525356566]
We propose an unsupervised image segmentation approach that integrates the Chan-Vese (CV) model with deep neural networks.
Our basic idea is to apply a deep neural network that maps the image into a latent space to alleviate the violation of the piecewise constant assumption in image space.
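The piecewise-constant assumption mentioned here is the heart of the Chan-Vese data term: each region is approximated by its mean intensity. A minimal sketch of that energy follows (the function name and the hard binary mask are assumptions for illustration; the full model evolves a level set and adds length/area regularization terms):

```python
import numpy as np

def chan_vese_energy(image, mask):
    """Data term of the two-phase Chan-Vese model.

    Approximates the foreground (mask True) and background (mask False)
    by their mean intensities c1 and c2, and returns the summed squared
    deviation from those means. The energy is zero exactly when the
    image is piecewise constant over the given partition.
    """
    inside = image[mask]
    outside = image[~mask]
    c1 = inside.mean() if inside.size else 0.0
    c2 = outside.mean() if outside.size else 0.0
    return ((inside - c1) ** 2).sum() + ((outside - c2) ** 2).sum()
```

When real images violate the piecewise-constant assumption, this energy stays high for every partition; mapping the image into a learned latent space where the assumption holds better is the paper's stated remedy.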
arXiv Detail & Related papers (2022-04-14T13:23:57Z)
- Unsupervised Domain Adaptation with Contrastive Learning for OCT Segmentation [49.59567529191423]
We propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains.
We jointly use supervised and contrastive learning, also introducing a contrastive pairing scheme that leverages similarity between nearby slices in 3D.
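The nearby-slice pairing idea can be made concrete with a toy sketch. The function `nearby_slice_pairs` and its `max_gap` parameter are hypothetical names, not the paper's API; the sketch only shows the underlying intuition that slices close together in a 3D volume share anatomy and can serve as positive pairs for contrastive learning:

```python
def nearby_slice_pairs(num_slices, max_gap=2):
    """Enumerate positive pairs (i, j) of slice indices with
    0 < j - i <= max_gap, i.e. slices near each other along the
    volume axis are treated as views of similar content."""
    pairs = []
    for i in range(num_slices):
        for j in range(i + 1, min(i + 1 + max_gap, num_slices)):
            pairs.append((i, j))
    return pairs
```

In a contrastive loss such pairs would be pulled together in embedding space, while slices from distant positions (or other volumes) act as negatives.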
arXiv Detail & Related papers (2022-03-07T19:02:26Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Modeling the Probabilistic Distribution of Unlabeled Data for One-shot Medical Image Segmentation [40.41161371507547]
We develop a data augmentation method for one-shot brain magnetic resonance imaging (MRI) segmentation.
Our method exploits only one labeled MRI image (the atlas) and a few unlabeled images.
Our method outperforms the state-of-the-art one-shot medical segmentation methods.
arXiv Detail & Related papers (2021-02-03T12:28:04Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.