Learning with Explicit Shape Priors for Medical Image Segmentation
- URL: http://arxiv.org/abs/2303.17967v2
- Date: Sun, 4 Jun 2023 15:07:55 GMT
- Title: Learning with Explicit Shape Priors for Medical Image Segmentation
- Authors: Xin You, Junjun He, Jie Yang, and Yun Gu
- Abstract summary: We propose a novel shape prior module (SPM) to promote the segmentation performance of UNet-based models.
Explicit shape priors consist of global and local shape priors.
Our proposed model achieves state-of-the-art performance.
- Score: 17.110893665132423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image segmentation is a fundamental task for medical image analysis
and surgical planning. In recent years, UNet-based networks have prevailed in
the field of medical image segmentation. However, convolutional neural networks
(CNNs) suffer from limited receptive fields, which fail to model the long-range
dependencies of organs or tumors. Besides, these models depend heavily on the
training of the final segmentation head, and existing methods cannot address
these two limitations well at the same time. Hence, in our work, we propose
a novel shape prior module (SPM), which can explicitly introduce shape priors
to promote the segmentation performance of UNet-based models. The explicit
shape priors consist of global and local shape priors. The former, with coarse
shape representations, provides networks with the capability to model global
contexts. The latter, with finer shape information, serves as additional guidance
that boosts segmentation performance and relieves the heavy dependence on
the learnable prototype in the segmentation head. To evaluate the effectiveness
of SPM, we conduct experiments on three challenging public datasets, and our
proposed model achieves state-of-the-art performance. Furthermore, SPM shows an
outstanding generalization ability on classic CNNs and recent Transformer-based
backbones, and can serve as a plug-and-play structure for segmentation tasks
on different datasets. Source code is available at
https://github.com/AlexYouXin/Explicit-Shape-Priors
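To make the plug-and-play idea concrete, below is a minimal sketch of how a shape-prior module could attach to a UNet-style decoder feature map. This is not the authors' implementation (see the repository above for that): the class name, prior-token layout, and head count are illustrative assumptions.

```python
# Hypothetical plug-and-play shape-prior module (illustrative sketch only;
# the actual SPM is in the repository linked above). Assumes 2D decoder
# features and that `channels` is divisible by `num_heads`.
import torch
import torch.nn as nn

class ShapePriorModule(nn.Module):
    """Learnable global shape priors inject long-range context via
    cross-attention; a conv branch stands in for finer local priors."""
    def __init__(self, channels: int, num_classes: int, prior_len: int = 32):
        super().__init__()
        # Global shape priors: a learnable token sequence per class.
        self.global_priors = nn.Parameter(
            torch.randn(num_classes * prior_len, channels))
        self.attn = nn.MultiheadAttention(channels, num_heads=4,
                                          batch_first=True)
        # Local shape priors: lightweight conv refinement of the fused map.
        self.local_refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape                       # decoder feature map
        tokens = feat.flatten(2).transpose(1, 2)      # (B, HW, C)
        priors = self.global_priors.unsqueeze(0).expand(b, -1, -1)
        # Feature tokens query the global priors for coarse, global context.
        ctx, _ = self.attn(tokens, priors, priors)    # (B, HW, C)
        fused = (tokens + ctx).transpose(1, 2).reshape(b, c, h, w)
        # Finer, spatially local guidance ahead of the segmentation head.
        return feat + self.local_refine(fused)
```

Dropping such a module between a decoder stage and the segmentation head (`feat = spm(feat)`) leaves the backbone untouched, which is what makes this kind of structure easy to bolt onto both CNN and Transformer backbones.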
Related papers
- Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation [49.5901368256326]
We propose a novel Domain-Adaptive Prompt framework for fine-tuning the Segment Anything Model (termed DAPSAM) for segmenting medical images.
Our DAPSAM achieves state-of-the-art performance on two medical image segmentation tasks with different modalities.
arXiv Detail & Related papers (2024-09-19T07:28:33Z) - ShapeMamba-EM: Fine-Tuning Foundation Model with Local Shape Descriptors and Mamba Blocks for 3D EM Image Segmentation [49.42525661521625]
This paper presents ShapeMamba-EM, a specialized fine-tuning method for 3D EM segmentation.
It is tested over a wide range of EM images, covering five segmentation tasks and 10 datasets.
arXiv Detail & Related papers (2024-08-26T08:59:22Z) - MAP: Domain Generalization via Meta-Learning on Anatomy-Consistent
Pseudo-Modalities [12.194439938007672]
We propose Meta learning on Anatomy-consistent Pseudo-modalities (MAP).
MAP improves model generalizability by learning structural features.
We evaluate our model on seven public datasets of various retinal imaging modalities.
arXiv Detail & Related papers (2023-09-03T22:56:22Z) - MIS-FM: 3D Medical Image Segmentation using Foundation Models Pretrained
on a Large-Scale Unannotated Dataset [14.823114726604853]
We propose a novel self-supervised learning strategy named Volume Fusion (VF) for pretraining 3D segmentation models.
VF forces the model to predict the fusion coefficient of each voxel, which is formulated as a self-supervised segmentation task without manual annotations (see the first sketch after this list).
Experiments with different downstream segmentation targets, including head and neck organs and thoracic/abdominal organs, showed that our pretrained model largely outperformed training from scratch.
arXiv Detail & Related papers (2023-06-29T13:22:13Z) - Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight initialization approach for hybrid medical image segmentation models.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z) - Learning from partially labeled data for multi-organ and tumor
segmentation [102.55303521877933]
We propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z) - Unsupervised Domain Adaptation through Shape Modeling for Medical Image
Segmentation [23.045760366698634]
We aim to model shape explicitly and use it to help medical image segmentation.
Previous methods proposed Variational Autoencoder (VAE)-based models to learn the distribution of shape for a particular organ.
We propose a new unsupervised domain adaptation pipeline based on a pseudo loss and a VAE reconstruction loss under a teacher-student learning paradigm.
arXiv Detail & Related papers (2022-07-06T09:16:42Z) - Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training
of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z) - MISSU: 3D Medical Image Segmentation via Self-distilling TransUNet [55.16833099336073]
We propose to self-distill a Transformer-based UNet for medical image segmentation.
It simultaneously learns global semantic information and local spatial-detailed features.
Our MISSU achieves the best performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2022-06-02T07:38:53Z) - Shape-aware Semi-supervised 3D Semantic Segmentation for Medical Images [24.216869988183092]
We propose a shape-aware semi-supervised segmentation strategy to leverage abundant unlabeled data and to enforce a geometric shape constraint on the segmentation output.
We develop a multi-task deep network that jointly predicts semantic segmentation and the signed distance map (SDM) of object surfaces (see the second sketch after this list).
Experiments show that our method outperforms current state-of-the-art approaches with improved shape estimation.
arXiv Detail & Related papers (2020-07-21T11:44:52Z)
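Two entries above describe mechanisms concrete enough to sketch. First, the Volume Fusion (VF) pretext task: the toy sketch below fuses two unlabeled volumes with a discrete per-voxel coefficient and trains a 3D segmentation network to recover the coefficient map. The patch-wise coefficient sampling, level count, and function names are my assumptions for illustration, not details from the paper.

```python
# Toy sketch of a Volume-Fusion-style pretext task: blend two unlabeled
# volumes with discrete per-voxel coefficients, then train the network to
# "segment" the coefficient map. Assumes volume dims divisible by `patch`.
import torch
import torch.nn.functional as F

def volume_fusion(vol_a, vol_b, num_levels: int = 5, patch: int = 8):
    """vol_a, vol_b: (B, 1, D, H, W) unlabeled volumes of equal shape.
    Returns the fused volume and voxel-wise coefficient labels."""
    b, _, d, h, w = vol_a.shape
    # Sample one discrete fusion level per low-resolution patch ...
    grid = torch.randint(num_levels, (b, 1, d // patch, h // patch, w // patch))
    # ... and upsample it into a voxel-wise label map.
    labels = F.interpolate(grid.float(), size=(d, h, w), mode="nearest").long()
    alpha = labels.float() / (num_levels - 1)     # coefficient in [0, 1]
    fused = alpha * vol_a + (1.0 - alpha) * vol_b
    return fused, labels.squeeze(1)               # labels: (B, D, H, W)

def vf_pretrain_step(model, vol_a, vol_b, optimizer):
    """One self-supervised step; `model` is any 3D segmentation network
    with `num_levels` output channels."""
    fused, labels = volume_fusion(vol_a, vol_b)
    loss = F.cross_entropy(model(fused), labels)  # (B, L, D, H, W) vs (B, D, H, W)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Because the labels are synthesized from the fusion process itself, no manual annotation is needed, yet the network trains against a dense, segmentation-shaped objective.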
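Second, the shape-aware multi-task entry directly above: a minimal sketch of a network that jointly predicts segmentation logits and a signed distance map from a shared backbone. Only the supervised part of the loss is shown; the dual-head wrapper, tanh bound, and weight `beta` are assumptions rather than the paper's exact design.

```python
# Minimal sketch of joint segmentation + signed-distance-map (SDM)
# prediction; illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHead3D(nn.Module):
    """Shared backbone with a segmentation head and an SDM head."""
    def __init__(self, backbone: nn.Module, feat_ch: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                   # any 3D encoder-decoder
        self.seg_head = nn.Conv3d(feat_ch, num_classes, kernel_size=1)
        self.sdm_head = nn.Sequential(             # SDM normalized to [-1, 1]
            nn.Conv3d(feat_ch, num_classes, kernel_size=1), nn.Tanh())

    def forward(self, x):
        feat = self.backbone(x)                    # (B, feat_ch, D, H, W)
        return self.seg_head(feat), self.sdm_head(feat)

def shape_aware_loss(seg_logits, sdm_pred, seg_gt, sdm_gt, beta: float = 0.3):
    """Supervised loss: cross-entropy on masks plus an MSE term that ties
    predictions to ground-truth signed distance maps (the shape constraint)."""
    return F.cross_entropy(seg_logits, seg_gt) + beta * F.mse_loss(sdm_pred, sdm_gt)
```

Regressing the SDM alongside the mask is what enforces the geometric constraint: the distance map encodes the object surface globally, so errors far from the boundary are still penalized.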