Multi-Level Global Context Cross Consistency Model for Semi-Supervised
Ultrasound Image Segmentation with Diffusion Model
- URL: http://arxiv.org/abs/2305.09447v2
- Date: Wed, 17 May 2023 13:35:27 GMT
- Title: Multi-Level Global Context Cross Consistency Model for Semi-Supervised
Ultrasound Image Segmentation with Diffusion Model
- Authors: Fenghe Tang, Jianrui Ding, Lingtao Wang, Min Xian, Chunping Ning
- Abstract summary: We propose a framework that uses images generated by a Latent Diffusion Model (LDM) as unlabeled images for semi-supervised learning.
Our approach enables the effective transfer of probability distribution knowledge to the segmentation network, resulting in improved segmentation accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image segmentation is a critical step in computer-aided diagnosis,
and convolutional neural networks are popular segmentation networks nowadays.
However, the inherent local operation characteristics make it difficult to
focus on the global contextual information of lesions with different positions,
shapes, and sizes. Semi-supervised learning can be used to learn from both
labeled and unlabeled samples, alleviating the burden of manual labeling.
However, obtaining a large number of unlabeled images in medical scenarios
remains challenging. To address these issues, we propose a Multi-level Global
Context Cross-consistency (MGCC) framework that uses images generated by a
Latent Diffusion Model (LDM) as unlabeled images for semi-supervised learning.
The framework consists of two stages. In the first stage, an LDM is used to
generate synthetic medical images, which reduces the workload of data
annotation and addresses privacy concerns associated with collecting medical
data. In the second stage, varying levels of global context noise perturbation
are added to the input of the auxiliary decoder, and output consistency is
maintained between decoders to improve the representation ability. Experiments
conducted on open-source breast ultrasound and private thyroid ultrasound
datasets demonstrate the effectiveness of our framework in bridging the
probability distribution and the semantic representation of the medical image.
Our approach enables the effective transfer of probability distribution
knowledge to the segmentation network, resulting in improved segmentation
accuracy. The code is available at
https://github.com/FengheTan9/Multi-Level-Global-Context-Cross-Consistency.
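As a rough illustration of the second stage's cross-consistency idea, the sketch below perturbs a toy feature map at several noise levels and penalizes disagreement between the main output and the perturbed auxiliary outputs. The `decode` stand-in, the noise scales, and the feature shapes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two probability maps."""
    return float(np.mean((a - b) ** 2))

def cross_consistency_loss(main_probs, aux_probs_list):
    """Average consistency loss between the main decoder's output
    and each noise-perturbed auxiliary decoder's output."""
    return sum(mse(main_probs, p) for p in aux_probs_list) / len(aux_probs_list)

def perturb_global_context(features, noise_scale, rng):
    """Add Gaussian noise of a given scale to encoder features,
    simulating one level of global context perturbation."""
    return features + rng.normal(0.0, noise_scale, size=features.shape)

rng = np.random.default_rng(0)
features = rng.random((4, 4))                # toy bottleneck feature map
decode = lambda f: 1.0 / (1.0 + np.exp(-f))  # sigmoid stand-in for a decoder

main_out = decode(features)
aux_outs = [decode(perturb_global_context(features, s, rng))
            for s in (0.1, 0.2, 0.3)]        # varying perturbation levels
loss = cross_consistency_loss(main_out, aux_outs)
```

Minimizing such a loss on unlabeled (here, LDM-generated) images pushes the decoders toward predictions that are stable under global context perturbation.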
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^2$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- TranSiam: Fusing Multimodal Visual Features Using Transformer for Medical Image Segmentation [4.777011444412729]
We propose a segmentation method suitable for multimodal medical images that can capture global information.
TranSiam is a 2D dual path network that extracts features of different modalities.
On the BraTS 2019 and BraTS 2020 multimodal datasets, TranSiam achieves a significant improvement in accuracy over other popular methods.
arXiv Detail & Related papers (2022-04-26T09:39:10Z)
- Cross-level Contrastive Learning and Consistency Constraint for Semi-supervised Medical Image Segmentation [46.678279106837294]
We propose a cross-level contrastive learning scheme to enhance the representation capacity for local features in semi-supervised medical image segmentation.
With the help of the cross-level contrastive learning and consistency constraint, the unlabelled data can be effectively explored to improve segmentation performance.
arXiv Detail & Related papers (2022-02-08T15:12:11Z)
- TransAttUnet: Multi-level Attention-guided U-Net with Transformer for Medical Image Segmentation [33.45471457058221]
This paper proposes a novel Transformer based medical image semantic segmentation framework called TransAttUnet.
In particular, we establish additional multi-scale skip connections between decoder blocks to aggregate the different semantic-scale upsampling features.
Our method consistently outperforms the state-of-the-art baselines.
arXiv Detail & Related papers (2021-07-12T09:17:06Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Realistic Adversarial Data Augmentation for MR Image Segmentation [17.951034264146138]
We propose an adversarial data augmentation method for training neural networks for medical image segmentation.
Our model generates plausible and realistic signal corruptions, modeling the intensity inhomogeneities caused by a common type of MR imaging artefact: the bias field.
We show that such an approach can improve the generalization ability and robustness of models, as well as provide significant improvements in low-data scenarios.
arXiv Detail & Related papers (2020-06-23T20:43:18Z)
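The bias-field corruption described in the last entry can be loosely sketched as a smooth multiplicative field applied to an image. The polynomial parameterization and `strength` value below are hypothetical stand-ins; the paper's method generates corruptions adversarially, whereas this sketch samples them at random:

```python
import numpy as np

def bias_field(shape, strength, rng):
    """Generate a smooth multiplicative bias field as a random low-order
    polynomial surface; 'strength' controls the deviation from 1.0."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    xn, yn = x / (w - 1) - 0.5, y / (h - 1) - 0.5  # normalized coords in [-0.5, 0.5]
    c = rng.normal(0.0, strength, size=6)          # random polynomial coefficients
    return (1.0 + c[0] * xn + c[1] * yn + c[2] * xn * yn
            + c[3] * xn ** 2 + c[4] * yn ** 2 + c[5])

def augment_with_bias(image, strength, rng):
    """Apply a plausible bias-field-style intensity corruption
    to a 2D MR image slice."""
    return image * bias_field(image.shape, strength, rng)

rng = np.random.default_rng(1)
img = rng.random((8, 8))                 # toy MR slice
corrupted = augment_with_bias(img, 0.3, rng)
```

Training a segmentation network on such corrupted copies exposes it to intensity inhomogeneities it would otherwise only meet at test time.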
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.