Anatomy-Constrained Contrastive Learning for Synthetic Segmentation without Ground-truth
- URL: http://arxiv.org/abs/2107.05482v1
- Date: Mon, 12 Jul 2021 14:54:04 GMT
- Title: Anatomy-Constrained Contrastive Learning for Synthetic Segmentation without Ground-truth
- Authors: Bo Zhou, Chi Liu, James S. Duncan
- Abstract summary: We developed an anatomy-constrained contrastive synthetic segmentation network (AccSeg-Net) to train a segmentation network for a target imaging modality.
We demonstrated successful applications on CBCT, MRI, and PET imaging data, and showed superior segmentation performance compared to previous methods.
- Score: 8.513014699605499
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A large amount of manual segmentation is typically required to train
a robust segmentation network so that it can segment objects of interest in a
new imaging modality. The manual effort can be alleviated if the manual
segmentation in one imaging modality (e.g., CT) can be utilized to train a
segmentation network in another imaging modality (e.g., CBCT/MRI/PET). In this
work, we developed an anatomy-constrained contrastive synthetic segmentation
network (AccSeg-Net) to train a segmentation network for a target imaging
modality without using its ground truth. Specifically, we proposed to use an
anatomy constraint and patch contrastive learning to ensure anatomical fidelity
during the unsupervised adaptation, such that the segmentation network can be
trained on adapted images with correct anatomical structure/content. The
training data for our AccSeg-Net consists of 1) imaging data paired with
segmentation ground truth in the source modality, and 2) unpaired source- and
target-modality imaging data. We demonstrated successful applications on CBCT,
MRI, and PET imaging data, and showed superior segmentation performance
compared to previous methods.
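To make the abstract's two ingredients concrete, the snippet below is a minimal
PyTorch sketch of a PatchNCE-style patch contrastive loss between source and
adapted images plus an anatomy-constraint term, combined with segmentation
supervision on the adapted image. The module names (G_adapt, F_encoder, S_seg),
the edge-consistency proxy for the anatomy constraint, and the loss weights are
illustrative assumptions rather than the authors' released implementation, and
the adversarial terms of the full adaptation network are omitted.

```python
# Minimal sketch of the two losses named in the abstract: a PatchNCE-style patch
# contrastive loss between source images and their adapted (synthetic) versions,
# and an anatomy-constraint term that penalizes structural drift. Module names,
# hyperparameters, and the edge-consistency proxy are illustrative assumptions.
import torch
import torch.nn.functional as F


def patch_nce_loss(feat_src, feat_adapted, temperature=0.07):
    """Contrast co-located feature patches: each adapted patch should match the
    source patch at the same spatial location (positive) and differ from patches
    sampled at other locations (negatives).

    feat_src, feat_adapted: (N, C) tensors of N sampled patch features.
    """
    feat_src = F.normalize(feat_src, dim=1)
    feat_adapted = F.normalize(feat_adapted, dim=1)
    logits = feat_adapted @ feat_src.t() / temperature       # (N, N) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)                  # diagonal = positives


def anatomy_constraint_loss(source_img, adapted_img):
    """Keep anatomical structure intact during adaptation. A simple image-gradient
    (edge) consistency term stands in here for the paper's anatomy constraint."""
    def grads(x):
        return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]
    sdx, sdy = grads(source_img)
    adx, ady = grads(adapted_img)
    return F.l1_loss(adx, sdx) + F.l1_loss(ady, sdy)


def training_step(G_adapt, F_encoder, S_seg, ct_img, ct_mask,
                  lambda_nce=1.0, lambda_anat=10.0):
    """One training step: adapt a source CT image to the target modality, constrain
    its anatomy, and supervise segmentation on the adapted image with the source
    ground-truth mask (ct_mask: (B, H, W) integer label map)."""
    adapted = G_adapt(ct_img)                      # source -> target-style image
    feat_src = F_encoder(ct_img)                   # (N, C) sampled patch features
    feat_adp = F_encoder(adapted)                  # (hypothetical patch sampler)
    loss = lambda_nce * patch_nce_loss(feat_src, feat_adp)
    loss = loss + lambda_anat * anatomy_constraint_loss(ct_img, adapted)
    pred = S_seg(adapted)                          # (B, K, H, W) class logits
    loss = loss + F.cross_entropy(pred, ct_mask)   # reuse source-modality labels
    return loss
```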
Related papers
- Unsupervised Segmentation of Fetal Brain MRI using Deep Learning
Cascaded Registration [2.494736313545503]
Traditional deep learning-based automatic segmentation requires extensive training data with ground-truth labels.
We propose a novel method based on multi-atlas segmentation that accurately segments multiple tissues without relying on labeled data for training.
Our method employs a cascaded deep learning network for 3D image registration, which computes small, incremental deformations to the moving image to align it precisely with the fixed image.
arXiv Detail & Related papers (2023-07-07T13:17:12Z)
- Uncertainty Driven Bottleneck Attention U-net for Organ at Risk
Segmentation [20.865775626533434]
Organ at risk (OAR) segmentation in computed tomography (CT) imagery is a difficult task for automated segmentation methods.
We propose a multiple-decoder U-net architecture and use the segmentation disagreement between the decoders as attention to the bottleneck of the network (a sketch of this idea appears after this list).
For accurate segmentation, we also propose a CT-intensity-integrated regularization loss.
arXiv Detail & Related papers (2023-03-19T23:45:32Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal
Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Anatomy-guided Multimodal Registration by Learning Segmentation without
Ground Truth: Application to Intraprocedural CBCT/MR Liver Segmentation and
Registration [12.861503169117208]
Multimodal image registration has many applications in diagnostic medical imaging and image-guided interventions.
The ability to register peri-procedurally acquired diagnostic images into the intraprocedural environment can potentially improve intraprocedural tumor targeting.
We propose an anatomy-preserving domain adaptation to segmentation network (APA2Seg-Net) for learning segmentation without target modality ground truth.
arXiv Detail & Related papers (2021-04-14T18:07:03Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z)
- Learning to Segment Anatomical Structures Accurately from One Exemplar [34.287877547953194]
Methods that can produce accurate anatomical structure segmentations without requiring large amounts of fully annotated training images are highly desirable.
We propose the Contour Transformer Network (CTN), a one-shot anatomy segmentation model with a naturally built-in human-in-the-loop mechanism.
We demonstrate that our one-shot learning method significantly outperforms non-learning-based methods and performs competitively to the state-of-the-art fully supervised deep learning approaches.
arXiv Detail & Related papers (2020-07-06T20:27:38Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
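Below is a minimal sketch of the decoder-disagreement idea referenced in the
Uncertainty Driven Bottleneck Attention U-net entry above: the variance across
several decoders' predictions is treated as an uncertainty map and re-applied
as attention on the bottleneck features. Shapes, module names, and the
normalization are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of decoder-disagreement attention: the variance of several
# decoders' softmax predictions acts as an uncertainty map that re-weights the
# encoder bottleneck features. Illustrative only; not the paper's implementation.
import torch
import torch.nn.functional as F


def disagreement_attention(decoder_probs, bottleneck_feat):
    """decoder_probs: list of (B, K, H, W) softmax maps from independent decoders.
    bottleneck_feat: (B, C, h, w) bottleneck features at a lower resolution."""
    stacked = torch.stack(decoder_probs, dim=0)                  # (D, B, K, H, W)
    disagreement = stacked.var(dim=0).mean(dim=1, keepdim=True)  # (B, 1, H, W)
    # resize the uncertainty map to the bottleneck resolution, rescale to [0, 1]
    attn = F.interpolate(disagreement, size=bottleneck_feat.shape[-2:],
                         mode="bilinear", align_corners=False)
    attn = attn / (attn.amax(dim=(-2, -1), keepdim=True) + 1e-6)
    # emphasize bottleneck features where the decoders disagree the most
    return bottleneck_feat * (1.0 + attn)
```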