Adipose Tissue Segmentation in Unlabeled Abdomen MRI using Cross
Modality Domain Adaptation
- URL: http://arxiv.org/abs/2005.05761v1
- Date: Mon, 11 May 2020 17:41:39 GMT
- Title: Adipose Tissue Segmentation in Unlabeled Abdomen MRI using Cross
Modality Domain Adaptation
- Authors: Samira Masoudi, Syed M. Anwar, Stephanie A. Harmon, Peter L. Choyke,
Baris Turkbey, Ulas Bagci
- Abstract summary: Abdominal fat quantification is critical since multiple vital organs are located within this region.
In this study, we propose a deep learning-based algorithm to automatically quantify fat tissue from MR images.
Our method does not require supervised labeling of MR scans; instead, we utilize a cycle generative adversarial network (C-GAN) to construct a pipeline.
- Score: 4.677846923899843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abdominal fat quantification is critical since multiple vital organs are
located within this region. Although computed tomography (CT) is a highly
sensitive modality to segment body fat, it involves ionizing radiations which
makes magnetic resonance imaging (MRI) a preferable alternative for this
purpose. Additionally, the superior soft tissue contrast in MRI could lead to
more accurate results. Yet, it is highly labor-intensive to segment fat in MRI
scans. In this study, we propose a deep learning-based algorithm to
automatically quantify fat tissue from MR images through cross-modality
adaptation. Our method does not require supervised labeling of MR scans;
instead, we utilize a cycle generative adversarial network (C-GAN) to
construct a pipeline that transforms the existing MR scans into their
equivalent synthetic CT (s-CT) images, where fat segmentation is relatively
easier due to the descriptive nature of Hounsfield units (HU) in CT images.
The fat segmentation results for MRI scans were evaluated by an expert
radiologist. Qualitative evaluation of our segmentation results shows average
success scores of 3.80/5 and 4.54/5 for visceral and subcutaneous fat
segmentation in MR images, respectively.
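The pipeline's key step is that, once MR is translated to synthetic CT, adipose tissue becomes separable by simple intensity thresholding in Hounsfield units. A minimal sketch of that segmentation stage in Python/NumPy follows; the HU window of -190 to -30 is a commonly cited adipose range rather than a value quoted in this paper, and `mr_to_sct` is a hypothetical placeholder for the trained C-GAN generator, not the authors' API.

```python
# Sketch of the fat-segmentation stage on synthetic CT (s-CT) volumes.
# Assumes the MR volume has already been translated to HU-valued s-CT by
# a trained CycleGAN generator (hypothetical `mr_to_sct` callable below).
import numpy as np

FAT_HU_RANGE = (-190.0, -30.0)  # commonly used adipose-tissue HU window

def segment_fat(sct_volume: np.ndarray,
                hu_range: tuple = FAT_HU_RANGE) -> np.ndarray:
    """Return a binary fat mask from an s-CT volume expressed in HU."""
    low, high = hu_range
    return (sct_volume >= low) & (sct_volume <= high)

# Usage sketch (generator and voxel size are assumptions, not paper API):
# sct = mr_to_sct(mr_volume)                  # trained G: MR -> s-CT
# fat_mask = segment_fat(sct)                 # HU-window thresholding
# fat_volume_ml = fat_mask.sum() * voxel_ml   # quantify total fat volume
```

Note that separating visceral from subcutaneous fat additionally requires an anatomical boundary such as the abdominal wall, which thresholding alone does not provide.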
Related papers
- Enhanced Synthetic MRI Generation from CT Scans Using CycleGAN with
Feature Extraction [3.2088888904556123]
We propose an approach for enhanced monomodal registration using synthetic MRI images from CT scans.
Our methodology shows promising results, outperforming several state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T16:39:56Z) - Style transfer between Microscopy and Magnetic Resonance Imaging via
Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z) - Single Slice Thigh CT Muscle Group Segmentation with Domain Adaptation
and Self-Training [19.86796625044402]
We propose an unsupervised domain adaptation pipeline with self-training to transfer labels from 3D MR to single CT slices.
On 152 withheld single CT thigh images, the proposed pipeline achieved a mean Dice of 0.888 (0.041) across all muscle groups.
arXiv Detail & Related papers (2022-11-30T19:04:17Z) - Weakly-supervised Biomechanically-constrained CT/MRI Registration of the
Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps, since automatic vertebra segmentation gives more accurate results in CT than in MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while leaving the accuracy unchanged.
arXiv Detail & Related papers (2022-05-16T10:59:55Z) - Unsupervised-learning-based method for chest MRI-CT transformation using
structure constrained unsupervised generative attention networks [0.0]
The integrated positron emission tomography/magnetic resonance imaging (PET/MRI) scanner facilitates the simultaneous acquisition of metabolic information via PET and morphological information using MRI.
PET/MRI requires the generation of attenuation-correction maps from MRI because there is no direct relationship between gamma-ray attenuation information and MR signal intensities.
This paper presents a means to minimise the anatomical structural changes without human annotation by adding structural constraints using a modality-independent neighbourhood descriptor (MIND) to a generative adversarial network (GAN) that can transform unpaired images.
arXiv Detail & Related papers (2021-06-16T05:22:27Z) - ShuffleUNet: Super resolution of diffusion-weighted MRIs using deep
learning [47.68307909984442]
Single Image Super-Resolution (SISR) is a technique aimed at obtaining high-resolution (HR) details from a single low-resolution input image.
Deep learning extracts prior knowledge from big datasets and produces superior MRI images from their low-resolution counterparts.
arXiv Detail & Related papers (2021-02-25T14:52:23Z) - Dual-cycle Constrained Bijective VAE-GAN For Tagged-to-Cine Magnetic
Resonance Image Synthesis [11.697141493937021]
We propose a novel VAE-GAN approach to carry out tagged-to-cine MR image synthesis.
Our framework has been trained, validated, and tested using 1,768, 416, and 1,560 subject-independent paired slices of tagged and cine MRI.
arXiv Detail & Related papers (2021-01-14T03:27:16Z) - High Tissue Contrast MRI Synthesis Using Multi-Stage Attention-GAN for
Glioma Segmentation [25.408175460840802]
This paper demonstrates the potential benefits of image-to-image translation techniques to generate synthetic high tissue contrast (HTC) images.
We adopt a new cycle generative adversarial network (CycleGAN) with an attention mechanism to increase the contrast within underlying tissues.
We show the application of our method for synthesizing HTC images on brain MR scans, including glioma tumors.
arXiv Detail & Related papers (2020-06-09T03:21:30Z) - Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on
2.5 D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of an intra-observer study and slightly lower in apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z) - Synergistic Learning of Lung Lobe Segmentation and Hierarchical
Multi-Instance Classification for Automated Severity Assessment of COVID-19
in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z) - A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced
Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.