Unsupervised-learning-based method for chest MRI-CT transformation using
structure constrained unsupervised generative attention networks
- URL: http://arxiv.org/abs/2106.08557v1
- Date: Wed, 16 Jun 2021 05:22:27 GMT
- Authors: Hidetoshi Matsuo (1), Mizuho Nishio (1), Munenobu Nogami (1), Feibi
Zeng (1), Takako Kurimoto (2), Sandeep Kaushik (3), Florian Wiesinger (3),
Atsushi K Kono (1), and Takamichi Murakami (1) ((1) Department of Radiology,
Kobe University Graduate School of Medicine, Kobe, Japan, (2) GE Healthcare,
Hino, Japan and (3) GE Healthcare, Munich, Germany)
- Abstract summary: The integrated positron emission tomography/magnetic resonance imaging (PET/MRI) scanner facilitates the simultaneous acquisition of metabolic information via PET and morphological information using MRI.
PET/MRI requires the generation of attenuation-correction maps from MRI because there is no direct relationship between gamma-ray attenuation information and MRI signal intensities.
This paper presents a means to minimise the anatomical structural changes without human annotation by adding structural constraints using a modality-independent neighbourhood descriptor (MIND) to a generative adversarial network (GAN) that can transform unpaired images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The integrated positron emission tomography/magnetic resonance imaging
(PET/MRI) scanner facilitates the simultaneous acquisition of metabolic
information via PET and morphological information with high soft-tissue
contrast using MRI. Although PET/MRI facilitates the capture of high-accuracy
fusion images, its major drawback can be attributed to the difficulty
encountered when performing attenuation correction, which is necessary for
quantitative PET evaluation. Combined PET/MRI scanning requires the generation
of attenuation-correction maps from MRI because there is no direct relationship
between gamma-ray attenuation information and MRI signal intensities. While
MRI-based bone-tissue segmentation can be readily performed for the head and
pelvis regions, the realization of accurate bone segmentation via chest CT
generation remains a challenging task. This can be attributed to the
respiratory and cardiac motions occurring in the chest as well as its
anatomically complicated structure and relatively thin bone cortex. This paper
presents a means to minimise the anatomical structural changes without human
annotation by adding structural constraints using a modality-independent
neighbourhood descriptor (MIND) to a generative adversarial network (GAN) that
can transform unpaired images. The results obtained in this study revealed the
proposed U-GAT-IT + MIND approach to outperform all other competing approaches.
The findings of this study hint at the possibility of synthesising clinically
acceptable CT images from chest MRI without human annotation, thereby
minimising the changes in the anatomical structure.
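The core idea of the paper's structural constraint can be illustrated with a small sketch: compute a modality-independent neighbourhood descriptor (MIND) for the input MRI and the synthesised CT, and penalise the distance between the two descriptor maps. The sketch below is a simplified 2-D NumPy version under stated assumptions (a four-neighbour shift set, a square mean filter, and an L1 penalty); the paper's actual loss uses the original MIND formulation combined with U-GAT-IT's adversarial objective, and all function names here are illustrative, not from the paper's code.

```python
import numpy as np

def _box_filter(img, patch=3):
    """Mean filter over a patch x patch window (edge-padded)."""
    r = patch // 2
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(patch):
        for dx in range(patch):
            out += p[dy:dy + h, dx:dx + w]
    return out / (patch * patch)

def mind(img, shifts=((0, 1), (0, -1), (1, 0), (-1, 0)), patch=3, eps=1e-6):
    """Simplified 2-D MIND: one descriptor channel per neighbour shift.

    Patch SSD to each shifted copy is normalised by a local variance
    estimate, so the descriptor reflects local structure rather than
    absolute intensity -- which is what makes it modality-independent.
    """
    ssd = np.stack(
        [_box_filter((img - np.roll(img, s, axis=(0, 1))) ** 2, patch)
         for s in shifts]
    )
    v = ssd.mean(axis=0) + eps      # local variance estimate
    d = np.exp(-ssd / v)            # channels in (0, 1]
    return d / d.max(axis=0, keepdims=True)

def mind_loss(mri, synth_ct):
    """L1 distance between the MIND descriptors of the two modalities."""
    return np.abs(mind(mri) - mind(synth_ct)).mean()
```

Because the descriptor depends only on local self-similarity, the loss is small whenever the synthesised CT preserves the MRI's anatomical structure, even though the raw intensities of the two modalities are unrelated.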
Related papers
- Leveraging Multimodal CycleGAN for the Generation of Anatomically Accurate Synthetic CT Scans from MRIs [1.779948689352186]
We analyse the capabilities of different configurations of Deep Learning models to generate synthetic CT scans from MRI.
Several CycleGAN models were trained unsupervised to generate CT scans from different MRI modalities with and without contrast agents.
The results show how, depending on the input modalities, the models can have very different performances.
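The unsupervised CycleGAN training mentioned above rests on a cycle-consistency term: translating an image to the other modality and back should reproduce the original, so no paired MRI/CT data is needed. A minimal sketch, using hypothetical toy stand-ins for the two generators (real CycleGANs use trained CNNs):

```python
import numpy as np

# Toy stand-in "generators": hypothetical invertible maps used only to
# illustrate the objective; in practice these are trained networks.
def g_mri_to_ct(x):
    return 1.0 - x

def g_ct_to_mri(y):
    return 1.0 - y

def cycle_consistency_loss(mri_batch, ct_batch):
    """CycleGAN cycle term: MRI -> CT -> MRI and CT -> MRI -> CT
    reconstructions should match the originals (L1 penalty)."""
    mri_recon = g_ct_to_mri(g_mri_to_ct(mri_batch))
    ct_recon = g_mri_to_ct(g_ct_to_mri(ct_batch))
    return (np.abs(mri_batch - mri_recon).mean()
            + np.abs(ct_batch - ct_recon).mean())
```

In the full model this term is added to the adversarial losses of the two discriminators; it is what keeps the unpaired translation anchored to the input anatomy.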
arXiv Detail & Related papers (2024-07-15T16:38:59Z)
- Joint Diffusion: Mutual Consistency-Driven Diffusion Model for PET-MRI Co-Reconstruction [19.790873500057355]
The study aims to accelerate MRI and enhance PET image quality.
Conventional approaches involve the separate reconstruction of each modality within PET-MRI systems.
We propose a novel PET-MRI joint reconstruction model employing a mutual consistency-driven diffusion model, namely MC-Diffusion.
arXiv Detail & Related papers (2023-11-24T13:26:53Z)
- Volumetric Reconstruction Resolves Off-Resonance Artifacts in Static and Dynamic PROPELLER MRI [76.60362295758596]
Off-resonance artifacts in magnetic resonance imaging (MRI) are visual distortions that occur when the actual resonant frequencies of spins within the imaging volume differ from the expected frequencies used to encode spatial information.
We propose to resolve these artifacts by lifting the 2D MRI reconstruction problem to 3D, introducing an additional "spectral" dimension to model this off-resonance.
arXiv Detail & Related papers (2023-11-22T05:44:51Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Cine cardiac MRI reconstruction using a convolutional recurrent network with refinement [9.173298795526152]
We investigate the use of a convolutional recurrent neural network (CRNN) architecture to exploit temporal correlations in cardiac MRI reconstruction.
This is combined with a single-image super-resolution refinement module to improve single coil reconstruction by 4.4% in structural similarity and 3.9% in normalised mean square error.
The proposed model demonstrates considerable enhancements compared to the baseline case and holds promising potential for further improving cardiac MRI reconstruction.
arXiv Detail & Related papers (2023-09-23T14:07:04Z)
- DDMM-Synth: A Denoising Diffusion Model for Cross-modal Medical Image Synthesis with Sparse-view Measurement Embedding [7.6849475214826315]
We propose a novel framework called DDMM-Synth for medical image synthesis.
It combines an MRI-guided diffusion model with a new CT measurement embedding reverse sampling scheme.
It can adjust the projection number of CT a posteriori for a particular clinical application and its modified version can even improve the results significantly for noisy cases.
arXiv Detail & Related papers (2023-03-28T07:13:11Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- X-Ray2EM: Uncertainty-Aware Cross-Modality Image Reconstruction from X-Ray to Electron Microscopy in Connectomics [55.6985304397137]
We propose an uncertainty-aware 3D reconstruction model that translates X-ray images to EM-like images with enhanced membrane segmentation quality.
This shows its potential for developing simpler, faster, and more accurate X-ray based connectomics pipelines.
arXiv Detail & Related papers (2023-03-02T00:52:41Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Weakly-supervised Biomechanically-constrained CT/MRI Registration of the Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps since automatic vertebra segmentation in CT gives more accurate results contrary to MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while keeping the accuracy untouched.
arXiv Detail & Related papers (2022-05-16T10:59:55Z)
- Adipose Tissue Segmentation in Unlabeled Abdomen MRI using Cross Modality Domain Adaptation [4.677846923899843]
Abdominal fat quantification is critical since multiple vital organs are located within this region.
In this study, we propose a deep-learning-based algorithm to automatically quantify fat tissue from MR images.
Our method does not require supervised labeling of MR scans; instead, we utilize a cycle generative adversarial network (C-GAN) to construct a pipeline.
arXiv Detail & Related papers (2020-05-11T17:41:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.