Bidirectional Mapping Generative Adversarial Networks for Brain MR to
PET Synthesis
- URL: http://arxiv.org/abs/2008.03483v1
- Date: Sat, 8 Aug 2020 09:27:48 GMT
- Title: Bidirectional Mapping Generative Adversarial Networks for Brain MR to
PET Synthesis
- Authors: Shengye Hu, Baiying Lei, Yong Wang, Zhiguang Feng, Yanyan Shen,
Shuqiang Wang
- Abstract summary: We propose a 3D end-to-end synthesis network, called Bidirectional Mapping Generative Adversarial Networks (BMGAN).
The proposed method can synthesize perceptually realistic PET images while preserving the diverse brain structures of different subjects.
- Score: 29.40385887130174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fusing multi-modality medical images, such as MR and PET, can provide various
anatomical and functional information about the human body. However, PET data are often
unavailable for reasons such as cost, radiation exposure, or other limitations. In this
paper, we propose a 3D end-to-end synthesis network, called Bidirectional Mapping
Generative Adversarial Networks (BMGAN), in which image contexts and latent vectors are
effectively used and jointly optimized for brain MR-to-PET synthesis. Concretely, a
bidirectional mapping mechanism is designed to embed the semantic information of PET
images into a high-dimensional latent space. A 3D DenseU-Net generator architecture and
extensive objective functions are further employed to improve the visual quality of the
synthetic results. Most appealingly, the proposed method can synthesize perceptually
realistic PET images while preserving the diverse brain structures of different subjects.
Experimental results demonstrate that the proposed method outperforms other competitive
cross-modality synthesis methods in terms of quantitative measures, qualitative displays,
and classification evaluation.
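The bidirectional mapping idea in the abstract can be illustrated with a minimal NumPy sketch: a generator G maps an MR volume plus a latent code to a synthetic PET volume, while an encoder E embeds PET volumes back into the latent space, and a combined objective ties the two directions together. The linear maps, loss weights, and array sizes below are illustrative assumptions, not the paper's actual 3D DenseU-Net implementation, and the adversarial term is omitted because it would require a trained discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the 3D networks (the paper uses a 3D DenseU-Net
# generator; these random linear maps are purely illustrative).
D_VOX, D_LAT = 16, 4                                  # flattened voxel count, latent size
W_g = rng.normal(size=(D_VOX + D_LAT, D_VOX)) * 0.1   # generator weights
W_e = rng.normal(size=(D_VOX, D_LAT)) * 0.1           # encoder weights

def G(mr, z):
    """Generator: synthesize a PET volume from an MR volume and latent code z."""
    return np.concatenate([mr, z]) @ W_g

def E(pet):
    """Encoder: embed a PET volume into the latent space (the reverse mapping)."""
    return pet @ W_e

def bmgan_loss(mr, pet_real, lam_rec=10.0, lam_lat=1.0):
    """Joint objective sketch: L1 reconstruction plus latent consistency."""
    z = E(pet_real)                          # bidirectional mapping: PET -> latent
    pet_fake = G(mr, z)                      # MR context + latent -> synthetic PET
    rec = np.abs(pet_fake - pet_real).mean() # voxel-wise L1 reconstruction
    lat = np.abs(E(pet_fake) - z).mean()     # re-encoded fake should match latent
    return lam_rec * rec + lam_lat * lat

mr = rng.normal(size=D_VOX)
pet = rng.normal(size=D_VOX)
print(f"toy BMGAN objective: {bmgan_loss(mr, pet):.4f}")
```

The latent-consistency term is what distinguishes a bidirectional setup from a plain conditional GAN: it encourages the synthetic PET to carry the same semantic code as the real one.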
Related papers
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of human vision system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- Functional Imaging Constrained Diffusion for Brain PET Synthesis from Structural MRI [5.190302448685122]
We propose a framework for 3D brain PET image synthesis with paired structural MRI as the input condition, through a new constrained diffusion model (CDM).
The FICD introduces noise to PET and then progressively removes it with CDM, ensuring high output fidelity throughout a stable training phase.
The CDM learns to predict denoised PET with a functional imaging constraint introduced to ensure voxel-wise alignment between each denoised PET and its ground truth.
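The noising-then-denoising scheme with a voxel-wise constraint described above can be sketched in a few lines of NumPy. The schedule length, beta range, and mean-squared constraint below are generic diffusion-model conventions assumed for illustration, not FICD's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard forward-diffusion schedule (values are illustrative).
T = 10
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)   # cumulative signal-retention factors

def add_noise(pet0, t, eps):
    """Forward process: progressively noise the clean PET volume at step t."""
    return np.sqrt(alpha_bar[t]) * pet0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def constraint_loss(pet_denoised, pet0):
    """Functional imaging constraint: voxel-wise alignment with ground truth."""
    return np.mean((pet_denoised - pet0) ** 2)

pet0 = rng.normal(size=(4, 4, 4))     # toy 3D PET volume
eps = rng.normal(size=pet0.shape)
x_t = add_noise(pet0, t=5, eps=eps)   # what the CDM would learn to denoise

print(f"constraint at t=5 (undenoised): {constraint_loss(x_t, pet0):.4f}")
```

A perfect denoiser would map `x_t` back to `pet0` exactly, driving the constraint to zero; during training the constraint penalizes any voxel-wise deviation of the predicted denoised PET from its ground truth.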
arXiv Detail & Related papers (2024-05-03T22:33:46Z)
- Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity [60.983327742457995]
Reconstructing the viewed images from human brain activity bridges human and computer vision through the Brain-Computer Interface.
We devise Psychometry, an omnifit model for reconstructing images from functional Magnetic Resonance Imaging (fMRI) obtained from different subjects.
arXiv Detail & Related papers (2024-03-29T07:16:34Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- Generative Adversarial Networks for Brain Images Synthesis: A Review [2.609784101826762]
In medical imaging, image synthesis is the process of estimating one image (sequence, modality) from another image (sequence, modality).
The generative adversarial network (GAN) is one of the most popular generative deep learning methods.
We summarize the recent developments of GANs for cross-modality brain image synthesis, including CT to PET, CT to MRI, MRI to PET, and vice versa.
arXiv Detail & Related papers (2023-05-16T17:28:06Z)
- CG-3DSRGAN: A classification guided 3D generative adversarial network for image quality recovery from low-dose PET images [10.994223928445589]
High radioactivity caused by the injected tracer dose is a major concern in PET imaging.
Reducing the dose leads to inadequate image quality for diagnostic practice.
CNN-based methods have been developed for high-quality PET synthesis from low-dose counterparts.
arXiv Detail & Related papers (2023-04-03T05:39:02Z)
- View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlations among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view by view.
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
arXiv Detail & Related papers (2022-09-20T11:58:23Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- 3D Segmentation Guided Style-based Generative Adversarial Networks for PET Synthesis [11.615097017030843]
Potential radioactive hazards in full-dose positron emission tomography (PET) imaging remain a concern.
It is of great interest to translate low-dose PET images into full-dose.
We propose a novel segmentation guided style-based generative adversarial network (SGSGAN) for PET synthesis.
arXiv Detail & Related papers (2022-05-18T12:19:17Z)
- FastPET: Near Real-Time PET Reconstruction from Histo-Images Using a Neural Network [0.0]
This paper proposes FastPET, a novel direct-reconstruction convolutional neural network that is architecturally simple and memory-efficient.
FastPET operates on a histo-image representation of the raw data, enabling it to reconstruct 3D image volumes 67x faster than Ordered Subsets Expectation Maximization (OSEM).
The results show that the reconstructions are not only very fast but also high quality, with lower noise than iterative reconstructions.
arXiv Detail & Related papers (2020-02-11T20:32:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.