MRI to PET Cross-Modality Translation using Globally and Locally Aware
GAN (GLA-GAN) for Multi-Modal Diagnosis of Alzheimer's Disease
- URL: http://arxiv.org/abs/2108.02160v1
- Date: Wed, 4 Aug 2021 16:38:33 GMT
- Title: MRI to PET Cross-Modality Translation using Globally and Locally Aware
GAN (GLA-GAN) for Multi-Modal Diagnosis of Alzheimer's Disease
- Authors: Apoorva Sikka, Skand, Jitender Singh Virk, Deepti R. Bathula
- Abstract summary: Generative adversarial networks (GANs), with their ability to synthesize realistic images, have shown great potential as an alternative to standard data augmentation techniques.
We propose a novel end-to-end, globally and locally aware image-to-image translation GAN (GLA-GAN) with a multi-path architecture that enforces both global structural integrity and fidelity to local details.
- Score: 1.7499351967216341
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical imaging datasets are inherently high dimensional with large
variability and low sample sizes that limit the effectiveness of deep learning
algorithms. Recently, generative adversarial networks (GANs) with the ability
to synthesize realistic images have shown great potential as an alternative to
standard data augmentation techniques. Our work focuses on cross-modality
synthesis of fluorodeoxyglucose~(FDG) Positron Emission Tomography~(PET) scans
from structural Magnetic Resonance~(MR) images using generative models to
facilitate multi-modal diagnosis of Alzheimer's disease (AD). Specifically, we
propose a novel end-to-end, globally and locally aware image-to-image
translation GAN (GLA-GAN) with a multi-path architecture that enforces both
global structural integrity and fidelity to local details. We further
supplement the standard adversarial loss with voxel-level intensity,
multi-scale structural similarity (MS-SSIM) and region-of-interest (ROI) based
loss components that reduce reconstruction error, enforce structural
consistency at different scales, and account for variation in regional sensitivity
to AD, respectively. Experimental results demonstrate that our GLA-GAN not only
synthesizes FDG-PET scans with enhanced image quality but also offers
superior clinical utility in improving AD diagnosis compared to
state-of-the-art models. Finally, we attempt to interpret some of the internal
units of the GAN that are closely related to this specific cross-modality
generation task.
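To make the composite objective concrete, the following is a minimal sketch (not the authors' released code) of how a generator loss combining adversarial, voxel-level intensity, MS-SSIM and ROI terms could be assembled in PyTorch. The loss weights, the `ms_ssim_fn` callable (e.g. an implementation such as `pytorch_msssim.ms_ssim`), the non-saturating BCE adversarial term, and the binary ROI mask are illustrative assumptions rather than the paper's exact choices.

```python
# Minimal sketch of a GLA-GAN-style composite generator objective.
# Weights, the MS-SSIM callable and the ROI mask are assumptions for illustration.
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, fake_pet, real_pet, roi_mask,
                   ms_ssim_fn=None, w_adv=1.0, w_l1=100.0, w_ssim=10.0, w_roi=50.0):
    # Adversarial term (non-saturating BCE here; the paper's GAN loss may differ):
    # push the discriminator to label synthesized PET volumes as real.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))

    # Voxel-level intensity term: reduces overall reconstruction error.
    l1 = F.l1_loss(fake_pet, real_pet)

    # Multi-scale structural similarity term: enforces structural consistency
    # at different scales (ms_ssim_fn could be, e.g., pytorch_msssim.ms_ssim).
    ssim_term = 1.0 - ms_ssim_fn(fake_pet, real_pet) if ms_ssim_fn is not None else 0.0

    # ROI term: extra weight on regions whose FDG uptake is sensitive to AD
    # (roi_mask is assumed to be a binary mask over those regions).
    roi = F.l1_loss(fake_pet * roi_mask, real_pet * roi_mask)

    return w_adv * adv + w_l1 * l1 + w_ssim * ssim_term + w_roi * roi
```

In the paper the ROI component is tied to regions whose metabolism is known to vary with AD; a simple binary mask over such regions is the most basic stand-in for that idea here.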
Related papers
- GAN-Based Architecture for Low-dose Computed Tomography Imaging Denoising [1.0138723409205497]
Generative Adversarial Networks (GANs) have surfaced as a revolutionary element within the domain of low-dose computed tomography (LDCT) imaging.
This comprehensive review synthesizes the rapid advancements in GAN-based LDCT denoising techniques.
arXiv Detail & Related papers (2024-11-14T15:26:10Z)
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our inference speed is also 1,400x faster than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- URCDM: Ultra-Resolution Image Synthesis in Histopathology [4.393805955844748]
Ultra-Resolution Cascaded Diffusion Models (URCDMs) are capable of synthesising entire histopathology images at high resolutions.
We evaluate our method on three separate datasets, consisting of brain, breast and kidney tissue.
URCDMs consistently generate outputs across various resolutions that trained evaluators cannot distinguish from real images.
arXiv Detail & Related papers (2024-07-18T08:31:55Z)
- Applying Conditional Generative Adversarial Networks for Imaging Diagnosis [3.881664394416534]
This study introduces an innovative application of Conditional Generative Adversarial Networks (C-GAN) integrated with Stacked Hourglass Networks (SHGN).
We address the problem of overfitting, common in deep learning models applied to complex imaging datasets, by augmenting data through rotation and scaling.
A hybrid loss function combining L1 and L2 reconstruction losses, enriched with adversarial training, is introduced to refine segmentation processes in intravascular ultrasound (IVUS) imaging.
arXiv Detail & Related papers (2024-07-17T23:23:09Z)
- Cross-Modal Domain Adaptation in Brain Disease Diagnosis: Maximum Mean Discrepancy-based Convolutional Neural Networks [0.0]
Brain disorders are a major challenge to global health, causing millions of deaths each year.
Accurate diagnosis of these diseases relies heavily on advanced medical imaging techniques such as MRI and CT.
The scarcity of annotated data poses a significant challenge in deploying machine learning models for medical diagnosis.
arXiv Detail & Related papers (2024-05-06T07:44:46Z)
- Super-resolution of biomedical volumes with 2D supervision [84.5255884646906]
Masked slice diffusion for super-resolution exploits the inherent equivalence in the data-generating distribution across all spatial dimensions of biological specimens.
We focus on the application of SliceR to stimulated Raman histology (SRH), characterized by its rapid acquisition of high-resolution 2D images but slow and costly optical z-sectioning.
arXiv Detail & Related papers (2024-04-15T02:41:55Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because such artifacts change the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
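As a concrete illustration of that idea, here is a minimal PyTorch sketch (an assumption of mine, not that paper's code) of a convolutional block whose normalization can be switched from batch statistics to per-sample normalization:

```python
# Minimal sketch: a conv block that can use GroupNorm or LayerNorm instead of
# BatchNorm. BatchNorm relies on running statistics and is therefore sensitive
# to train/test distribution shift; GroupNorm normalizes each sample on its
# own, and LayerNorm is the num_groups=1 special case for feature maps.
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, norm: str = "group") -> nn.Sequential:
    if norm == "group":
        # out_ch is assumed to be divisible by num_groups.
        norm_layer = nn.GroupNorm(num_groups=8, num_channels=out_ch)
    elif norm == "layer":
        norm_layer = nn.GroupNorm(num_groups=1, num_channels=out_ch)
    else:
        norm_layer = nn.BatchNorm2d(out_ch)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        norm_layer,
        nn.ReLU(inplace=True),
    )
```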
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- A Self-attention Guided Multi-scale Gradient GAN for Diversified X-ray Image Synthesis [0.6308539010172307]
Generative Adversarial Networks (GANs) are utilized to address the data limitation problem via the generation of synthetic images.
Training challenges such as mode collapse, non-convergence, and instability degrade a GAN's performance in synthesizing diversified and high-quality images.
This work proposes an attention-guided multi-scale gradient GAN architecture to model long-range dependencies among biomedical image features.
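For reference, a standard SAGAN-style self-attention block is one common way to capture such long-range spatial dependencies in GAN feature maps; the sketch below is a generic PyTorch version of that idea, not the exact module proposed in the paper above.

```python
# Generic SAGAN-style self-attention block (a sketch, assuming channels >= 8).
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend, starts as identity

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # (b, hw, c//8)
        k = self.key(x).flatten(2)                          # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)                 # (b, hw, hw)
        v = self.value(x).flatten(2)                         # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)   # attend over all positions
        return self.gamma * out + x                          # residual connection
```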
arXiv Detail & Related papers (2022-10-09T13:17:17Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, allowing it to recover the frequency signal in the $k$-space domain while also learning in the image domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.