An interpretable generative multimodal neuroimaging-genomics framework for decoding Alzheimer's disease
- URL: http://arxiv.org/abs/2406.13292v2
- Date: Thu, 14 Nov 2024 10:20:53 GMT
- Title: An interpretable generative multimodal neuroimaging-genomics framework for decoding Alzheimer's disease
- Authors: Giorgio Dolci, Federica Cruciani, Md Abdur Rahaman, Anees Abrol, Jiayu Chen, Zening Fu, Ilaria Boscolo Galazzo, Gloria Menegaz, Vince D. Calhoun
- Abstract summary: Alzheimer's disease (AD) is the most prevalent form of dementia, marked by a progressive decline in cognitive abilities.
We leveraged structural and functional MRI to investigate disease-induced gray matter (GM) and functional network connectivity changes.
We propose a novel deep learning (DL)-based classification framework in which a generative module employing Cycle GAN imputes missing data.
- Score: 13.213387075528017
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Alzheimer's disease (AD) is the most prevalent form of dementia, marked by a progressive decline in cognitive abilities. The AD continuum encompasses a prodromal stage known as mild cognitive impairment (MCI), in which patients may either progress to AD (MCIc) or remain stable (MCInc). Understanding AD mechanisms requires complementary analyses relying on different data sources, leading to the development of multimodal deep learning (DL) models. We leveraged structural and functional MRI to investigate disease-induced gray matter (GM) and functional network connectivity changes. Moreover, considering AD's strong genetic component, we introduced single nucleotide polymorphisms (SNPs) as a third channel. Missing one or more modalities is a typical concern of multimodal methods. We hence propose a novel DL-based classification framework in which a generative module employing Cycle GAN imputes missing data in the latent space. Additionally, we adopted an explainable AI (XAI) method, Integrated Gradients, to extract feature relevance, enhancing our understanding of the learned representations. Two tasks were addressed: AD detection and MCI conversion prediction. Experimental results showed that our framework reached the state of the art in cognitively normal (CN)/AD classification with an average test accuracy of $0.926\pm0.02$. For the MCInc/MCIc task, we achieved an average prediction accuracy of $0.711\pm0.01$ using the model pre-trained on CN and AD. The interpretability analysis revealed that significant GM modulations drove the classification performance in cortical and subcortical brain areas well known for their association with AD. Impairments in sensory-motor and visual functional network connectivity along the AD continuum, as well as mutations in SNPs defining biological processes linked to endocytosis, amyloid-beta, and cholesterol, were identified as contributors to the results. Overall, our integrative DL model shows promise for AD detection and MCI prediction, while shedding light on important biological insights.
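As a purely illustrative sketch of the two mechanisms named in the abstract (latent-space imputation with a Cycle GAN-style generator pair and Integrated Gradients attribution), the snippet below uses PyTorch and the Captum library. The latent dimensionality, generator architecture, toy classifier, and placeholder encoder outputs are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients  # a widely used Integrated Gradients implementation

LATENT_DIM = 128  # assumed size of each modality's latent embedding

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

class LatentCycleGAN(nn.Module):
    """Generator pair translating between sMRI and fMRI latent codes, so a
    missing modality can be imputed from the one that is present."""
    def __init__(self, dim=LATENT_DIM):
        super().__init__()
        self.G_s2f = mlp(dim, dim)  # sMRI latent -> fMRI latent
        self.G_f2s = mlp(dim, dim)  # fMRI latent -> sMRI latent

    def cycle_loss(self, z_s, z_f):
        # Cycle consistency: mapping forth and back should recover the input.
        l1 = nn.L1Loss()
        return l1(self.G_f2s(self.G_s2f(z_s)), z_s) + l1(self.G_s2f(self.G_f2s(z_f)), z_f)

# Toy two-class classifier (CN vs. AD) over concatenated (sMRI, fMRI, SNP) latents.
classifier = mlp(3 * LATENT_DIM, 2)
gan = LatentCycleGAN()

# Inference with a missing fMRI channel: impute its latent from the sMRI latent.
z_smri = torch.randn(4, LATENT_DIM)  # placeholder for an sMRI encoder's output
z_snp = torch.randn(4, LATENT_DIM)   # placeholder for an SNP encoder's output
z_fmri = gan.G_s2f(z_smri)           # imputed fMRI latent
fused = torch.cat([z_smri, z_fmri, z_snp], dim=1)
logits = classifier(fused)

# Integrated Gradients: relevance of each fused latent feature for the AD logit
# (here assumed to be class index 1), relative to a zero baseline.
ig = IntegratedGradients(classifier)
attributions = ig.attribute(fused.detach(), baselines=torch.zeros_like(fused), target=1)
```

In the paper's pipeline the generators would operate on latents produced by the modality-specific encoders, and the cycle-consistency term would be trained jointly with adversarial losses, both omitted here for brevity.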
Related papers
- MIRROR: Multi-Modal Pathological Self-Supervised Representation Learning via Modality Alignment and Retention [52.106879463828044]
Histopathology and transcriptomics are fundamental modalities in oncology, encapsulating the morphological and molecular aspects of the disease.
We present MIRROR, a novel multi-modal representation learning method designed to foster both modality alignment and retention.
Extensive evaluations on TCGA cohorts for cancer subtyping and survival analysis highlight MIRROR's superior performance.
arXiv Detail & Related papers (2025-03-01T07:02:30Z) - ADAM-1: AI and Bioinformatics for Alzheimer's Detection and Microbiome-Clinical Data Integrations [4.426051635422496]
The Alzheimer's Disease Analysis Model Generation 1 (ADAM) is a multi-agent large language model (LLM) framework designed to integrate and analyze multi-modal data.
ADAM-1 synthesizes insights from diverse data sources and contextualizes findings using literature-driven evidence.
arXiv Detail & Related papers (2025-01-14T18:56:33Z) - Multimodal Outer Arithmetic Block Dual Fusion of Whole Slide Images and Omics Data for Precision Oncology [6.418265127069878]
We propose the use of omic embeddings during early and late fusion to capture complementary information from local (patch-level) to global (slide-level) interactions.
This dual fusion strategy enhances interpretability and classification performance, highlighting its potential for clinical diagnostics.
arXiv Detail & Related papers (2024-11-26T13:25:53Z) - Towards Within-Class Variation in Alzheimer's Disease Detection from Spontaneous Speech [60.08015780474457]
Alzheimer's Disease (AD) detection has emerged as a promising research area that employs machine learning classification models.
We identify within-class variation as a critical challenge in AD detection: individuals with AD exhibit a spectrum of cognitive impairments.
We propose two novel methods: Soft Target Distillation (SoTD) and Instance-level Re-balancing (InRe), targeting two problems respectively.
arXiv Detail & Related papers (2024-09-22T02:06:05Z) - MLC-GCN: Multi-Level Generated Connectome Based GCN for AD Analysis [14.541273450756128]
Alzheimer's Disease (AD) is a currently incurable neurodegenerative disease.
arXiv Detail & Related papers (2024-08-06T14:18:36Z) - GFE-Mamba: Mamba-based AD Multi-modal Progression Assessment via Generative Feature Extraction from MCI [5.355943545567233]
Alzheimer's Disease (AD) is an irreversible neurodegenerative disorder that often progresses from Mild Cognitive Impairment (MCI).
We introduce GFE-Mamba, a classifier based on Generative Feature Extraction (GFE).
It integrates data from assessment scales, MRI, and PET, enabling deeper multimodal fusion.
Our experimental results demonstrate that the GFE-Mamba model is effective in predicting the conversion from MCI to AD.
arXiv Detail & Related papers (2024-07-22T15:22:33Z) - AXIAL: Attention-based eXplainability for Interpretable Alzheimer's Localized Diagnosis using 2D CNNs on 3D MRI brain scans [43.06293430764841]
This study presents an innovative method for Alzheimer's disease diagnosis using 3D MRI designed to enhance the explainability of model decisions.
Our approach adopts a soft attention mechanism, enabling 2D CNNs to extract volumetric representations.
With voxel-level precision, our method identifies which specific areas are attended to, highlighting the predominant brain regions.
arXiv Detail & Related papers (2024-07-02T16:44:00Z) - Cross-Modality Translation with Generative Adversarial Networks to Unveil Alzheimer's Disease Biomarkers [13.798027995003908]
Generative approaches for cross-modality transformation have recently gained significant attention in neuroimaging.
We employed a cycle-GAN to synthesize data in an unpaired data transition and enhanced the transition by integrating weak supervision in cases where paired data were available.
Our findings revealed that our model offers remarkable capability, achieving a structural similarity index measure (SSIM) of $0.89 \pm 0.003$ for T1s and a correlation of $0.71 \pm 0.004$ for FNCs.
arXiv Detail & Related papers (2024-05-08T23:38:02Z) - Brain Imaging-to-Graph Generation using Adversarial Hierarchical Diffusion Models for MCI Causality Analysis [44.45598796591008]
A brain imaging-to-graph generation (BIGG) framework is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment analysis.
The hierarchical transformers in the generator are designed to estimate the noise at multiple scales.
Evaluations of the ADNI dataset demonstrate the feasibility and efficacy of the proposed model.
arXiv Detail & Related papers (2023-05-18T06:54:56Z) - Interpretable Weighted Siamese Network to Predict the Time to Onset of Alzheimer's Disease from MRI Images [5.10606091329134]
We re-frame brain image classification as an ordinal classification task to predict how close a patient is to the severe AD stage.
We select progressive MCI patients from the Alzheimer's Disease Neuroimaging Initiative dataset.
We train a Siamese network model to predict the time to onset of AD based on MRI brain images.
arXiv Detail & Related papers (2023-04-14T12:36:43Z) - Cross-Modal Causal Intervention for Medical Report Generation [109.83549148448469]
Medical report generation (MRG) is essential for computer-aided diagnosis and medication guidance.
Due to the spurious correlations within image-text data induced by visual and linguistic biases, it is challenging to generate accurate reports reliably describing lesion areas.
We propose a novel Visual-Linguistic Causal Intervention (VLCI) framework for MRG, which consists of a visual deconfounding module (VDM) and a linguistic deconfounding module (LDM).
arXiv Detail & Related papers (2023-03-16T07:23:55Z) - Pathology Steered Stratification Network for Subtype Identification in Alzheimer's Disease [7.594681424335177]
Alzheimer's disease (AD) is a heterogeneous, multitemporal neurodegenerative disorder characterized by beta-amyloid, pathologic tau, and neurodegeneration.
We propose a novel pathology steered stratification network (PSSN) that incorporates established domain knowledge in AD pathology through a reaction-diffusion model.
arXiv Detail & Related papers (2022-10-12T02:52:00Z) - Morphological feature visualization of Alzheimer's disease via Multidirectional Perception GAN [40.50404819220093]
A novel Multidirectional Perception Generative Adversarial Network (MP-GAN) is proposed to visualize the morphological features indicating the severity of Alzheimer's disease (AD).
MP-GAN achieves superior performance compared with the existing methods.
arXiv Detail & Related papers (2021-11-25T03:24:52Z) - An explainable two-dimensional single model deep learning approach for Alzheimer's disease diagnosis and brain atrophy localization [3.9281410693767036]
We propose an end-to-end deep learning approach for automated diagnosis of Alzheimer's disease (AD) and localization of important brain regions related to the disease from sMRI data.
Our approach has been evaluated on two publicly accessible datasets for two classification tasks of AD vs. cognitively normal (CN) and progressive MCI (pMCI) vs. stable MCI (sMCI).
The experimental results indicate that our approach outperforms the state-of-the-art approaches, including those using multi-model and 3D CNN methods.
arXiv Detail & Related papers (2021-07-28T07:19:00Z) - Multimodal Representations Learning and Adversarial Hypergraph Fusion for Early Alzheimer's Disease Prediction [30.99183477161096]
We propose a novel representation learning and adversarial hypergraph fusion framework for Alzheimer's disease diagnosis.
Our model achieves superior performance on Alzheimer's disease detection compared with other related models.
arXiv Detail & Related papers (2021-07-21T08:08:05Z) - MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning [49.57261599776167]
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO) for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Interpretable multimodal fusion networks reveal mechanisms of brain cognition [26.954460880062506]
We develop an interpretable multimodal fusion model, gCAM-CCL, which can perform automated diagnosis and result interpretation simultaneously.
We validate the gCAM-CCL model on a brain imaging-genetics study and show that it performs well for both classification and mechanism analysis.
arXiv Detail & Related papers (2020-06-16T18:52:50Z) - A Graph Gaussian Embedding Method for Predicting Alzheimer's Disease Progression with MEG Brain Networks [59.15734147867412]
Characterizing the subtle changes of functional brain networks associated with Alzheimer's disease (AD) is important for early diagnosis and prediction of disease progression.
We developed a new deep learning method, termed the multiple graph Gaussian embedding model (MG2G).
We used MG2G to detect the intrinsic latent dimensionality of MEG brain networks, predict the progression of patients with mild cognitive impairment (MCI) to AD, and identify brain regions with network alterations related to MCI.
arXiv Detail & Related papers (2020-05-08T02:29:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.