Multi-omic Prognosis of Alzheimer's Disease with Asymmetric Cross-Modal Cross-Attention Network
- URL: http://arxiv.org/abs/2507.08855v1
- Date: Wed, 09 Jul 2025 07:12:38 GMT
- Title: Multi-omic Prognosis of Alzheimer's Disease with Asymmetric Cross-Modal Cross-Attention Network
- Authors: Yang Ming, Jiang Shi Zhong, Zhou Su Juan
- Abstract summary: This paper proposes a novel deep learning framework to assist medical professionals in Alzheimer's Disease diagnosis. By fusing multi-view medical information such as brain fluorodeoxyglucose positron emission tomography (PET), magnetic resonance imaging (MRI), genetic data, and clinical data, it can accurately detect the presence of AD. The model achieves an accuracy of 94.88% on the test set.
- Score: 0.5325390073522079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Alzheimer's Disease (AD) is an irreversible neurodegenerative disease whose main symptom is progressive cognitive decline. In deep learning-assisted AD diagnosis, traditional convolutional neural networks and simple feature-concatenation methods fail to exploit the complementary information between multimodal data, and naive concatenation tends to lose key information during modal fusion. Recent advances in deep learning have opened new possibilities for fusing multimodal features effectively. This paper proposes a novel deep learning framework to assist medical professionals in AD diagnosis. By fusing multi-view medical information such as brain fluorodeoxyglucose positron emission tomography (PET), magnetic resonance imaging (MRI), genetic data, and clinical data, it can accurately distinguish AD, Mild Cognitive Impairment (MCI), and Cognitively Normal (CN) cases. The innovation of the algorithm lies in an asymmetric cross-modal cross-attention mechanism, which captures the key features of the interactions between different data modalities. The paper compares this mechanism against traditional unimodal and multimodal deep learning frameworks for AD diagnosis, and evaluates its importance. The model achieves an accuracy of 94.88% on the test set.
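The paper's exact architecture is not given in this listing; as a rough illustration only, the scaled dot-product cross-attention step that such a mechanism builds on can be sketched in plain Python. The "asymmetry" here is an assumption for illustration: imaging features serve only as queries, while tabular (genetic/clinical) features serve only as keys and values, not the reverse.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query token (one modality)
    attends over the key/value tokens of another modality."""
    d = len(keys[0])  # key dimensionality, used for score scaling
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        # Weighted sum of value vectors gives the fused feature.
        out.append([sum(wj * v[i] for wj, v in zip(w, values))
                    for i in range(len(values[0]))])
    return out

# Toy example: one imaging token attends over two tabular tokens.
img = [[1.0, 0.0]]              # queries (imaging branch)
tab = [[1.0, 0.0], [0.0, 1.0]]  # keys (tabular branch)
val = [[1.0, 2.0], [3.0, 4.0]]  # values (tabular branch)
fused = cross_attention(img, tab, val)
```

In a full model this fused vector would feed a classification head over the AD/MCI/CN labels; the toy dimensions and variable names above are illustrative, not taken from the paper.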
Related papers
- MRC-GAT: A Meta-Relational Copula-Based Graph Attention Network for Interpretable Multimodal Alzheimer's Disease Diagnosis [2.2399170518036913]
Alzheimer's disease (AD) is a progressive neurodegenerative condition necessitating early and precise diagnosis to provide prompt clinical management. Recent studies have increasingly focused on computer-aided diagnostic models to enhance precision and reliability. To overcome these limitations, the Meta-Relational Copula-Based Graph Attention Network (MRC-GAT) is proposed as an efficient multimodal model for AD classification tasks.
arXiv Detail & Related papers (2026-02-17T17:15:32Z) - R-GenIMA: Integrating Neuroimaging and Genetics with Interpretable Multimodal AI for Alzheimer's Disease Progression [63.97617759805451]
Early detection of Alzheimer's disease requires models capable of integrating macro-scale neuroanatomical alterations with micro-scale genetic susceptibility. We introduce R-GenIMA, an interpretable multimodal large language model that couples a novel ROI-wise vision transformer with genetic prompting. R-GenIMA achieves state-of-the-art performance in four-way classification across normal cognition, subjective memory concerns, mild cognitive impairment, and AD.
arXiv Detail & Related papers (2025-12-22T02:54:10Z) - impuTMAE: Multi-modal Transformer with Masked Pre-training for Missing Modalities Imputation in Cancer Survival Prediction [75.43342771863837]
We introduce impuTMAE, a novel transformer-based end-to-end approach with an efficient multimodal pre-training strategy. It learns inter- and intra-modal interactions while simultaneously imputing missing modalities by reconstructing masked patches. Our model is pre-trained on heterogeneous, incomplete data and fine-tuned for glioma survival prediction using the TCGA-GBM/LGG and BraTS datasets.
arXiv Detail & Related papers (2025-08-08T10:01:16Z) - An Interpretable Multi-Plane Fusion Framework With Kolmogorov-Arnold Network Guided Attention Enhancement for Alzheimer's Disease Diagnosis [1.3401966602181168]
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely impairs cognitive function and quality of life. To overcome these limitations, we propose an innovative framework, MPF-KANSC, which integrates multi-plane fusion (MPF) with Kolmogorov-Arnold network-guided attention enhancement. Experiments on the ADNI dataset confirm that the proposed MPF-KANSC achieves superior performance in AD diagnosis.
arXiv Detail & Related papers (2025-08-08T09:26:49Z) - A Novel Multimodal Framework for Early Detection of Alzheimers Disease Using Deep Learning [0.0]
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that poses significant challenges for early diagnosis. Traditional diagnostic methods fall short of capturing the multifaceted nature of the disease. We propose a novel framework for the early detection of AD that integrates data from three primary sources: MRI imaging, cognitive assessments, and biomarkers.
arXiv Detail & Related papers (2025-08-05T03:46:59Z) - Cross-modal Causal Intervention for Alzheimer's Disease Prediction [12.485088483891843]
We propose a visual-language causal intervention framework named Alzheimer's Disease Prediction with Cross-modal Causal Intervention. Our framework implicitly eliminates confounders through causal intervention. Experimental results demonstrate the outstanding performance of our method in distinguishing CN/MCI/AD cases.
arXiv Detail & Related papers (2025-07-18T14:21:24Z) - Online Multi-modal Root Cause Analysis [61.94987309148539]
Root Cause Analysis (RCA) is essential for pinpointing the root causes of failures in microservice systems.
Existing online RCA methods handle only single-modal data, overlooking complex interactions in multi-modal systems.
We introduce OCEAN, a novel online multi-modal causal structure learning method for root cause localization.
arXiv Detail & Related papers (2024-10-13T21:47:36Z) - Multi-Resolution Graph Analysis of Dynamic Brain Network for Classification of Alzheimer's Disease and Mild Cognitive Impairment [0.0]
Alzheimer's disease (AD) is a neurodegenerative disorder marked by memory loss and cognitive decline. Traditional methods, such as Pearson's correlation, have been used to calculate association matrices. We introduce a novel method that integrates discrete wavelet transform (DWT) and graph theory to model the dynamic behavior of brain networks.
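The paper's exact pipeline is not given in this listing, but as a loose illustration of the wavelet side of such a method, a one-level Haar DWT splits a signal (for example, a regional fMRI time series) into low-frequency approximation and high-frequency detail coefficients; the function below is a minimal sketch, not the authors' implementation.

```python
import math

def haar_dwt(signal):
    """One-level Haar wavelet transform: returns (approximation, detail)
    coefficient lists for an even-length signal."""
    s = math.sqrt(2.0)
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) / s for a, b in pairs]  # low-pass: pairwise averages
    detail = [(a - b) / s for a, b in pairs]  # high-pass: pairwise differences
    return approx, detail

# A locally flat signal yields zero detail coefficients.
approx, detail = haar_dwt([1.0, 1.0, 2.0, 2.0])
```

In a multi-resolution analysis, the approximation coefficients would be decomposed again at each level, and association matrices could then be computed per frequency band.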
arXiv Detail & Related papers (2024-09-06T07:26:14Z) - Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumor represents one of the most fatal cancers around the world, and is very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z) - Alzheimer's Disease Prediction via Brain Structural-Functional Deep Fusing Network [5.945843237682432]
Cross-modal transformer generative adversarial network (CT-GAN) is proposed to fuse functional and structural information.
By analyzing the generated connectivity features, the proposed model can identify AD-related brain connections.
Evaluations on the public ADNI dataset show that the proposed CT-GAN can dramatically improve prediction performance and detect AD-related brain regions effectively.
arXiv Detail & Related papers (2023-09-28T07:06:42Z) - UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for the universal brain MRI diagnosis, termed as UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z) - Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging genetic data from Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z) - Brain Imaging-to-Graph Generation using Adversarial Hierarchical Diffusion Models for MCI Causality Analysis [44.45598796591008]
Brain imaging-to-graph generation (BIGG) framework is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment analysis.
The hierarchical transformers in the generator are designed to estimate the noise at multiple scales.
Evaluations on the ADNI dataset demonstrate the feasibility and efficacy of the proposed model.
arXiv Detail & Related papers (2023-05-18T06:54:56Z) - Tensor-Based Multi-Modality Feature Selection and Regression for Alzheimer's Disease Diagnosis [25.958167380664083]
We propose a novel tensor-based multi-modality feature selection and regression method for diagnosis and biomarker identification of Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI).
We present the practical advantages of our method for the analysis of ADNI data using three imaging modalities.
arXiv Detail & Related papers (2022-09-23T02:17:27Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - A Graph Gaussian Embedding Method for Predicting Alzheimer's Disease Progression with MEG Brain Networks [59.15734147867412]
Characterizing the subtle changes of functional brain networks associated with Alzheimer's disease (AD) is important for early diagnosis and prediction of disease progression.
We developed a new deep learning method, termed multiple graph Gaussian embedding model (MG2G).
We used MG2G to detect the intrinsic latent dimensionality of MEG brain networks, predict the progression of patients with mild cognitive impairment (MCI) to AD, and identify brain regions with network alterations related to MCI.
arXiv Detail & Related papers (2020-05-08T02:29:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.