Multimodal Representations Learning and Adversarial Hypergraph Fusion for Early Alzheimer's Disease Prediction
- URL: http://arxiv.org/abs/2107.09928v1
- Date: Wed, 21 Jul 2021 08:08:05 GMT
- Authors: Qiankun Zuo, Baiying Lei, Yanyan Shen, Yong Liu, Zhiguang Feng,
Shuqiang Wang
- Abstract summary: We propose a novel representation learning and adversarial hypergraph fusion framework for Alzheimer's disease diagnosis.
Our model achieves superior performance on Alzheimer's disease detection compared with other related models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multimodal neuroimaging can provide complementary information about
dementia, but the small size of complete multimodal datasets limits
representation learning. Moreover, inconsistent data distributions across
modalities may lead to ineffective fusion, which fails to sufficiently explore
intra-modal and inter-modal interactions and compromises diagnostic
performance. To solve these problems, we propose a novel multimodal
representation learning and adversarial hypergraph fusion (MRL-AHF) framework
for Alzheimer's disease diagnosis using complete trimodal images. First, an
adversarial strategy and a pre-trained model are incorporated into the MRL
module to extract latent representations from the multimodal data. Then, two
hypergraphs are constructed from the latent representations, and an adversarial
network based on graph convolution is employed to narrow the distribution
difference of the hyperedge features. Finally, the hyperedge-invariant features
are fused for disease prediction by hyperedge convolution. Experiments on the
public Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate
that our model achieves superior performance on Alzheimer's disease detection
compared with other related models, and it provides a possible way to
understand the underlying mechanisms of the disorder's progression by analyzing
abnormal brain connections.
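The final fusion step above applies hyperedge convolution to the hyperedge-invariant features. As an illustrative sketch only, a standard hypergraph convolution layer (the HGNN-style propagation rule; the paper's exact layer may differ) can be written as X' = ReLU(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ), where H is the node-hyperedge incidence matrix. All shapes and names below are hypothetical:

```python
import numpy as np

def hyperedge_convolution(X, H, Theta, edge_w=None):
    """One HGNN-style hypergraph convolution layer.

    X:      (n_nodes, in_dim)   node features (e.g., latent representations)
    H:      (n_nodes, n_edges)  incidence matrix, H[v, e] = 1 if node v is in hyperedge e
    Theta:  (in_dim, out_dim)   learnable weight matrix
    edge_w: (n_edges,)          optional hyperedge weights (default: all ones)
    """
    n_nodes, n_edges = H.shape
    W = np.ones(n_edges) if edge_w is None else edge_w
    Dv = H @ W             # node degrees (weighted)
    De = H.sum(axis=0)     # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    # normalized hypergraph Laplacian-style propagation matrix
    A = Dv_inv_sqrt @ H @ np.diag(W) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)  # ReLU activation

# toy example: 4 nodes, 2 hyperedges
rng = np.random.default_rng(0)
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 1]], dtype=float)
X = rng.standard_normal((4, 3))
Theta = rng.standard_normal((3, 2))
out = hyperedge_convolution(X, H, Theta)
print(out.shape)  # (4, 2)
```

Each output node feature aggregates information from all hyperedges the node belongs to, which is how higher-order (beyond pairwise) relations enter the fusion.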
Related papers
- Diagnosing Alzheimer's Disease using Early-Late Multimodal Data Fusion with Jacobian Maps [1.5501208213584152]
Alzheimer's disease (AD) is a prevalent and debilitating neurodegenerative disorder impacting a large aging population.
We propose an efficient early-late fusion (ELF) approach that leverages a convolutional neural network for automated feature extraction together with random forests.
To tackle the challenge of detecting subtle changes in brain volume, we transform images into the Jacobian domain (JD).
arXiv Detail & Related papers (2023-10-25T19:02:57Z)
- UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for the universal brain MRI diagnosis, termed as UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z)
- Multi-modal Graph Neural Network for Early Diagnosis of Alzheimer's Disease from sMRI and PET Scans [11.420077093805382]
We propose to use graph neural networks (GNN) that are designed to deal with problems in non-Euclidean domains.
In this study, we demonstrate how brain networks can be created from sMRI or PET images.
We then present a multi-modal GNN framework where each modality has its own branch of GNN and a technique is proposed to combine the multi-modal data.
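The branch-per-modality pattern described above can be sketched as follows. This is not the paper's implementation; it assumes a simple symmetric-normalized graph convolution per branch and plain concatenation as the combination step, with all names, shapes, and random graphs being hypothetical:

```python
import numpy as np

def gcn_layer(A, X, W):
    """Symmetric-normalized graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def random_graph(rng, n):
    """Random symmetric adjacency standing in for a brain network."""
    A = np.triu((rng.random((n, n)) > 0.5).astype(float), 1)
    return A + A.T

rng = np.random.default_rng(0)
n = 5  # brain regions (nodes)

# per-modality brain networks and node features
A_smri, A_pet = random_graph(rng, n), random_graph(rng, n)
X_smri, X_pet = rng.standard_normal((n, 3)), rng.standard_normal((n, 3))
W_smri, W_pet = rng.standard_normal((3, 4)), rng.standard_normal((3, 4))

# one GNN branch per modality, then fusion by concatenating pooled embeddings
h_smri = gcn_layer(A_smri, X_smri, W_smri).mean(axis=0)
h_pet = gcn_layer(A_pet, X_pet, W_pet).mean(axis=0)
z = np.concatenate([h_smri, h_pet])  # fused subject-level embedding
print(z.shape)  # (8,)
```

The fused vector z would then feed a classifier head; the cited work proposes its own combination technique rather than plain concatenation.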
arXiv Detail & Related papers (2023-07-31T02:04:05Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging genetic data from Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes [27.331511924585023]
This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data.
We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion.
We show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training.
arXiv Detail & Related papers (2022-09-07T01:37:19Z)
- Multi-Modal Hypergraph Diffusion Network with Dual Prior for Alzheimer Classification [4.179845212740817]
We introduce a novel semi-supervised hypergraph learning framework for Alzheimer's disease diagnosis.
Our framework allows for higher-order relations among multi-modal imaging and non-imaging data.
We demonstrate, through our experiments, that our framework is able to outperform current techniques for Alzheimer's disease diagnosis.
arXiv Detail & Related papers (2022-04-04T10:31:42Z)
- A Prior Guided Adversarial Representation Learning and Hypergraph Perceptual Network for Predicting Abnormal Connections of Alzheimer's Disease [29.30199956567813]
Alzheimer's disease is characterized by alterations of the brain's structural and functional connectivity.
PGARL-HPN is proposed to predict abnormal brain connections using triple-modality medical images.
arXiv Detail & Related papers (2021-10-12T03:10:37Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
- A Graph Gaussian Embedding Method for Predicting Alzheimer's Disease Progression with MEG Brain Networks [59.15734147867412]
Characterizing the subtle changes of functional brain networks associated with Alzheimer's disease (AD) is important for early diagnosis and prediction of disease progression.
We developed a new deep learning method, termed the multiple graph Gaussian embedding model (MG2G).
We used MG2G to detect the intrinsic latent dimensionality of MEG brain networks, predict the progression of patients with mild cognitive impairment (MCI) to AD, and identify brain regions with network alterations related to MCI.
arXiv Detail & Related papers (2020-05-08T02:29:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.