Multimodal Attention-based Deep Learning for Alzheimer's Disease Diagnosis
- URL: http://arxiv.org/abs/2206.08826v1
- Date: Fri, 17 Jun 2022 15:10:00 GMT
- Title: Multimodal Attention-based Deep Learning for Alzheimer's Disease Diagnosis
- Authors: Michal Golovanevsky, Carsten Eickhoff, and Ritambhara Singh
- Abstract summary: Alzheimer's Disease (AD) is the most common neurodegenerative disorder with one of the most complex pathogeneses.
We present a Multimodal Alzheimer's Disease Diagnosis framework (MADDi) to accurately detect the presence of AD.
- Score: 9.135911493822261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Alzheimer's Disease (AD) is the most common neurodegenerative disorder with
one of the most complex pathogeneses, making effective and clinically
actionable decision support difficult. The objective of this study was to
develop a novel multimodal deep learning framework to aid medical professionals
in AD diagnosis. We present a Multimodal Alzheimer's Disease Diagnosis
framework (MADDi) to accurately detect the presence of AD and mild cognitive
impairment (MCI) from imaging, genetic, and clinical data. MADDi is novel in
that we use cross-modal attention, which captures interactions between
modalities - a method not previously explored in this domain. We perform
multi-class classification, a challenging task considering the strong
similarities between MCI and AD. We compare with previous state-of-the-art
models, evaluate the importance of attention, and examine the contribution of
each modality to the model's performance. MADDi classifies MCI, AD, and
controls with 96.88% accuracy on a held-out test set. When examining the
contribution of different attention schemes, we found that the combination of
cross-modal attention with self-attention performed best, while the model with
no attention layers performed worst, with a 7.9% difference in F1-scores.
Our experiments underlined the importance of structured clinical data to help
machine learning models contextualize and interpret the remaining modalities.
Extensive ablation studies showed that any multimodal mixture of input features
without access to structured clinical information suffered marked performance
losses. This study demonstrates the merit of combining multiple input
modalities via cross-modal attention to deliver highly accurate AD diagnostic
decision support.
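The abstract's core mechanism, cross-modal attention, lets features from one modality act as queries over another modality's features so that interactions between modalities are captured directly. The paper's code is not reproduced here; the following is a minimal NumPy sketch of single-head scaled dot-product cross-attention, with hypothetical token counts and dimensions chosen only for illustration (the actual MADDi architecture, head counts, and feature extractors are not specified in this listing).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(queries, keys_values, wq, wk, wv):
    """Scaled dot-product attention in which one modality (queries)
    attends to another modality (keys_values)."""
    q = queries @ wq
    k = keys_values @ wk
    v = keys_values @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (n_q, n_kv) similarity
    return softmax(scores, axis=-1) @ v       # each query is a weighted mix of v

rng = np.random.default_rng(0)
d = 16                                  # hypothetical embedding size
imaging = rng.normal(size=(10, d))      # e.g. 10 imaging feature tokens
clinical = rng.normal(size=(5, d))      # e.g. 5 clinical feature tokens
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))

# Imaging features attend to clinical features; the result is an
# imaging representation contextualized by the clinical modality.
fused = cross_modal_attention(imaging, clinical, wq, wk, wv)
print(fused.shape)  # (10, 16)
```

In a full model, such blocks would typically be applied pairwise between modalities (imaging, genetic, clinical) and combined with self-attention, matching the abstract's finding that the cross-modal plus self-attention combination performed best.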
Related papers
- Class Balancing Diversity Multimodal Ensemble for Alzheimer's Disease Diagnosis and Early Detection [1.1475433903117624]
Alzheimer's disease poses significant global health challenges due to its increasing prevalence and associated societal costs.
Traditional diagnostic methods and single-modality data often fall short in identifying early-stage AD.
This study introduces a novel approach: multImodal enseMble via class BALancing diversity for iMbalancEd Data (IMBALMED)
arXiv Detail & Related papers (2024-10-14T10:56:43Z)
- Towards Within-Class Variation in Alzheimer's Disease Detection from Spontaneous Speech
Alzheimer's Disease (AD) detection has emerged as a promising research area that employs machine learning classification models.
We identify within-class variation as a critical challenge in AD detection: individuals with AD exhibit a spectrum of cognitive impairments.
We propose two novel methods, Soft Target Distillation (SoTD) and Instance-level Re-balancing (InRe), targeting the two problems respectively.
arXiv Detail & Related papers (2024-09-22T02:06:05Z)
- Toward Robust Early Detection of Alzheimer's Disease via an Integrated Multimodal Learning Approach [5.9091823080038814]
Alzheimer's Disease (AD) is a complex neurodegenerative disorder marked by memory loss, executive dysfunction, and personality changes.
This study introduces an advanced multimodal classification model that integrates clinical, cognitive, neuroimaging, and EEG data.
arXiv Detail & Related papers (2024-08-29T08:26:00Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
- Multimodal Identification of Alzheimer's Disease: A Review [4.6358128931887705]
Alzheimer's disease is a progressive neurological disorder characterized by cognitive impairment and memory loss.
In recent years, a considerable number of teams have applied computer-aided diagnostic techniques to early classification research of AD.
arXiv Detail & Related papers (2023-10-06T12:48:15Z)
- Cross-Attention is Not Enough: Incongruity-Aware Dynamic Hierarchical Fusion for Multimodal Affect Recognition [69.32305810128994]
Incongruity between modalities poses a challenge for multimodal fusion, especially in affect recognition.
We propose the Hierarchical Crossmodal Transformer with Dynamic Modality Gating (HCT-DMG), a lightweight incongruity-aware model.
HCT-DMG: 1) outperforms previous multimodal models with a reduced size of approximately 0.8M parameters; 2) recognizes hard samples where incongruity makes affect recognition difficult; 3) mitigates the incongruity at the latent level in crossmodal attention.
arXiv Detail & Related papers (2023-05-23T01:24:15Z)
- Leveraging Pretrained Representations with Task-related Keywords for Alzheimer's Disease Detection [69.53626024091076]
Alzheimer's disease (AD) is particularly prominent in older adults.
Recent advances in pre-trained models motivate AD detection modeling to shift from low-level features to high-level representations.
This paper presents several efficient methods to extract better AD-related cues from high-level acoustic and linguistic features.
arXiv Detail & Related papers (2023-03-14T16:03:28Z)
- Tensor-Based Multi-Modality Feature Selection and Regression for Alzheimer's Disease Diagnosis [25.958167380664083]
We propose a novel tensor-based multi-modality feature selection and regression method for diagnosis and biomarker identification of Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI).
We present the practical advantages of our method for the analysis of ADNI data using three imaging modalities.
arXiv Detail & Related papers (2022-09-23T02:17:27Z)
- MEDUSA: Multi-scale Encoder-Decoder Self-Attention Deep Neural Network Architecture for Medical Image Analysis [71.2022403915147]
We introduce MEDUSA, a multi-scale encoder-decoder self-attention mechanism tailored for medical image analysis.
We obtain state-of-the-art performance on challenging medical image analysis benchmarks including COVIDx, RSNA RICORD, and RSNA Pneumonia Challenge.
arXiv Detail & Related papers (2021-10-12T15:05:15Z)
- Differential Diagnosis of Frontotemporal Dementia and Alzheimer's Disease using Generative Adversarial Network [0.0]
Frontotemporal dementia and Alzheimer's disease are two common forms of dementia and are easily misdiagnosed as each other.
Differentiating between the two dementia types is crucial for determining disease-specific intervention and treatment.
Recent developments in deep-learning-based approaches in the field of medical image computing are delivering some of the best performance for many binary classification tasks.
arXiv Detail & Related papers (2021-09-12T22:40:50Z)
- Inheritance-guided Hierarchical Assignment for Clinical Automatic Diagnosis [50.15205065710629]
Clinical diagnosis, which aims to assign diagnosis codes to a patient based on the clinical note, plays an essential role in clinical decision-making.
We propose a novel framework to combine inheritance-guided hierarchical assignment and co-occurrence graph propagation for clinical automatic diagnosis.
arXiv Detail & Related papers (2021-01-27T13:16:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.