MRI-based Alzheimer's disease prediction via distilling the knowledge in multi-modal data
- URL: http://arxiv.org/abs/2104.03618v1
- Date: Thu, 8 Apr 2021 09:06:39 GMT
- Title: MRI-based Alzheimer's disease prediction via distilling the knowledge in multi-modal data
- Authors: Hao Guan (1), Chaoyue Wang (1), Dacheng Tao (1) ((1) School of Computer Science, The University of Sydney)
- Abstract summary: We propose a multi-modal multi-instance distillation scheme, which aims to distill the knowledge learned from multi-modal data to an MRI-based network for MCI conversion prediction.
To the best of our knowledge, this is the first study that attempts to improve an MRI-based prediction model by leveraging extra supervision distilled from multi-modal information.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mild cognitive impairment (MCI) conversion prediction, i.e., identifying MCI patients at high risk of converting to Alzheimer's disease (AD), is essential for
preventing or slowing the progression of AD. Although previous studies have
shown that the fusion of multi-modal data can effectively improve the
prediction accuracy, their applications are largely restricted by the limited
availability or high cost of multi-modal data. Building an effective prediction
model using only magnetic resonance imaging (MRI) remains a challenging
research topic. In this work, we propose a multi-modal multi-instance
distillation scheme, which aims to distill the knowledge learned from
multi-modal data to an MRI-based network for MCI conversion prediction. In
contrast to existing distillation algorithms, the proposed multi-instance probabilities better represent the complicated atrophy distributions and can guide the MRI-based network to better explore the input MRI. To the best of our knowledge, this is the first study that attempts to improve an MRI-based prediction model by leveraging extra supervision distilled from multi-modal information. Experiments demonstrate the advantage of our framework, suggesting its potential in data-limited clinical settings.
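To make the distillation idea concrete, here is a minimal sketch of generic soft-target knowledge distillation in PyTorch: a teacher trained offline on multi-modal data supplies softened logits that supervise an MRI-only student alongside the hard labels. This illustrates the general technique only, not the paper's multi-instance scheme; the function name, temperature, and loss weighting below are illustrative assumptions.

    # Hedged sketch: Hinton-style soft-target distillation from a
    # multi-modal teacher to an MRI-only student (not the paper's
    # multi-instance probabilities).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=4.0, alpha=0.5):
        """Weighted sum of hard-label CE and soft-target KL divergence."""
        # Soft targets from the multi-modal teacher, smoothed by temperature.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
        log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
        # The KL term is scaled by T^2 to keep gradient magnitudes comparable.
        kd = F.kl_div(log_soft_student, soft_teacher,
                      reduction="batchmean") * temperature ** 2
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1.0 - alpha) * ce

    # Usage (shapes only): the teacher's logits can be cached, so the
    # multi-modal data is needed only at training time, never at test time.
    student_logits = torch.randn(8, 2)    # MRI-only network output
    teacher_logits = torch.randn(8, 2)    # cached multi-modal teacher output
    labels = torch.randint(0, 2, (8,))    # MCI converter vs. stable
    loss = distillation_loss(student_logits, teacher_logits, labels)

The key property, and the reason this suits data-limited clinics, is that the multi-modal branch appears only in the loss: at inference the student consumes MRI alone.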
Related papers
- Continually Evolved Multimodal Foundation Models for Cancer Prognosis [50.43145292874533]
Cancer prognosis is a critical task that involves predicting patient outcomes and survival rates.
Previous studies have integrated diverse data modalities, such as clinical notes, medical images, and genomic data, leveraging their complementary information.
Existing approaches face two major limitations. First, they struggle to incorporate newly arrived data with varying distributions into training, such as patient records from different hospitals.
Second, most multimodal integration methods rely on simplistic concatenation or task-specific pipelines, which fail to capture the complex interdependencies across modalities.
arXiv Detail & Related papers (2025-01-30T06:49:57Z)
- ITCFN: Incomplete Triple-Modal Co-Attention Fusion Network for Mild Cognitive Impairment Conversion Prediction [12.893857146169045]
Alzheimer's disease (AD) is a common neurodegenerative disease among the elderly.
Early prediction of its prodromal stage, mild cognitive impairment (MCI), and timely intervention can decrease the risk of progressing to AD.
arXiv Detail & Related papers (2025-01-20T05:12:31Z)
- RADAR: Robust Two-stage Modality-incomplete Industrial Anomaly Detection [61.71770293720491]
We propose a novel two-stage Robust modAlity-incomplete fusing and Detecting frAmewoRk, abbreviated as RADAR.
Our bootstrapping philosophy is to enhance the two stages in MIIAD, improving the robustness of the Multimodal Transformer.
Our experimental results demonstrate that the proposed RADAR significantly surpasses conventional MIIAD methods in terms of effectiveness and robustness.
arXiv Detail & Related papers (2024-10-02T16:47:55Z)
- DDEvENet: Evidence-based Ensemble Learning for Uncertainty-aware Brain Parcellation Using Diffusion MRI [5.757390718589337]
EVENet is an Evidence-based Ensemble Neural Network for anatomical brain parcellation using diffusion MRI.
We obtained accurate parcellation and uncertainty estimates across different datasets from healthy and clinical populations.
Thanks to the uncertainty estimation, our EVENet approach demonstrates a good ability to detect abnormal brain regions in patients with lesions.
arXiv Detail & Related papers (2024-09-11T05:26:23Z)
- M2EF-NNs: Multimodal Multi-instance Evidence Fusion Neural Networks for Cancer Survival Prediction [24.323961146023358]
We propose a neural network model called M2EF-NNs for accurate cancer survival prediction.
To capture global information in the images, we use a pre-trained Vision Transformer (ViT) model.
We are the first to apply Dempster-Shafer evidence theory (DST) to cancer survival prediction; a minimal sketch of the DST combination rule appears after this list.
arXiv Detail & Related papers (2024-08-08T02:31:04Z)
- XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z)
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
However, acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- Predicting multiple sclerosis disease severity with multimodal deep neural networks [10.599189568556508]
We describe a pilot effort to leverage structured EHR data, neuroimaging data, and clinical notes to build a multi-modal deep learning framework to predict a patient's MS disease severity.
The proposed pipeline demonstrates up to a 25% increase in the Area Under the Receiver Operating Characteristic curve (AUROC) compared to models using single-modal data.
arXiv Detail & Related papers (2023-04-08T16:23:18Z)
- Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
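As promised in the M2EF-NNs entry above, here is a minimal, self-contained sketch of Dempster's rule of combination, the core fusion step in Dempster-Shafer evidence theory. It is a generic illustration, not that paper's implementation; the mass functions and the {low, high} risk frame are invented for the example.

    # Hedged sketch: Dempster's rule for fusing two independent mass
    # functions over the same frame of discernment.
    from itertools import product

    def combine(m1, m2):
        """Fuse two mass functions given as dicts: frozenset -> mass."""
        combined, conflict = {}, 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y  # mass falling on the empty set
        # Normalise by 1 - K, where K is the total conflicting mass.
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    # Two hypothetical "modalities" giving evidence about survival risk:
    LOW, HIGH = frozenset({"low"}), frozenset({"high"})
    EITHER = LOW | HIGH                    # mass on the whole frame = ignorance
    m_image = {HIGH: 0.6, EITHER: 0.4}     # e.g. an imaging branch
    m_genomic = {HIGH: 0.5, LOW: 0.2, EITHER: 0.3}
    print(combine(m_image, m_genomic))     # fused beliefs, conflict renormalised

Unlike a softmax average, the combination keeps an explicit mass on the full frame, which is what gives evidence-fusion models their built-in uncertainty estimate.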