On self-supervised multi-modal representation learning: An application
to Alzheimer's disease
- URL: http://arxiv.org/abs/2012.13619v1
- Date: Fri, 25 Dec 2020 19:51:19 GMT
- Title: On self-supervised multi-modal representation learning: An application
to Alzheimer's disease
- Authors: Alex Fedorov, Lei Wu, Tristan Sylvain, Margaux Luck, Thomas P.
DeRamus, Dmitry Bleklov, Sergey M. Plis, Vince D. Calhoun
- Abstract summary: Introspection of deep supervised predictive models trained on functional and structural brain imaging may uncover novel markers of Alzheimer's disease (AD). Deep unsupervised and, recently, contrastive self-supervised approaches, which are not biased toward classification, are better candidates for the task.
- Score: 21.495288589801476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Introspection of deep supervised predictive models trained on functional and
structural brain imaging may uncover novel markers of Alzheimer's disease (AD).
However, supervised training is prone to learning from spurious features
(shortcut learning), impairing its value in the discovery process. Deep
unsupervised and, recently, contrastive self-supervised approaches, not biased
toward classification, are better candidates for the task. Their multimodal variants
specifically offer additional regularization via modality interactions. In this
paper, we introduce a way to exhaustively consider multimodal architectures for
contrastive self-supervised fusion of fMRI and MRI of AD patients and controls.
We show that this multimodal fusion results in representations that improve the
results of the downstream classification for both modalities. We investigate
the fused self-supervised features projected into the brain space and introduce
a numerically stable way to do so.
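The core objective behind contrastive self-supervised fusion of two modalities can be illustrated with a minimal sketch. This is not the authors' code: the symmetric InfoNCE formulation, the function name, and the temperature value are illustrative assumptions, and the encoders producing the embeddings are left out.

```python
import torch
import torch.nn.functional as F

def cross_modal_info_nce(z_fmri: torch.Tensor, z_smri: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired fMRI/sMRI embeddings.

    Embeddings from the same subject are treated as positives; all other
    pairings in the batch serve as negatives.
    """
    z_fmri = F.normalize(z_fmri, dim=-1)
    z_smri = F.normalize(z_smri, dim=-1)
    logits = z_fmri @ z_smri.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(z_fmri.size(0), device=z_fmri.device)
    # Contrast in both directions: fMRI -> sMRI and sMRI -> fMRI.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

Minimizing this loss pulls the two modality embeddings of the same subject together while pushing apart those of different subjects, which is the regularizing "modality interaction" the abstract refers to.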
Related papers
- Towards Within-Class Variation in Alzheimer's Disease Detection from Spontaneous Speech [60.08015780474457]
Alzheimer's Disease (AD) detection has emerged as a promising research area that employs machine learning classification models.
We identify within-class variation as a critical challenge in AD detection: individuals with AD exhibit a spectrum of cognitive impairments.
We propose two novel methods: Soft Target Distillation (SoTD) and Instance-level Re-balancing (InRe), targeting two problems respectively.
arXiv Detail & Related papers (2024-09-22T02:06:05Z)
- Unsupervised Continual Anomaly Detection with Contrastively-learned Prompt [80.43623986759691]
We introduce a novel Unsupervised Continual Anomaly Detection framework called UCAD.
The framework equips the UAD with continual learning capability through contrastively-learned prompts.
We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation.
arXiv Detail & Related papers (2024-01-02T03:37:11Z)
- Joint Self-Supervised and Supervised Contrastive Learning for Multimodal MRI Data: Towards Predicting Abnormal Neurodevelopment [5.771221868064265]
We present a novel joint self-supervised and supervised contrastive learning method to learn the robust latent feature representation from multimodal MRI data.
Our method has the capability to facilitate computer-aided diagnosis within clinical practice, harnessing the power of multimodal data.
arXiv Detail & Related papers (2023-12-22T21:05:51Z)
- I$^2$MD: 3D Action Representation Learning with Inter- and Intra-modal Mutual Distillation [147.2183428328396]
We introduce a general Inter- and Intra-modal Mutual Distillation (I$^2$MD) framework.
In I$^2$MD, we first re-formulate the cross-modal interaction as a Cross-modal Mutual Distillation (CMD) process.
To alleviate the interference of similar samples and exploit their underlying contexts, we further design the Intra-modal Mutual Distillation (IMD) strategy.
arXiv Detail & Related papers (2023-10-24T07:22:17Z)
- Fusing Structural and Functional Connectivities using Disentangled VAE for Detecting MCI [9.916963496386089]
A novel hierarchical structural-functional connectivity fusing (HSCF) model is proposed to construct brain structural-functional connectivity matrices.
Results from a wide range of tests performed on the public Alzheimer's Disease Neuroimaging Initiative database show that the proposed model performs better than competing approaches.
arXiv Detail & Related papers (2023-06-16T05:22:25Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes [27.331511924585023]
This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data.
We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion.
We show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training.
arXiv Detail & Related papers (2022-09-07T01:37:19Z)
- MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis [84.7287684402508]
Current deep learning approaches for multimodal fusion rely on bottom-up fusion of high and mid-level latent modality representations.
Models of human perception highlight the importance of top-down fusion, where high-level representations affect the way sensory inputs are perceived.
We propose a neural architecture that captures top-down cross-modal interactions, using a feedback mechanism in the forward pass during network training.
arXiv Detail & Related papers (2022-01-24T17:48:04Z)
- Unsupervised deep learning techniques for powdery mildew recognition based on multispectral imaging [63.62764375279861]
This paper presents a deep learning approach to automatically recognize powdery mildew on cucumber leaves.
We focus on unsupervised deep learning techniques applied to multispectral imaging data.
We propose the use of autoencoder architectures to investigate two strategies for disease detection.
arXiv Detail & Related papers (2021-12-20T13:29:13Z)
- A Prior Guided Adversarial Representation Learning and Hypergraph Perceptual Network for Predicting Abnormal Connections of Alzheimer's Disease [29.30199956567813]
Alzheimer's disease is characterized by alterations of the brain's structural and functional connectivity.
PGARL-HPN is proposed to predict abnormal brain connections using triple-modality medical images.
arXiv Detail & Related papers (2021-10-12T03:10:37Z)
- Self-Supervised Multimodal Domino: in Search of Biomarkers for Alzheimer's Disease [19.86082635340699]
We propose a taxonomy of all reasonable ways to organize self-supervised representation-learning algorithms.
We first evaluate models on toy multimodal MNIST datasets and then apply them to a multimodal neuroimaging dataset with Alzheimer's disease patients.
Results show that the proposed approach outperforms previous self-supervised encoder-decoder methods.
arXiv Detail & Related papers (2020-12-25T20:28:13Z)
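The cross-modal mutual distillation idea named in the I$^2$MD entry above can be sketched roughly as follows. This is a hedged illustration, not that paper's implementation: the symmetric-KL formulation over within-batch similarity distributions, the function name, and the temperature are all assumptions.

```python
import torch
import torch.nn.functional as F

def cross_modal_mutual_distillation(z_a: torch.Tensor, z_b: torch.Tensor,
                                    tau: float = 0.5) -> torch.Tensor:
    """Symmetric KL between the within-batch similarity distributions of
    two modalities; each side's target is detached (teacher role)."""
    def sim_log_dist(z: torch.Tensor) -> torch.Tensor:
        z = F.normalize(z, dim=-1)
        # Each row is a softened neighborhood distribution over the batch.
        return F.log_softmax(z @ z.t() / tau, dim=-1)

    log_p_a, log_p_b = sim_log_dist(z_a), sim_log_dist(z_b)
    # F.kl_div(input=log_q, target=p) computes KL(p || q).
    loss_ab = F.kl_div(log_p_b, log_p_a.exp().detach(), reduction="batchmean")
    loss_ba = F.kl_div(log_p_a, log_p_b.exp().detach(), reduction="batchmean")
    return 0.5 * (loss_ab + loss_ba)
```

Under this reading, each modality network is trained so that its view of which batch samples are mutual neighbors matches the other modality's view, transferring relational structure across modalities.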
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.