Whole MILC: generalizing learned dynamics across tasks, datasets, and
populations
- URL: http://arxiv.org/abs/2007.16041v2
- Date: Fri, 18 Jun 2021 20:12:38 GMT
- Title: Whole MILC: generalizing learned dynamics across tasks, datasets, and
populations
- Authors: Usman Mahmood, Md Mahfuzur Rahman, Alex Fedorov, Noah Lewis, Zening
Fu, Vince D. Calhoun, Sergey M. Plis
- Abstract summary: The spatio-temporal structure of disorder-specific dynamics is crucial for early diagnosis and understanding the disorder mechanism.
In this paper we present a novel self-supervised training schema which reinforces whole sequence mutual information local to context.
We test our model on three different disorders (i) Schizophrenia (ii) Autism and (iii) Alzheimer's and four different studies.
- Score: 14.99255412075299
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Behavioral changes are the earliest signs of a mental disorder, but arguably,
the dynamics of brain function get affected even earlier. Consequently, the
spatio-temporal structure of disorder-specific dynamics is crucial for early
diagnosis and for understanding the disorder mechanism. A common way of learning
discriminatory features relies on training a classifier and evaluating feature
importance. Classical classifiers based on handcrafted features are quite
powerful but suffer from the curse of dimensionality when applied to the large
input dimensions of spatio-temporal data. Deep learning algorithms can handle
this problem, and model introspection can highlight discriminatory
spatio-temporal regions, but they need far more samples to train. In this paper
we present a novel self-supervised training schema which reinforces whole sequence
mutual information local to context (whole MILC). We pre-train the whole MILC
model on unlabeled and unrelated healthy control data. We test our model on
three different disorders: (i) Schizophrenia, (ii) Autism, and (iii) Alzheimer's,
and four different studies. Our algorithm outperforms existing self-supervised
pre-training methods and provides competitive classification results to
classical machine learning algorithms. Importantly, whole MILC enables
attribution of subject diagnosis to specific spatio-temporal regions in the
fMRI signal.
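At its core, the whole MILC objective maximizes mutual information between a whole-sequence embedding and window-level (local) embeddings via a contrastive estimator. The following is a minimal InfoNCE-style sketch of such a loss, assuming each local embedding is paired with its own sequence's global embedding while the other sequences in the batch serve as negatives; all names and shapes are illustrative, not the authors' code:

```python
import numpy as np

def infonce_whole_milc(local_z, global_z, temperature=0.1):
    """Contrastive InfoNCE-style loss: row i of local_z (a window embedding)
    is the positive for row i of global_z (its sequence's whole-sequence
    embedding); all other rows in the batch act as negatives.
    Shapes: local_z and global_z are both (batch, dim)."""
    ln = local_z / np.linalg.norm(local_z, axis=1, keepdims=True)
    gn = global_z / np.linalg.norm(global_z, axis=1, keepdims=True)
    logits = ln @ gn.T / temperature             # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on the diagonal
```

Minimizing this loss pulls each window embedding toward its own sequence's global representation; such contrastive objectives are one standard way to estimate a lower bound on mutual information.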
Related papers
- Generative forecasting of brain activity enhances Alzheimer's classification and interpretation [16.09844316281377]
Resting-state functional magnetic resonance imaging (rs-fMRI) offers a non-invasive method to monitor neural activity.
Deep learning has shown promise in capturing these representations.
In this study, we focus on time series forecasting of independent component networks derived from rs-fMRI as a form of data augmentation.
arXiv Detail & Related papers (2024-10-30T23:51:31Z)
- Machine Learning on Dynamic Functional Connectivity: Promise, Pitfalls, and Interpretations [7.013079422694949]
We seek to establish a well-founded empirical guideline for designing deep models for functional neuroimages.
We put the spotlight on (1) what the current state-of-the-art (SOTA) performance is in cognitive task recognition and disease diagnosis using fMRI.
We have conducted a comprehensive evaluation and statistical analysis, in various settings, to answer the above outstanding questions.
arXiv Detail & Related papers (2024-09-17T17:24:17Z)
- Toward Robust Early Detection of Alzheimer's Disease via an Integrated Multimodal Learning Approach [5.9091823080038814]
Alzheimer's Disease (AD) is a complex neurodegenerative disorder marked by memory loss, executive dysfunction, and personality changes.
This study introduces an advanced multimodal classification model that integrates clinical, cognitive, neuroimaging, and EEG data.
arXiv Detail & Related papers (2024-08-29T08:26:00Z)
- UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for the universal brain MRI diagnosis, termed as UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z)
- Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes [27.331511924585023]
This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data.
We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion.
We show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training.
arXiv Detail & Related papers (2022-09-07T01:37:19Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- MIRACLE: Causally-Aware Imputation via Learning Missing Data Mechanisms [82.90843777097606]
We propose a causally-aware imputation algorithm (MIRACLE) for missing data.
MIRACLE iteratively refines the imputation of a baseline by simultaneously modeling the missingness generating mechanism.
We conduct extensive experiments on synthetic and a variety of publicly available datasets to show that MIRACLE is able to consistently improve imputation.
arXiv Detail & Related papers (2021-11-04T22:38:18Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- 4D Spatio-Temporal Deep Learning with 4D fMRI Data for Autism Spectrum Disorder Classification [69.62333053044712]
We propose a 4D convolutional deep learning approach for ASD classification where we jointly learn from spatial and temporal data.
We employ 4D neural networks and convolutional-recurrent models which outperform a previous approach with an F1-score of 0.71 compared to an F1-score of 0.65.
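A convolutional-recurrent model of this kind (a 3D CNN encoding each fMRI volume, with a recurrent unit aggregating over time) can be sketched roughly as below; the layer sizes, the GRU choice, and all names are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ConvGRUClassifier(nn.Module):
    """Illustrative convolutional-recurrent model for 4D fMRI:
    a small 3D CNN encodes each volume, a GRU aggregates over time."""
    def __init__(self, hidden=32, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> (batch*time, 16)
        )
        self.gru = nn.GRU(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, time, X, Y, Z)
        b, t = x.shape[:2]
        frames = x.reshape(b * t, 1, *x.shape[2:])   # fold time into batch
        feats = self.encoder(frames).reshape(b, t, -1)
        _, h = self.gru(feats)           # h: (1, batch, hidden)
        return self.head(h[-1])          # per-subject class logits
```

Folding the time axis into the batch lets one 3D encoder process every frame, after which the GRU captures the temporal dynamics the spatial convolutions cannot.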
arXiv Detail & Related papers (2020-04-21T17:19:06Z)
- Ensemble Deep Learning on Large, Mixed-Site fMRI Datasets in Autism and Other Tasks [0.1160208922584163]
We train a convolutional neural network (CNN) with the largest multi-source, functional MRI (fMRI) connectomic dataset ever compiled.
Our study finds that deep learning models that distinguish ASD from TD controls focus broadly on temporal and cerebellar connections.
arXiv Detail & Related papers (2020-02-14T17:28:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.