LSOR: Longitudinally-Consistent Self-Organized Representation Learning
- URL: http://arxiv.org/abs/2310.00213v1
- Date: Sat, 30 Sep 2023 01:31:24 GMT
- Title: LSOR: Longitudinally-Consistent Self-Organized Representation Learning
- Authors: Jiahong Ouyang, Qingyu Zhao, Ehsan Adeli, Wei Peng, Greg Zaharchuk,
Kilian M. Pohl
- Abstract summary: Interpretability is a key issue when applying deep learning models to longitudinal brain MRIs.
One way to address this issue is by visualizing the high-dimensional latent spaces generated by deep learning via self-organizing maps (SOM).
We propose the first self-supervised SOM approach that derives a high-dimensional, interpretable representation stratified by brain age.
- Score: 14.10874160164196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpretability is a key issue when applying deep learning models to
longitudinal brain MRIs. One way to address this issue is by visualizing the
high-dimensional latent spaces generated by deep learning via self-organizing
maps (SOM). SOM separates the latent space into clusters and then maps the
cluster centers to a discrete (typically 2D) grid preserving the
high-dimensional relationship between clusters. However, learning SOM in a
high-dimensional latent space tends to be unstable, especially in a
self-supervision setting. Furthermore, the learned SOM grid does not
necessarily capture clinically interesting information, such as brain age. To
resolve these issues, we propose the first self-supervised SOM approach that
derives a high-dimensional, interpretable representation stratified by brain
age solely based on longitudinal brain MRIs (i.e., without demographic or
cognitive information). Called Longitudinally-consistent Self-Organized
Representation learning (LSOR), the method is stable during training as it
relies on soft clustering (vs. the hard cluster assignments used by existing
SOM). Furthermore, our approach generates a latent space stratified according
to brain age by aligning trajectories inferred from longitudinal MRIs to the
reference vector associated with the corresponding SOM cluster. When applied to
longitudinal MRIs of the Alzheimer's Disease Neuroimaging Initiative (ADNI,
N=632), LSOR generates an interpretable latent space and achieves comparable or
higher accuracy than the state-of-the-art representations with respect to the
downstream tasks of classification (static vs. progressive mild cognitive
impairment) and regression (determining the ADAS-Cog score of all subjects). The
code is available at
https://github.com/ouyangjiahong/longitudinal-som-single-modality.
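The repository linked above contains the full implementation. As a rough illustration of the two ideas highlighted in the abstract, soft cluster assignment and trajectory alignment, here is a minimal PyTorch sketch; the tensor names, the temperature, and the per-node reference vectors are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_som_assignment(z, som_nodes, temperature=1.0):
    """Soft assignment of latent embeddings to SOM grid nodes.

    z:         (B, D) embeddings of B MRIs produced by an encoder
    som_nodes: (K, D) learnable SOM node (cluster center) vectors
    Returns (B, K) assignment weights; a classical SOM would instead
    take a hard argmin over the distances.
    """
    d2 = torch.cdist(z, som_nodes) ** 2          # squared distances (B, K)
    return F.softmax(-d2 / temperature, dim=1)   # soft instead of hard assignment

def trajectory_alignment_loss(z_t1, z_t2, ref_vectors, weights):
    """Align each subject's longitudinal trajectory (z_t2 - z_t1) with a
    reference direction attached to its SOM cluster(s).

    ref_vectors: (K, D) hypothetical per-node reference directions
    weights:     (B, K) soft assignments from soft_som_assignment
    """
    trajectory = z_t2 - z_t1                      # (B, D) per-subject change
    reference = weights @ ref_vectors             # (B, D) soft-weighted reference
    return (1.0 - F.cosine_similarity(trajectory, reference, dim=1)).mean()
```

Per the abstract, the soft assignments are what keep SOM training stable in the self-supervised setting, while the trajectory-alignment term is what stratifies the grid by brain age; the exact losses and hyperparameters are in the linked repository.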
Related papers
- Generative forecasting of brain activity enhances Alzheimer's classification and interpretation [16.09844316281377]
Resting-state functional magnetic resonance imaging (rs-fMRI) offers a non-invasive method to monitor neural activity.
Deep learning has shown promise in capturing these representations.
In this study, we focus on time series forecasting of independent component networks derived from rs-fMRI as a form of data augmentation.
arXiv Detail & Related papers (2024-10-30T23:51:31Z)
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
Self-organizing map (SOM) is a neural model often used in clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show an almost two-fold increase in accuracy.
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
- Spatial-Temporal DAG Convolutional Networks for End-to-End Joint Effective Connectivity Learning and Resting-State fMRI Classification [42.82118108887965]
Building comprehensive brain connectomes has proved to be of fundamental importance in resting-state fMRI (rs-fMRI) analysis.
We model the brain network as a directed acyclic graph (DAG) to discover direct causal connections between brain regions.
We propose Spatial-Temporal DAG Convolutional Network (ST-DAGCN) to jointly infer effective connectivity and classify rs-fMRI time series.
arXiv Detail & Related papers (2023-12-16T04:31:51Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Superficial White Matter Analysis: An Efficient Point-cloud-based Deep Learning Framework with Supervised Contrastive Learning for Consistent Tractography Parcellation across Populations and dMRI Acquisitions [68.41088365582831]
White matter parcellation classifies tractography streamlines into clusters or anatomically meaningful tracts.
Most parcellation methods focus on the deep white matter (DWM), whereas fewer methods address the superficial white matter (SWM) due to its complexity.
We propose a novel two-stage deep-learning-based framework, Superficial White Matter Analysis (SupWMA), that performs an efficient parcellation of 198 SWM clusters from whole-brain tractography.
arXiv Detail & Related papers (2022-07-18T23:07:53Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Dendritic Self-Organizing Maps for Continual Learning [0.0]
We propose a novel algorithm inspired by biological neurons, termed the Dendritic Self-Organizing Map (DendSOM).
DendSOM consists of a single layer of SOMs, which extract patterns from specific regions of the input space.
It outperforms classical SOMs and several state-of-the-art continual learning algorithms on benchmark datasets.
arXiv Detail & Related papers (2021-10-18T14:47:19Z)
- Self-Supervised Longitudinal Neighbourhood Embedding [13.633165258766418]
We propose a self-supervised strategy for representation learning named Longitudinal Neighborhood Embedding (LNE).
Motivated by concepts in contrastive learning, LNE explicitly models the similarity between trajectory vectors across different subjects.
We apply LNE to longitudinal T1w MRIs of two neuroimaging studies: a dataset composed of 274 healthy subjects and the Alzheimer's Disease Neuroimaging Initiative (ADNI).
arXiv Detail & Related papers (2021-03-05T17:55:53Z)
- Statistical control for spatio-temporal MEG/EEG source imaging with desparsified multi-task Lasso [102.84915019938413]
Non-invasive techniques such as magnetoencephalography (MEG) and electroencephalography (EEG) make it possible to monitor brain activity without surgery.
The problem of source localization, or source imaging, however poses a high-dimensional statistical inference challenge.
We propose an ensemble of desparsified multi-task Lasso (ecd-MTLasso) to deal with this problem; a sketch of the underlying multi-task Lasso model is given below.
arXiv Detail & Related papers (2020-09-29T21:17:16Z)
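To ground the terminology in that last entry, the plain (non-desparsified) multi-task Lasso at the core of the approach can be sketched with scikit-learn; the leadfield `G`, the problem sizes, and `alpha` below are made-up illustration values, and the ecd-MTLasso ensemble and its statistical control are not shown.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Hypothetical sizes: sensors (MEG/EEG channels), time points, candidate sources.
n_sensors, n_times, n_sources = 64, 50, 300
rng = np.random.default_rng(0)

G = rng.standard_normal((n_sensors, n_sources))         # forward (leadfield) matrix
X_true = np.zeros((n_sources, n_times))                 # source time courses
active = rng.choice(n_sources, size=5, replace=False)   # a few active sources
X_true[active] = rng.standard_normal((5, n_times))
M = G @ X_true + 0.01 * rng.standard_normal((n_sensors, n_times))  # sensor data

# One regression task per time point; the l2,1 penalty enforces a shared
# row-sparsity pattern, i.e. a source is either active across time or silent.
mtl = MultiTaskLasso(alpha=0.05, max_iter=10_000)
mtl.fit(G, M)
X_hat = mtl.coef_.T    # (n_sources, n_times) estimated source activity
print("estimated active sources:", np.flatnonzero(np.abs(X_hat).sum(axis=1) > 1e-8))
```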