Multimodal and multicontrast image fusion via deep generative models
- URL: http://arxiv.org/abs/2303.15963v2
- Date: Tue, 27 Feb 2024 12:42:15 GMT
- Title: Multimodal and multicontrast image fusion via deep generative models
- Authors: Giovanna Maria Dimitri, Simeon Spasov, Andrea Duggento, Luca Passamonti, Pietro Liò, Nicola Toschi
- Abstract summary: We propose a deep learning architecture based on generative models rooted in a modular approach and separable convolutional blocks to fuse multiple 3D neuroimaging modalities on a voxel-wise level.
This may be of aid in predicting disease evolution as well as drug response, hence supporting mechanistic understanding and empowering clinical trials.
- Score: 3.431015735214097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, it has become progressively more evident that classic diagnostic
labels are unable to reliably describe the complexity and variability of
several clinical phenotypes. This is particularly true for a broad range of
neuropsychiatric illnesses (e.g., depression, anxiety disorders, behavioral
phenotypes). Patient heterogeneity can be better described by grouping
individuals into novel categories based on empirically derived sections of
intersecting continua that span across and beyond traditional categorical
borders. In this context, neuroimaging data carry a wealth of spatiotemporally
resolved information about each patient's brain. However, they are usually
heavily collapsed a priori through procedures which are not learned as part of
model training, and consequently not optimized for the downstream prediction
task. This is because every individual participant usually comes with multiple
whole-brain 3D imaging modalities often accompanied by a deep genotypic and
phenotypic characterization, hence posing formidable computational challenges.
In this paper we design a deep learning architecture based on generative models
rooted in a modular approach and separable convolutional blocks to a) fuse
multiple 3D neuroimaging modalities on a voxel-wise level, b) convert them into
informative latent embeddings through heavy dimensionality reduction, c)
maintain good generalizability and minimal information loss. As proof of
concept, we test our architecture on the well characterized Human Connectome
Project database demonstrating that our latent embeddings can be clustered into
easily separable subject strata that, in turn, map to phenotypic
information that was not included in the embedding creation process. This may
be of aid in predicting disease evolution as well as drug response, hence
supporting mechanistic disease understanding and empowering clinical trials.
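As a back-of-the-envelope illustration of why separable convolutional blocks matter at whole-brain 3D scale, the parameter count of a dense 3D convolution can be compared with its depthwise-separable factorization. The channel sizes and kernel width below are illustrative placeholders, not values taken from the paper:

```python
def standard_conv3d_params(c_in, c_out, k):
    # A dense 3D convolution couples every (input, output) channel pair
    # with a full k*k*k kernel.
    return c_in * c_out * k ** 3

def separable_conv3d_params(c_in, c_out, k):
    # Depthwise stage: one k*k*k filter per input channel.
    # Pointwise stage: a 1*1*1 convolution that mixes channels.
    return c_in * k ** 3 + c_in * c_out

dense = standard_conv3d_params(32, 64, 3)       # 32*64*27 = 55296
separable = separable_conv3d_params(32, 64, 3)  # 32*27 + 32*64 = 2912
print(dense, separable, round(dense / separable, 1))  # 55296 2912 19.0
```

For these (hypothetical) layer sizes the factorization cuts the parameter count by roughly a factor of 19, which is what makes voxel-wise fusion of multiple whole-brain volumes computationally feasible.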
Related papers
- Deep Latent Variable Modeling of Physiological Signals [0.8702432681310401]
We explore high-dimensional problems related to physiological monitoring using latent variable models.
First, we present a novel deep state-space model to generate electrical waveforms of the heart using optically obtained signals as inputs.
Second, we present a brain signal modeling scheme that combines the strengths of probabilistic graphical models and deep adversarial learning.
Third, we propose a framework for the joint modeling of physiological measures and behavior.
arXiv Detail & Related papers (2024-05-29T17:07:33Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with a single model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging genetic data from Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Improving Deep Facial Phenotyping for Ultra-rare Disorder Verification Using Model Ensembles [52.77024349608834]
We analyze the influence of replacing a DCNN with a state-of-the-art face recognition approach, iResNet with ArcFace.
Our proposed ensemble model achieves state-of-the-art performance on both seen and unseen disorders.
arXiv Detail & Related papers (2022-11-12T23:28:54Z)
- Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes [27.331511924585023]
This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data.
We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion.
We show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training.
arXiv Detail & Related papers (2022-09-07T01:37:19Z)
- Deep Structural Causal Shape Models [21.591869329812283]
Causal reasoning provides a language to ask important interventional and counterfactual questions.
In medical imaging, we may want to study the causal effect of genetic, environmental, or lifestyle factors.
There is a lack of computational tooling to enable causal reasoning about morphological variations.
arXiv Detail & Related papers (2022-08-23T13:18:20Z)
- Multimodal Representations Learning and Adversarial Hypergraph Fusion for Early Alzheimer's Disease Prediction [30.99183477161096]
We propose a novel representation learning and adversarial hypergraph fusion framework for Alzheimer's disease diagnosis.
Our model achieves superior performance on Alzheimer's disease detection compared with other related models.
arXiv Detail & Related papers (2021-07-21T08:08:05Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
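The shared-response model described above (each subject's data as a subject-specific linear mixture of shared independent sources plus noise) can be sketched in a few lines. The dimensions and the use of Gaussian noise below are arbitrary placeholders for illustration, not values from the paper:

```python
import random

random.seed(0)

# Hypothetical dimensions: k shared sources, p sensors per subject, m subjects.
k, p, m = 3, 4, 2

def matvec(A, v):
    """Multiply a p-by-k matrix (list of rows) with a length-k vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Shared independent sources, common to every subject.
s = [random.gauss(0, 1) for _ in range(k)]

# Each subject i observes x_i = A_i s + n_i: a subject-specific linear
# mixture A_i of the shared sources s, plus subject-specific noise n_i.
subjects = []
for i in range(m):
    A_i = [[random.gauss(0, 1) for _ in range(k)] for _ in range(p)]
    n_i = [random.gauss(0, 0.1) for _ in range(p)]
    x_i = [a + b for a, b in zip(matvec(A_i, s), n_i)]
    subjects.append(x_i)

print(len(subjects), len(subjects[0]))  # m observation vectors of length p
```

The estimation task is then the inverse problem: recover the shared sources s (and the per-subject mixings A_i) from the observations alone.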
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
- Towards a predictive spatio-temporal representation of brain data [0.2580765958706854]
We show that fMRI datasets consist of complex and highly heterogeneous time series.
We compare various modelling techniques from deep learning and geometric deep learning to pave the way for future research.
We hope that our methodological advances can ultimately be clinically and computationally relevant by leading to a more nuanced understanding of the brain dynamics in health and disease.
arXiv Detail & Related papers (2020-02-29T18:49:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.