G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification
- URL: http://arxiv.org/abs/2101.11656v1
- Date: Wed, 27 Jan 2021 19:28:04 GMT
- Title: G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification
- Authors: Sayan Ghosal, Qiang Chen, Giulio Pergola, Aaron L. Goldman, William
Ulrich, Karen F. Berman, Giuseppe Blasi, Leonardo Fazio, Antonio Rampino,
Alessandro Bertolino, Daniel R. Weinberger, Venkata S. Mattay, and Archana
Venkataraman
- Abstract summary: We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
- Score: 49.53651166356737
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a novel deep neural network architecture to integrate imaging and
genetics data, as guided by diagnosis, that provides interpretable biomarkers.
Our model consists of an encoder, a decoder and a classifier. The encoder
learns a non-linear subspace shared between the input data modalities. The
classifier and the decoder act as regularizers to ensure that the
low-dimensional encoding captures predictive differences between patients and
controls. We use a learnable dropout layer to extract interpretable biomarkers
from the data, and our unique training strategy can easily accommodate missing
data modalities across subjects. We have evaluated our model on a population
study of schizophrenia that includes two functional MRI (fMRI) paradigms and
Single Nucleotide Polymorphism (SNP) data. Using 10-fold cross validation, we
demonstrate that our model achieves better classification accuracy than
baseline methods, and that this performance generalizes to a second dataset
collected at a different site. In an exploratory analysis we further show that
the biomarkers identified by our model are closely associated with the
well-documented deficits in schizophrenia.
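For intuition, here is a minimal sketch of the architecture described above: per-modality encoders feeding a shared low-dimensional code, a decoder and a classifier acting as regularizers, and a learnable dropout layer on each input for biomarker extraction. This is not the authors' implementation; the layer sizes, the concrete-relaxation form of the learnable dropout, and the simple averaging of whichever modalities are present (which lets subjects with missing modalities still contribute) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of an encoder-decoder-classifier with a
# learnable input-dropout layer, loosely following the abstract's description.
# Dimensions, loss weights, and the concrete-relaxation dropout are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableDropout(nn.Module):
    """Per-feature dropout with learnable keep probabilities (concrete relaxation).

    Features whose learned keep probability stays high can be read out as
    candidate biomarkers after training.
    """

    def __init__(self, n_features, temperature=0.1):
        super().__init__()
        self.logit_p = nn.Parameter(torch.zeros(n_features))  # keep-probability logits
        self.temperature = temperature

    def forward(self, x):
        if self.training:
            u = torch.rand_like(x).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)            # logistic noise
            gate = torch.sigmoid((self.logit_p + noise) / self.temperature)
        else:
            gate = torch.sigmoid(self.logit_p)                 # deterministic at test time
        return x * gate

    def keep_probabilities(self):
        return torch.sigmoid(self.logit_p).detach()


class GMindSketch(nn.Module):
    """Encoders learn a shared latent space; decoder and classifier regularize it."""

    def __init__(self, dims, latent_dim=32):
        super().__init__()
        self.dropouts = nn.ModuleDict({m: LearnableDropout(d) for m, d in dims.items()})
        self.encoders = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, latent_dim))
             for m, d in dims.items()})
        self.decoders = nn.ModuleDict(
            {m: nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, d))
             for m, d in dims.items()})
        self.classifier = nn.Linear(latent_dim, 2)             # patient vs. control

    def forward(self, inputs):
        # `inputs` maps modality name -> tensor; missing modalities are simply absent,
        # so each subject contributes gradients only through the data it has.
        latents = [self.encoders[m](self.dropouts[m](x)) for m, x in inputs.items()]
        z = torch.stack(latents).mean(dim=0)                   # shared low-dimensional code
        recons = {m: self.decoders[m](z) for m in inputs}
        return self.classifier(z), recons


def loss_fn(logits, recons, inputs, labels, recon_weight=0.1):
    loss = F.cross_entropy(logits, labels)
    for m, x in inputs.items():
        loss = loss + recon_weight * F.mse_loss(recons[m], x)
    return loss


# Toy usage with two fMRI paradigms and SNP data (feature sizes are made up).
model = GMindSketch({"fmri_a": 100, "fmri_b": 100, "snp": 1000})
batch = {"fmri_a": torch.randn(8, 100), "snp": torch.randn(8, 1000)}  # fmri_b missing
logits, recons = model(batch)
loss = loss_fn(logits, recons, batch, torch.randint(0, 2, (8,)))
loss.backward()
```

After training, the per-feature keep probabilities in each LearnableDropout layer can be ranked to read out candidate imaging and genetic biomarkers, in the spirit of the interpretability goal stated in the abstract.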
Related papers
- UNICORN: A Deep Learning Model for Integrating Multi-Stain Data in Histopathology [2.9389205138207277]
UNICORN is a multi-modal transformer capable of processing multi-stain histopathology images for atherosclerosis severity class prediction.
The architecture comprises a two-stage, end-to-end trainable model with specialized modules utilizing transformer self-attention blocks.
UNICORN achieved a classification accuracy of 0.67, outperforming other state-of-the-art models.
arXiv Detail & Related papers (2024-09-26T12:13:52Z)
- Graph-Ensemble Learning Model for Multi-label Skin Lesion Classification using Dermoscopy and Clinical Images [7.159532626507458]
This study introduces a Graph Convolution Network (GCN) that encodes the prior co-occurrence between categories as a correlation matrix within the deep learning model for multi-label classification.
We propose a Graph-Ensemble Learning Model (GELN) that treats the predictions from the GCN as complementary information to the predictions from the fusion model.
arXiv Detail & Related papers (2023-07-04T13:19:57Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE to brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- ICAM-reg: Interpretable Classification and Regression with Feature Attribution for Mapping Neurological Phenotypes in Individual Scans [3.589107822343127]
We take advantage of recent developments in generative deep learning to develop a method for simultaneous classification, or regression, and feature attribution (FA).
We validate our method on the task of Mini-Mental State Examination (MMSE) cognitive test score prediction for the Alzheimer's Disease Neuroimaging Initiative cohort.
We show that the generated FA maps can be used to explain outlier predictions and demonstrate that the inclusion of a regression module improves the disentanglement of the latent space.
arXiv Detail & Related papers (2021-03-03T17:55:14Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Ensemble manifold based regularized multi-modal graph convolutional network for cognitive ability prediction [33.03449099154264]
Multi-modal functional magnetic resonance imaging (fMRI) can be used to make predictions about individual behavioral and cognitive traits based on brain connectivity networks.
We propose an interpretable multi-modal graph convolutional network (MGCN) model, incorporating the fMRI time series and the functional connectivity (FC) between each pair of brain regions.
We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort to predict individual Wide Range Achievement Test (WRAT) scores.
arXiv Detail & Related papers (2021-01-20T20:53:07Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, i.e., identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, which is a simple yet effective meta-learning method for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We propose a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.