Multi-View Brain HyperConnectome AutoEncoder For Brain State
Classification
- URL: http://arxiv.org/abs/2009.11553v1
- Date: Thu, 24 Sep 2020 08:51:44 GMT
- Title: Multi-View Brain HyperConnectome AutoEncoder For Brain State
Classification
- Authors: Alin Banka, Inis Buzi and Islem Rekik
- Abstract summary: We propose a new strategy to build a hyperconnectome for each brain view based on the nearest-neighbour algorithm.
We also design a hyperconnectome autoencoder framework which operates directly on the multi-view hyperconnectomes.
Our experiments showed that the embeddings learned by HCAE yield better results for brain state classification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph embedding is a powerful method to represent graph neurological data
(e.g., brain connectomes) in a low dimensional space for brain connectivity
mapping, prediction and classification. However, existing embedding algorithms
have two major limitations. First, they primarily focus on preserving
one-to-one topological relationships between nodes (i.e., regions of interest
(ROIs) in a connectome), but they have mostly ignored many-to-many
relationships (i.e., set to set), which can be captured using a hyperconnectome
structure. Second, existing graph embedding techniques cannot be easily adapted
to multi-view graph data with heterogeneous distributions. In this paper, while
cross-pollinating adversarial deep learning with hypergraph theory, we aim to
jointly learn deep latent embeddings of subject-specific multi-view brain
graphs to eventually disentangle different brain states. First, we propose a
new simple strategy to build a hyperconnectome for each brain view based on
the nearest-neighbour algorithm to preserve the connectivities across pairs of
ROIs. Second, we design a hyperconnectome autoencoder (HCAE) framework which
operates directly on the multi-view hyperconnectomes based on hypergraph
convolutional layers to better capture the many-to-many relationships between
brain regions (i.e., nodes). For each subject, we further regularize the
hypergraph autoencoding by adversarial regularization to align the distribution
of the learned hyperconnectome embeddings with that of the input
hyperconnectomes. We formalize our hyperconnectome embedding within a geometric
deep learning framework to optimize for a given subject, thereby designing an
individual-based learning framework. Our experiments showed that the
embeddings learned by HCAE yield better results for brain state
classification than other deep graph embedding methods.
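As a rough illustration of the pipeline described in the abstract, the Python sketch below (PyTorch/NumPy, not the authors' released code) builds a k-nearest-neighbour hyperconnectome for one brain view as an incidence matrix with one hyperedge per ROI, then passes node features through a hypergraph convolutional layer of the common HGNN form X' = Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta with identity hyperedge weights W. The names build_hyperconnectome and HypergraphConv, the choice k = 3, and the reduction of the adversarial regularizer to a single MLP discriminator are assumptions made for brevity.

# Illustrative sketch only -- not the authors' released implementation.
import numpy as np
import torch
import torch.nn as nn

def build_hyperconnectome(conn, k=3):
    """Build a hyperconnectome incidence matrix H (ROIs x hyperedges) from one
    connectivity view `conn` (n x n). One hyperedge per ROI: the ROI itself plus
    its k nearest neighbours (here, its k most strongly connected ROIs)."""
    n = conn.shape[0]
    H = np.zeros((n, n))
    for i in range(n):
        weights = conn[i].copy()
        weights[i] = -np.inf                    # exclude self when ranking neighbours
        nbrs = np.argsort(weights)[-k:]         # k strongest connections of ROI i
        H[i, i] = 1.0                           # centroid ROI belongs to its own hyperedge
        H[nbrs, i] = 1.0                        # neighbours join hyperedge i
    return H

class HypergraphConv(nn.Module):
    """One hypergraph convolution layer (HGNN-style propagation) with identity
    hyperedge weights: X' = relu(Dv^-1/2 H De^-1 H^T Dv^-1/2 X Theta)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X, H):
        Dv = torch.diag(H.sum(dim=1).clamp(min=1.0).pow(-0.5))  # node degrees
        De = torch.diag(H.sum(dim=0).clamp(min=1.0).pow(-1.0))  # hyperedge degrees
        prop = Dv @ H @ De @ H.t() @ Dv                         # propagation operator
        return torch.relu(prop @ self.theta(X))

# Toy usage: one brain view with 35 ROIs, node features = connectivity rows.
conn = np.abs(np.random.randn(35, 35)); conn = (conn + conn.T) / 2
H = torch.tensor(build_hyperconnectome(conn, k=3), dtype=torch.float32)
X = torch.tensor(conn, dtype=torch.float32)
layer = HypergraphConv(35, 16)
Z = layer(X, H)                                 # low-dimensional ROI embeddings
discriminator = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))
# Adversarial regularization (sketch): train `discriminator` to separate the rows
# of Z from samples of the input hyperconnectome distribution, and train the
# encoder to fool it, as in adversarially regularized autoencoders.

In the full HCAE setting described above, one such hyperconnectome and encoder would be instantiated per view and optimized per subject, with the discriminator aligning the embedding distribution with that of the input hyperconnectomes.
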
Related papers
- Strongly Topology-preserving GNNs for Brain Graph Super-resolution [5.563171090433323]
Brain graph super-resolution (SR) is an under-explored yet highly relevant task in network neuroscience.
Current SR methods leverage graph neural networks (GNNs) thanks to their ability to handle graph-structured datasets.
We develop an efficient mapping from the edge space of our low-resolution (LR) brain graphs to the node space of a high-resolution (HR) dual graph.
arXiv Detail & Related papers (2024-11-01T03:29:04Z)
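As a hedged aside on the edge-space-to-node-space mapping mentioned in the entry above: in a dual (line) graph, every edge of the original graph becomes a node. The NumPy sketch below illustrates only that generic construction; the function name line_graph and the toy 4-ROI graph are illustrative, not taken from the paper.

# Illustrative sketch of a dual (line) graph; not the paper's actual mapping.
import numpy as np

def line_graph(adj):
    """Given a symmetric adjacency matrix `adj` (n x n), return the adjacency of
    its dual (line) graph: one dual node per undirected edge, with two dual nodes
    connected when the corresponding edges share an endpoint."""
    n = adj.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i, j] != 0]
    m = len(edges)
    dual = np.zeros((m, m))
    for a in range(m):
        for b in range(a + 1, m):
            if set(edges[a]) & set(edges[b]):   # edges share an ROI
                dual[a, b] = dual[b, a] = 1.0
    return dual, edges

# Toy usage: a 4-ROI low-resolution graph -> dual graph over its 4 edges.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], float)
D, E = line_graph(A)
print(E)        # [(0, 1), (0, 2), (1, 3), (2, 3)]
print(D.shape)  # (4, 4): edge features of A can now live on the nodes of D
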
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- Inter-Domain Alignment for Predicting High-Resolution Brain Networks Using Teacher-Student Learning [0.0]
We propose Learn to SuperResolve Brain Graphs with Knowledge Distillation Network (L2S-KDnet) to superresolve brain graphs.
Our teacher network is a graph encoder-decoder that first learns the LR brain graph embeddings and then learns how to align the resulting latent representations to the HR ground-truth data distribution.
Next, our student network learns the knowledge of the aligned brain graphs as well as the topological structure of the predicted HR graphs transferred from the teacher.
arXiv Detail & Related papers (2021-10-06T09:31:44Z)
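The teacher-student scheme in the entry above can be pictured with a generic knowledge-distillation loop like the hedged PyTorch sketch below: a frozen teacher encoder provides target embeddings, and the student is trained to match them while also predicting the HR graph. All module names, dimensions, and the plain MSE losses are stand-ins, not L2S-KDnet's actual components.

# Generic teacher-student (knowledge distillation) sketch; not L2S-KDnet itself.
import torch
import torch.nn as nn

teacher_enc = nn.Linear(35, 16)   # stands in for the trained teacher graph encoder
student_enc = nn.Linear(35, 16)   # student encoder to be trained
student_dec = nn.Linear(16, 160)  # student decoder predicting the HR graph (toy size)

opt = torch.optim.Adam(list(student_enc.parameters()) + list(student_dec.parameters()), lr=1e-3)
lr_graph = torch.randn(8, 35)     # batch of flattened LR brain-graph features (toy)
hr_graph = torch.randn(8, 160)    # corresponding HR targets (toy)

for step in range(100):
    with torch.no_grad():
        z_teacher = teacher_enc(lr_graph)        # teacher embeddings, kept frozen
    z_student = student_enc(lr_graph)
    pred_hr = student_dec(z_student)
    # Distillation term: match the teacher's latent space; task term: predict HR graph.
    loss = nn.functional.mse_loss(z_student, z_teacher) + nn.functional.mse_loss(pred_hr, hr_graph)
    opt.zero_grad(); loss.backward(); opt.step()
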
- Brain Multigraph Prediction using Topology-Aware Adversarial Graph Neural Network [1.6114012813668934]
We introduce topoGAN architecture, which jointly predicts multiple brain graphs from a single brain graph.
Our three key innovations are: (i) designing a novel graph adversarial auto-encoder for predicting multiple brain graphs from a single one, (ii) clustering the encoded source graphs to handle the mode collapse issue of GANs, and (iii) introducing a topological loss to force the prediction of topologically sound target brain graphs.
arXiv Detail & Related papers (2021-05-06T10:20:45Z)
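The topological loss in the topoGAN entry above can be instantiated in several ways; one simple, hedged example is to penalize differences in node strength (weighted degree) between predicted and target brain graphs, as sketched below. The exact centrality measure used by the paper may differ; this is an illustration only.

# One possible topological loss (illustration only): compare node strengths
# (weighted degrees) of the predicted and target brain graphs.
import torch

def topological_loss(pred_adj, target_adj):
    """L1 distance between node-strength profiles of two weighted adjacency matrices."""
    pred_strength = pred_adj.sum(dim=-1)
    target_strength = target_adj.sum(dim=-1)
    return torch.mean(torch.abs(pred_strength - target_strength))

# Toy usage: total loss = adversarial/reconstruction terms + lambda * topological term.
pred = torch.rand(35, 35); pred = (pred + pred.t()) / 2
target = torch.rand(35, 35); target = (target + target.t()) / 2
print(topological_loss(pred, target))
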
- Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural Networks [68.9026534589483]
RioGNN is a novel Reinforced, recursive and flexible neighborhood selection guided multi-relational Graph Neural Network architecture.
RioGNN can learn more discriminative node embeddings with enhanced explainability, owing to its recognition of the individual importance of each relation.
arXiv Detail & Related papers (2021-04-16T04:30:06Z)
- Deep Graph Normalizer: A Geometric Deep Learning Approach for Estimating Connectional Brain Templates [0.0]
A connectional brain template (CBT) is a normalized graph-based representation of a population of brain networks.
Deep Graph Normalizer (DGN) is the first geometric deep learning architecture for normalizing a population of multi-view brain networks (MVBNs).
DGN learns how to fuse multi-view brain networks while capturing non-linear patterns across subjects.
arXiv Detail & Related papers (2020-12-28T08:01:49Z)
- Distance-aware Molecule Graph Attention Network for Drug-Target Binding Affinity Prediction [54.93890176891602]
We propose a diStance-aware Molecule graph Attention Network (S-MAN) tailored to drug-target binding affinity prediction.
As a dedicated solution, we first propose a position encoding mechanism to integrate the topological structure and spatial position information into the constructed pocket-ligand graph.
We also propose a novel edge-node hierarchical attentive aggregation structure that combines edge-level and node-level aggregation.
arXiv Detail & Related papers (2020-12-17T17:44:01Z)
- Deep Hypergraph U-Net for Brain Graph Embedding and Classification [0.0]
Network neuroscience examines the brain as a system represented by a network (or connectome).
We propose Hypergraph U-Net, a novel data embedding framework leveraging the hypergraph structure to learn low-dimensional embeddings of data samples.
We tested our method on small-scale and large-scale heterogeneous brain connectomic datasets including morphological and functional brain networks of autistic and demented patients.
arXiv Detail & Related papers (2020-08-30T08:15:18Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
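A hedged sketch of the affine skip connection described in the entry above: the block adds a fully connected (affine) transform of the layer input to the output of a graph convolution operator. The simple row-normalized GCN-style propagation below stands in for "any graph convolution operator".

# Sketch of an affine skip connection: graph convolution plus a learned affine map
# of the input features (illustrative; any graph convolution operator could be used).
import torch
import torch.nn as nn

class GCNWithAffineSkip(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.conv_weight = nn.Linear(in_dim, out_dim, bias=False)  # graph conv weights
        self.affine_skip = nn.Linear(in_dim, out_dim, bias=True)   # fully connected skip path

    def forward(self, X, A):
        # Simple GCN-style propagation: row-normalized adjacency with self-loops.
        A_hat = A + torch.eye(A.shape[0])
        A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)
        return torch.relu(A_hat @ self.conv_weight(X) + self.affine_skip(X))

# Toy usage on a 35-ROI graph with 8-dimensional node features.
X = torch.randn(35, 8)
A = (torch.rand(35, 35) > 0.7).float(); A = ((A + A.t()) > 0).float()
layer = GCNWithAffineSkip(8, 16)
print(layer(X, A).shape)   # torch.Size([35, 16])
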
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
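As a hedged illustration of the mutual-information objective in the GMI entry above: one common estimator trains a discriminator to score matched (input, embedding) pairs above mismatched ones with a binary cross-entropy (Jensen-Shannon-style) bound. The bilinear discriminator, the linear stand-in encoder, and the row-shuffling negatives below are assumptions for illustration, not the exact GMI estimator.

# Hedged sketch of MI maximization between node inputs and embeddings via a
# discriminator that separates matched pairs from shuffled (negative) pairs.
import torch
import torch.nn as nn

n, in_dim, emb_dim = 35, 8, 16
encoder = nn.Linear(in_dim, emb_dim)                 # stand-in for a graph neural encoder
bilinear = nn.Bilinear(in_dim, emb_dim, 1)           # discriminator D(x, z)
opt = torch.optim.Adam(list(encoder.parameters()) + list(bilinear.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

X = torch.randn(n, in_dim)                           # toy node features
for step in range(100):
    Z = encoder(X)
    pos = bilinear(X, Z)                             # matched (x_i, z_i) pairs -> label 1
    neg = bilinear(X[torch.randperm(n)], Z)          # mismatched pairs -> label 0
    loss = bce(pos, torch.ones_like(pos)) + bce(neg, torch.zeros_like(neg))
    opt.zero_grad(); loss.backward(); opt.step()     # minimizing this maximizes a JSD MI bound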