Residual Embedding Similarity-Based Network Selection for Predicting
Brain Network Evolution Trajectory from a Single Observation
- URL: http://arxiv.org/abs/2009.11110v1
- Date: Wed, 23 Sep 2020 12:40:04 GMT
- Title: Residual Embedding Similarity-Based Network Selection for Predicting
Brain Network Evolution Trajectory from a Single Observation
- Authors: Ahmet Serkan Goktas, Alaa Bessadok and Islem Rekik
- Abstract summary: We propose Residual Embedding Similarity-Based Network selection (RESNets) for predicting brain network evolution trajectory from a single timepoint.
Our experiments on both healthy and disordered brain networks demonstrate the success of our proposed method.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While existing predictive frameworks are able to handle Euclidean structured
data (i.e., brain images), they might fail to generalize to geometric
non-Euclidean data such as brain networks. Besides, these frameworks root the
sample selection step in a Euclidean or learned similarity measure between
vectorized training and testing brain networks. Such a vectorized connectomic
representation might include irrelevant and redundant features that could
mislead the training sample selection step. Undoubtedly, this fails to exploit
and preserve the topology of the brain connectome. To overcome this major
drawback, we propose Residual Embedding Similarity-Based Network selection
(RESNets) for predicting brain network evolution trajectory from a single
timepoint. RESNets first learns a compact geometric embedding of each training
and testing sample using an adversarial connectome embedding network. This nicely
reduces the high-dimensionality of brain networks while preserving their
topological properties via graph convolutional networks. Next, to compute the
similarity between subjects, we introduce the concept of a connectional brain
template (CBT), a fixed network reference, where we further represent each
training and testing network as a deviation from the reference CBT in the
embedding space. As such, we select the most similar training subjects to the
testing subject at baseline by comparing their learned residual embeddings with
respect to the pre-defined CBT. Once the best training samples are selected at
baseline, we simply average their corresponding brain networks at follow-up
timepoints to predict the evolution trajectory of the testing network. Our
experiments on both healthy and disordered brain networks demonstrate the
success of our proposed method in comparison to ablated versions of RESNets and
traditional approaches.
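To make the selection-and-averaging step concrete, below is a minimal Python sketch of the residual-based sample selection and follow-up prediction described in the abstract. It assumes the adversarial connectome embedding network has already mapped each brain network to a low-dimensional vector and that a CBT embedding is available; the function names, the Euclidean distance on residuals, and the choice of k are illustrative assumptions, not the paper's exact specification.

import numpy as np

def residual_embeddings(embeddings, cbt_embedding):
    # Represent each subject as its deviation (residual) from the
    # connectional brain template (CBT) in the embedding space.
    return embeddings - cbt_embedding

def predict_followup(train_resid, test_resid, train_followups, k=5):
    # Select the k training subjects whose residual embeddings are
    # closest to the testing subject's residual, then average their
    # follow-up brain networks to predict the testing trajectory.
    dists = np.linalg.norm(train_resid - test_resid, axis=1)
    nearest = np.argsort(dists)[:k]
    return train_followups[nearest].mean(axis=0)

# Toy usage with random data (all shapes are illustrative only).
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(40, 16))      # 40 training subjects, 16-dim embeddings
test_emb = rng.normal(size=16)             # one testing subject at baseline
cbt_emb = train_emb.mean(axis=0)           # stand-in for a learned CBT embedding
train_t1 = rng.normal(size=(40, 35, 35))   # training networks at a follow-up timepoint

pred_t1 = predict_followup(residual_embeddings(train_emb, cbt_emb),
                           test_emb - cbt_emb, train_t1, k=5)

Since the selected neighbors' follow-up networks are simply averaged, the prediction quality rests entirely on how well the residual embeddings capture topological similarity at baseline.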
Related papers
- Iterative self-transfer learning: A general methodology for response
time-history prediction based on small dataset [0.0]
An iterative self-transfer learning method for training neural networks on small datasets is proposed in this study.
The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets.
arXiv Detail & Related papers (2023-06-14T18:48:04Z) - BrainNPT: Pre-training of Transformer networks for brain network
classification [3.8906116457135966]
We proposed a Transformer-based neural network, named BrainNPT, for brain functional network classification.
We proposed a pre-training framework for BrainNPT model to leverage unlabeled brain network data.
The results of the classification experiments demonstrated that the BrainNPT model with pre-training achieved the best performance.
arXiv Detail & Related papers (2023-05-02T13:01:59Z) - Neural Networks beyond explainability: Selective inference for sequence
motifs [5.620334754517149]
We introduce SEISM, a selective inference procedure to test the association between extracted features and the predicted phenotype.
We adapt existing sampling-based selective inference procedures by quantizing this selection over an infinite set to a large but finite grid.
We show that sampling under a specific choice of parameters is sufficient to characterize the composite null hypothesis typically used for selective inference.
arXiv Detail & Related papers (2022-12-23T10:49:07Z) - Quasi-orthogonality and intrinsic dimensions as measures of learning and
generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of neural networks' feature spaces may jointly serve as discriminants of network performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - A Comparative Study of Machine Learning Methods for Predicting the
Evolution of Brain Connectivity from a Baseline Timepoint [0.0]
Predicting the evolution of the brain network, also called the connectome, makes it possible to spot connectivity-related neurological disorders at earlier stages.
We organized a Kaggle competition where 20 competing teams designed advanced machine learning pipelines for predicting the brain connectivity evolution from a single timepoint.
arXiv Detail & Related papers (2021-09-16T06:13:49Z) - Predicting cognitive scores with graph neural networks through sample
selection learning [0.0]
Functional brain connectomes are used to predict cognitive measures such as intelligence quotient (IQ) scores.
We design a novel regression GNN model (namely RegGNN) for predicting IQ scores from brain connectivity.
We also propose a learning-based sample selection method that learns how to choose the training samples with the highest expected predictive power.
arXiv Detail & Related papers (2021-06-17T11:45:39Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive
Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges to reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z) - Context-Aware Refinement Network Incorporating Structural Connectivity
Prior for Brain Midline Delineation [50.868845400939314]
We propose a context-aware refinement network (CAR-Net) to refine and integrate the feature pyramid representation generated by the UNet.
To keep the structural connectivity of the brain midline, we introduce a novel connectivity regular loss.
The proposed method requires fewer parameters and outperforms three state-of-the-art methods in terms of four evaluation metrics.
arXiv Detail & Related papers (2020-07-10T14:01:20Z)