Predicting Brain Multigraph Population From a Single Graph Template for
Boosting One-Shot Classification
- URL: http://arxiv.org/abs/2209.06005v1
- Date: Tue, 13 Sep 2022 13:51:44 GMT
- Title: Predicting Brain Multigraph Population From a Single Graph Template for
Boosting One-Shot Classification
- Authors: Furkan Pala and Islem Rekik
- Abstract summary: A central challenge in training one-shot learning models is the limited representativeness of the available shots of the data space.
We propose a hybrid graph neural network (GNN) architecture, namely Multigraph Generator Network or briefly MultigraphGNet.
- We hope our framework sheds light on future research into multigraph augmentation from a single graph.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A central challenge in training one-shot learning models is the limited
representativeness of the available shots of the data space. Particularly in
the field of network neuroscience where the brain is represented as a graph,
such models may lead to low performance when classifying brain states (e.g.,
typical vs. autistic). To cope with this, most of the existing works involve a
data augmentation step to increase the size of the training set, its diversity
and representativeness. Though effective, such augmentation methods are limited
to generating samples with the same size as the input shots (e.g., generating
brain connectivity matrices from a single shot matrix). To the best of our
knowledge, the problem of generating brain multigraphs capturing multiple types
of connectivity between pairs of nodes (i.e., anatomical regions) from a single
brain graph remains unsolved. In this paper, we propose a novel
hybrid graph neural network (GNN) architecture, namely Multigraph Generator
Network or briefly MultigraphGNet, comprising two subnetworks: (1) a
many-to-one GNN which integrates an input population of brain multigraphs into
a single template graph, namely a connectional brain template (CBT), and (2) a
reverse one-to-many U-Net network which takes the learned CBT in each training
step and outputs the reconstructed input multigraph population. Both networks
are trained in an end-to-end way using a cyclic loss. Experimental results
demonstrate that our MultigraphGNet boosts the performance of an independent
classifier when trained on the augmented brain multigraphs in comparison with
training on a single CBT from each class. We hope that our framework can shed
light on future research into multigraph augmentation from a single graph. Our
MultigraphGNet source code is available at
https://github.com/basiralab/MultigraphGNet.
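The two-subnetwork design described in the abstract (many-to-one fusion into a CBT, then one-to-many reconstruction, trained with a cyclic loss) can be illustrated with a simplified toy. The sketch below is not the MultigraphGNet implementation: the real model uses learned GNN and U-Net subnetworks, whereas here the fusion is a plain element-wise average and the "decoder" merely replicates the template. All function names are hypothetical.

```python
# Toy sketch of the CBT fusion / reconstruction cycle.
# A multigraph is a list of same-sized connectivity matrices (one per view);
# a population is a list of multigraphs (one per subject).

def fuse_to_cbt(population):
    """Many-to-one step: average all views of all subjects into a
    single connectional brain template (CBT) matrix."""
    n_subjects = len(population)
    n_views = len(population[0])
    n = len(population[0][0])
    cbt = [[0.0] * n for _ in range(n)]
    for multigraph in population:
        for view in multigraph:
            for i in range(n):
                for j in range(n):
                    cbt[i][j] += view[i][j]
    total = n_subjects * n_views
    return [[v / total for v in row] for row in cbt]

def reconstruct_population(cbt, n_subjects, n_views):
    """One-to-many step (placeholder): expand the CBT back into a
    population of multigraphs. A learned decoder would go here."""
    return [[[row[:] for row in cbt] for _ in range(n_views)]
            for _ in range(n_subjects)]

def cyclic_loss(population, reconstructed):
    """Mean absolute round-trip error between input and output."""
    total, count = 0.0, 0
    for mg, rec_mg in zip(population, reconstructed):
        for view, rec in zip(mg, rec_mg):
            for row, rec_row in zip(view, rec):
                for a, b in zip(row, rec_row):
                    total += abs(a - b)
                    count += 1
    return total / count

population = [
    [[[0.0, 1.0], [1.0, 0.0]], [[0.0, 0.5], [0.5, 0.0]]],  # subject 1, 2 views
    [[[0.0, 0.8], [0.8, 0.0]], [[0.0, 0.3], [0.3, 0.0]]],  # subject 2, 2 views
]
cbt = fuse_to_cbt(population)
rec = reconstruct_population(cbt, n_subjects=2, n_views=2)
print(round(cyclic_loss(population, rec), 3))
```

In the paper both steps are learned end-to-end, so minimizing the cyclic loss forces the CBT to be a representative, invertible summary of the population rather than a simple mean.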
Related papers
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Multi-Head Graph Convolutional Network for Structural Connectome
Classification [8.658134276685404]
We propose a machine-learning model inspired by graph convolutional networks (GCNs).
The proposed network is a simple design that employs different heads involving graph convolutions focused on edges and nodes.
To test the ability of our model to extract complementary and representative features from brain connectivity data, we chose the task of sex classification.
arXiv Detail & Related papers (2023-05-02T15:04:30Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training.
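The edge-masking step described above can be sketched in a few lines. This is an illustrative, dependency-free toy under the stated assumptions (an undirected edge list, a fixed mask ratio), not the MGAE implementation, which uses a GNN encoder/decoder to reconstruct the held-out edges; `mask_edges` is a hypothetical helper name.

```python
import random

def mask_edges(edges, mask_ratio=0.7, seed=0):
    """Randomly hold out a proportion of edges, as in masked graph
    autoencoding: the model sees `visible` during training and is asked
    to reconstruct `masked`."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = edges[:]
    rng.shuffle(shuffled)
    n_masked = int(len(shuffled) * mask_ratio)
    return shuffled[n_masked:], shuffled[:n_masked]  # (visible, masked)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
visible, masked = mask_edges(edges, mask_ratio=0.5)
print(len(visible), len(masked))
```

The reconstruction loss is then computed only on the masked set, which is what makes the pretext task non-trivial at high mask ratios.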
arXiv Detail & Related papers (2022-01-07T16:48:07Z)
- Multi network InfoMax: A pre-training method involving graph
convolutional networks [0.0]
This paper presents a pre-training method involving graph convolutional/neural networks (GCNs/GNNs).
The learned high-level graph latent representations help increase performance for downstream graph classification tasks.
We apply our method to a neuroimaging dataset for classifying subjects into healthy control (HC) and schizophrenia (SZ) groups.
arXiv Detail & Related papers (2021-11-01T21:53:20Z)
- One Representative-Shot Learning Using a Population-Driven Template with
Application to Brain Connectivity Classification and Evolution Prediction [0.0]
Graph neural networks (GNNs) have been introduced to the field of network neuroscience.
We take a very different approach in training GNNs, where we aim to learn with one sample and achieve the best performance.
We present the first one-shot paradigm where a GNN is trained on a single population-driven template.
arXiv Detail & Related papers (2021-10-06T08:36:00Z)
- A Few-shot Learning Graph Multi-Trajectory Evolution Network for
Forecasting Multimodal Baby Connectivity Development from a Baseline
Timepoint [53.73316520733503]
We propose a Graph Multi-Trajectory Evolution Network (GmTE-Net), which adopts a teacher-student paradigm.
This is the first teacher-student architecture tailored for brain graph multi-trajectory growth prediction.
arXiv Detail & Related papers (2021-10-06T08:26:57Z)
- Learning Graph Neural Networks with Positive and Unlabeled Nodes [34.903471348798725]
Graph neural networks (GNNs) are important tools for transductive learning tasks, such as node classification in graphs.
Most GNN models aggregate information from short distances in each round and fail to capture long-distance relationships in graphs.
In this paper, we propose a novel graph neural network framework, long-short distance aggregation networks (LSDAN) to overcome these limitations.
arXiv Detail & Related papers (2021-03-08T11:43:37Z)
- Foreseeing Brain Graph Evolution Over Time Using Deep Adversarial
Network Normalizer [0.0]
We propose the first graph-based Generative Adversarial Network (gGAN) that learns how to normalize brain graphs.
Our proposed method achieved the lowest brain disease evolution prediction error using a single baseline timepoint.
arXiv Detail & Related papers (2020-09-23T14:25:40Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.