One Representative-Shot Learning Using a Population-Driven Template with
Application to Brain Connectivity Classification and Evolution Prediction
- URL: http://arxiv.org/abs/2110.11238v1
- Date: Wed, 6 Oct 2021 08:36:00 GMT
- Authors: Umut Guvercin, Mohammed Amine Gharsallaoui and Islem Rekik
- Abstract summary: Graph neural networks (GNNs) have been introduced to the field of network neuroscience.
We take a very different approach in training GNNs, where we aim to learn with one sample and achieve the best performance.
We present the first one-shot paradigm where a GNN is trained on a single population-driven template.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Few-shot learning presents a challenging paradigm for training discriminative
models on a few training samples representing the target classes to
discriminate. However, deep learning classification methods are ill-suited to
such learning, as they require large amounts of training data -- let alone for
one-shot learning. Recently, graph neural networks (GNNs) have been
introduced to the field of network neuroscience, where the brain connectivity
is encoded in a graph. However, with scarce neuroimaging datasets particularly
for rare diseases and low-resource clinical facilities, such data-devouring
architectures might fail to learn the target task. In this paper, we take a
very different approach to training GNNs: we aim to learn from a single sample
while achieving the best performance -- a formidable challenge to tackle.
Specifically, we present the first one-shot paradigm in which a GNN is trained
on a single population-driven template -- namely, a connectional brain template
(CBT). A CBT is a compact representation of a population of brain graphs
capturing the unique connectivity patterns shared across individuals. It is
analogous to brain image atlases for neuroimaging datasets. Using a
one-representative CBT as a training sample, we alleviate the training load of
GNN models while boosting their performance across a variety of classification
and regression tasks. We demonstrate that our method significantly outperforms
benchmark one-shot learning methods on downstream classification and
time-dependent brain graph forecasting tasks, while remaining competitive with
the conventional train-on-all strategy. Our source code can be found at
https://github.com/basiralab/one-representative-shot-learning.
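To make the CBT idea concrete: the abstract describes a CBT as a compact representation fusing a population of brain graphs into a single template. The sketch below illustrates one simple way such a template could be derived -- an element-wise median across subjects' connectivity matrices. This is a minimal illustrative heuristic, not the paper's actual CBT estimation procedure; the function name `build_cbt` and the median-fusion rule are assumptions for demonstration only.

```python
import numpy as np

def build_cbt(connectomes):
    """Fuse a population of brain connectivity matrices into one template
    by taking the element-wise median across subjects.

    connectomes: array of shape (n_subjects, n_rois, n_rois), where each
    slice is a symmetric weighted adjacency matrix.
    """
    connectomes = np.asarray(connectomes, dtype=float)
    cbt = np.median(connectomes, axis=0)   # element-wise median across subjects
    cbt = (cbt + cbt.T) / 2.0              # enforce symmetry of the template
    np.fill_diagonal(cbt, 0.0)             # no self-connections
    return cbt

# Toy population: 5 subjects, 4 regions of interest (ROIs)
rng = np.random.default_rng(0)
pop = rng.random((5, 4, 4))
pop = (pop + pop.transpose(0, 2, 1)) / 2.0  # symmetrize each subject's graph
cbt = build_cbt(pop)
```

The resulting matrix has the same shape as a single subject's connectome, which is what allows it to serve as a one-representative training sample for a GNN in place of the full population.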
Related papers
- Training Better Deep Learning Models Using Human Saliency [11.295653130022156]
This work explores how human judgement about salient regions of an image can be introduced into deep convolutional neural network (DCNN) training.
We propose a new component of the loss function that ConveYs Brain Oversight to Raise Generalization (CYBORG) and penalizes the model for using non-salient regions.
arXiv Detail & Related papers (2024-10-21T16:52:44Z) - BEND: Bagging Deep Learning Training Based on Efficient Neural Network Diffusion [56.9358325168226]
We propose a Bagging deep learning training algorithm based on Efficient Neural network Diffusion (BEND)
Our approach is simple but effective: it first uses the weights and biases of multiple trained models as inputs to train an autoencoder and a latent diffusion model.
Our proposed BEND algorithm can consistently outperform the mean and median accuracies of both the original trained model and the diffused model.
arXiv Detail & Related papers (2024-03-23T08:40:38Z) - Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), which alleviates the learning bias via a novel, highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Population Template-Based Brain Graph Augmentation for Improving
One-Shot Learning Classification [0.0]
One-shot learning is one of the most challenging and trending concepts of deep learning.
We benchmarked our proposed solution on the AD/LMCI dataset, which consists of brain connectomes of subjects with Alzheimer's Disease (AD) and Late Mild Cognitive Impairment (LMCI).
Our classification results not only show better accuracy when augmented data generated from a single sample is introduced, but also yield more balanced scores on other metrics.
arXiv Detail & Related papers (2022-12-14T14:56:00Z) - Neural networks trained with SGD learn distributions of increasing
complexity [78.30235086565388]
We show that neural networks trained with gradient descent initially classify their inputs using lower-order input statistics, and exploit higher-order statistics only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Multi network InfoMax: A pre-training method involving graph
convolutional networks [0.0]
This paper presents a pre-training method involving graph convolutional/neural networks (GCNs/GNNs).
The learned high-level graph latent representations help increase performance for downstream graph classification tasks.
We apply our method to a neuroimaging dataset for classifying subjects into healthy control (HC) and schizophrenia (SZ) groups.
arXiv Detail & Related papers (2021-11-01T21:53:20Z) - A Few-shot Learning Graph Multi-Trajectory Evolution Network for
Forecasting Multimodal Baby Connectivity Development from a Baseline
Timepoint [53.73316520733503]
We propose a Graph Multi-Trajectory Evolution Network (GmTE-Net), which adopts a teacher-student paradigm.
This is the first teacher-student architecture tailored for brain graph multi-trajectory growth prediction.
arXiv Detail & Related papers (2021-10-06T08:26:57Z) - Dynamic Neural Diversification: Path to Computationally Sustainable
Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z) - Predicting cognitive scores with graph neural networks through sample
selection learning [0.0]
Functional brain connectomes are used to predict cognitive measures such as intelligence quotient (IQ) scores.
We design a novel regression GNN model (namely RegGNN) for predicting IQ scores from brain connectivity.
We also propose a learning-based sample selection method that learns how to choose the training samples with the highest expected predictive power.
arXiv Detail & Related papers (2021-06-17T11:45:39Z) - Biologically-Motivated Deep Learning Method using Hierarchical
Competitive Learning [0.0]
I propose introducing unsupervised competitive learning, which requires only forward-propagated signals, as a pre-training method for CNNs.
The proposed method could be useful for a variety of poorly labeled data, for example, time series or medical data.
arXiv Detail & Related papers (2020-01-04T20:07:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.