Inter-Domain Alignment for Predicting High-Resolution Brain Networks
Using Teacher-Student Learning
- URL: http://arxiv.org/abs/2110.03452v1
- Date: Wed, 6 Oct 2021 09:31:44 GMT
- Authors: Basar Demir, Alaa Bessadok, and Islem Rekik
- Abstract summary: We propose Learn to SuperResolve Brain Graphs with Knowledge Distillation Network (L2S-KDnet) to superresolve brain graphs.
Our teacher network is a graph encoder-decoder that first learns the LR brain graph embeddings, and then learns how to align the resulting latent representations to the HR ground-truth data distribution.
Next, our student network learns the knowledge of the aligned brain graphs as well as the topological structure of the predicted HR graphs transferred from the teacher.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate and automated super-resolution image synthesis is highly desired
since it has the great potential to circumvent the need for acquiring high-cost
medical scans and a time-consuming preprocessing pipeline of neuroimaging data.
However, existing deep learning frameworks are solely designed to predict a
high-resolution (HR) image from a low-resolution (LR) one, which limits their
generalization ability to brain graphs (i.e., connectomes). A small body of
work has focused on superresolving brain graphs, where the goal is to predict
an HR graph from a single LR graph. Although promising, existing works mainly
focus on superresolving graphs belonging to the same domain (e.g., functional),
overlooking the domain fracture existing between multimodal brain data
distributions (e.g., morphological and structural). To this end, we propose a
novel inter-domain adaptation framework, namely Learn to SuperResolve Brain
Graphs with Knowledge Distillation Network (L2S-KDnet), which adopts a
teacher-student paradigm to superresolve brain graphs. Our teacher network is a
graph encoder-decoder that first learns the LR brain graph embeddings, and
then learns how to align the resulting latent representations to the HR
ground-truth data distribution using adversarial regularization. Ultimately,
it decodes the HR graphs from the aligned embeddings. Next, our student network
learns the knowledge of the aligned brain graphs as well as the topological
structure of the predicted HR graphs transferred from the teacher. We further
leverage the decoder of the teacher to optimize the student network. L2S-KDnet
presents the first teacher-student (TS) architecture tailored for brain graph super-resolution
synthesis that is based on inter-domain alignment. Our experimental results
demonstrate substantial performance gains over benchmark methods.
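The teacher-to-student transfer described in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical loss, not the paper's exact objective: it assumes the student should match the teacher's predicted HR connectome both edge-wise and in a simple topological descriptor (node strength); the function names and the weighting `lam` are illustrative.

```python
import numpy as np

def node_strength(adj):
    # Node strength: sum of connection weights per node, a simple
    # topological descriptor of a weighted brain graph.
    return adj.sum(axis=1)

def kd_loss(teacher_hr, student_hr, lam=1.0):
    # Hypothetical distillation objective: the student matches the
    # teacher's predicted HR graph edge-wise (global term) and in
    # node strength (topological term), mirroring the idea of
    # transferring both aligned embeddings and topology.
    edge_term = np.mean((teacher_hr - student_hr) ** 2)
    topo_term = np.mean(
        (node_strength(teacher_hr) - node_strength(student_hr)) ** 2)
    return edge_term + lam * topo_term

rng = np.random.default_rng(0)
t = rng.random((8, 8))
t = (t + t.T) / 2                            # symmetric HR graph from the teacher
s = t + 0.01 * rng.standard_normal((8, 8))   # student prediction, slightly off
loss = kd_loss(t, s)
```

A perfect student (identical prediction) drives this loss to zero, while any edge-wise or topological mismatch increases it.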
Related papers
- Strongly Topology-preserving GNNs for Brain Graph Super-resolution [5.563171090433323]
Brain graph super-resolution (SR) is an under-explored yet highly relevant task in network neuroscience.
Current SR methods leverage graph neural networks (GNNs) thanks to their ability to handle graph-structured datasets.
We develop an efficient mapping from the edge space of our low-resolution (LR) brain graphs to the node space of a high-resolution (HR) dual graph.
arXiv Detail & Related papers (2024-11-01T03:29:04Z)
- Graph Neural Networks for Brain Graph Learning: A Survey [53.74244221027981]
Graph neural networks (GNNs) have demonstrated a significant advantage in mining graph-structured data.
Using GNNs to learn brain graph representations for brain disorder analysis has recently gained increasing attention.
In this paper, we aim to bridge this gap by reviewing brain graph learning works that utilize GNNs.
arXiv Detail & Related papers (2024-06-01T02:47:39Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
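The unroll-and-truncate idea behind the GDN can be sketched in a few lines of NumPy. This is a simplified sketch, not the paper's architecture: it assumes an identity mixing model (observed ≈ latent), so each unrolled step is a gradient update on a least-squares fit followed by soft-thresholding; in a trained GDN the step size and threshold would be learned per layer.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of the l1 norm: shrinks entries toward zero,
    # which promotes sparse recovered graphs.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def gdn_unrolled(observed, steps=10, step_size=0.5, tau=0.05):
    # Unrolled, truncated proximal gradient iterations for a sparse
    # latent adjacency S under the simplified model O = S: each step
    # is a gradient update on 0.5 * ||S - O||_F^2, then thresholding.
    S = np.zeros_like(observed)
    for _ in range(steps):
        grad = S - observed
        S = soft_threshold(S - step_size * grad, tau)
    return S

O = np.array([[0.0, 0.9, 0.02],
              [0.9, 0.0, 0.4],
              [0.02, 0.4, 0.0]])
S = gdn_unrolled(O)
```

Strong edges survive (shrunk by roughly the accumulated threshold), while near-zero edges are pruned exactly to zero.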
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- A Few-shot Learning Graph Multi-Trajectory Evolution Network for Forecasting Multimodal Baby Connectivity Development from a Baseline Timepoint [53.73316520733503]
We propose a Graph Multi-Trajectory Evolution Network (GmTE-Net), which adopts a teacher-student paradigm.
This is the first teacher-student architecture tailored for brain graph multi-trajectory growth prediction.
arXiv Detail & Related papers (2021-10-06T08:26:57Z)
- Brain Multigraph Prediction using Topology-Aware Adversarial Graph Neural Network [1.6114012813668934]
We introduce topoGAN architecture, which jointly predicts multiple brain graphs from a single brain graph.
Our three key innovations are: (i) designing a novel graph adversarial auto-encoder for predicting multiple brain graphs from a single one, (ii) clustering the encoded source graphs in order to handle the mode collapse issue of GAN and (iii) introducing a topological loss to force the prediction of topologically sound target brain graphs.
arXiv Detail & Related papers (2021-05-06T10:20:45Z)
- Brain Graph Super-Resolution Using Adversarial Graph Neural Network with Application to Functional Brain Connectivity [0.0]
We propose the first-ever deep graph super-resolution (GSR) framework that attempts to automatically generate high-resolution (HR) brain graphs.
Our proposed AGSR-Net framework outperformed its variants for predicting high-resolution functional brain graphs from low-resolution ones.
arXiv Detail & Related papers (2021-05-02T09:09:56Z)
- GSR-Net: Graph Super-Resolution Network for Predicting High-Resolution from Low-Resolution Functional Brain Connectomes [0.0]
We introduce GSR-Net, the first super-resolution framework operating on graph-structured data that generates high-resolution brain graphs from low-resolution graphs.
First, we adopt a U-Net-like architecture based on graph convolution, pooling and unpooling operations specific to non-Euclidean data.
Second, inspired by spectral theory, we break the symmetry of the U-Net architecture by topping it up with a graph super-resolution layer and two graph convolutional network layers to predict an HR graph.
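The core lifting operation of such a graph super-resolution layer can be sketched as follows. This is an illustrative sketch, not GSR-Net's exact layer: it assumes a lifting matrix `W_up` (random here, learned in practice) that maps an LR adjacency matrix to an HR one, with symmetrization to keep a valid undirected graph; the resolutions 35 and 160 are example parcellation sizes.

```python
import numpy as np

def gsr_layer(adj_lr, W_up):
    # Lift an (n_lr x n_lr) LR adjacency to (n_hr x n_hr) via a
    # learned mapping W_up of shape (n_hr, n_lr); symmetrizing keeps
    # the output a valid undirected weighted graph.
    adj_hr = W_up @ adj_lr @ W_up.T
    return (adj_hr + adj_hr.T) / 2

rng = np.random.default_rng(1)
n_lr, n_hr = 35, 160                     # example LR and HR resolutions
adj_lr = rng.random((n_lr, n_lr))
adj_lr = (adj_lr + adj_lr.T) / 2         # symmetric LR connectome
W_up = rng.random((n_hr, n_lr)) / n_lr   # stand-in for a learned lifting
adj_hr = gsr_layer(adj_lr, W_up)
```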
arXiv Detail & Related papers (2020-09-23T12:02:55Z)
- Topology-Aware Generative Adversarial Network for Joint Prediction of Multiple Brain Graphs from a Single Brain Graph [1.2891210250935146]
We introduce MultiGraphGAN architecture, which predicts multiple brain graphs from a single brain graph.
Its core contributions include: (i) designing a graph adversarial auto-encoder for jointly predicting brain graphs from a single one, and (ii) handling the mode collapse problem of GAN by clustering the encoded source graphs and proposing a cluster-specific decoder.
arXiv Detail & Related papers (2020-09-23T11:23:08Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
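The affine skip connection described above, i.e. combining a fully connected layer with a graph convolution operator, can be sketched in NumPy. This is an illustrative sketch under assumptions: the mean-aggregation convolution and all names are stand-ins, not the paper's exact formulation.

```python
import numpy as np

def graph_conv(adj, X, W):
    # Minimal mean-aggregation graph convolution used for illustration:
    # average each node's neighbor features, then project with W.
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                  # avoid division by zero for isolated nodes
    return (adj @ X / deg) @ W

def affine_skip(adj, X, W_conv, W_skip, b_skip):
    # Affine skip connection: add an affine (fully connected) transform
    # of the layer input to the output of any graph convolution operator.
    return graph_conv(adj, X, W_conv) + (X @ W_skip + b_skip)

rng = np.random.default_rng(2)
n, d_in, d_out = 6, 4, 3
adj = (rng.random((n, n)) > 0.5).astype(float)   # random binary graph
X = rng.standard_normal((n, d_in))               # node features
out = affine_skip(adj, X,
                  rng.standard_normal((d_in, d_out)),
                  rng.standard_normal((d_in, d_out)),
                  np.zeros(d_out))
```

Because the skip path is affine rather than a plain identity, it also works when input and output feature dimensions differ.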