StairwayGraphNet for Inter- and Intra-modality Multi-resolution Brain
Graph Alignment and Synthesis
- URL: http://arxiv.org/abs/2110.04279v1
- Date: Wed, 6 Oct 2021 09:49:38 GMT
- Title: StairwayGraphNet for Inter- and Intra-modality Multi-resolution Brain
Graph Alignment and Synthesis
- Authors: Islem Mhiri, Mohamed Ali Mahjoub and Islem Rekik
- Abstract summary: We propose a multi-resolution StairwayGraphNet (SG-Net) framework to infer a target graph modality based on a given modality and super-resolve brain graphs in both inter and intra domains.
Our SG-Net is grounded in three main contributions: (i) predicting a target graph from a source one based on a novel graph generative adversarial network in both inter and intra domains, (ii) generating high-resolution brain graphs without resorting to the time-consuming and expensive MRI processing steps, and (iii) enforcing the source distribution to match that of the ground truth graphs.
- Score: 1.6114012813668934
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Synthesizing multimodality medical data provides complementary knowledge and
helps doctors make precise clinical decisions. Although promising, existing
multimodal brain graph synthesis frameworks have several limitations. First,
they mainly tackle only one problem (intra- or inter-modality), limiting their
generalizability to synthesizing inter- and intra-modality simultaneously.
Second, while a few techniques work on super-resolving low-resolution brain graphs within a single modality (i.e., intra), inter-modality graph
super-resolution remains unexplored though this would avoid the need for costly
data collection and processing. More importantly, both target and source
domains might have different distributions, which causes a domain fracture
between them. To fill these gaps, we propose a multi-resolution
StairwayGraphNet (SG-Net) framework to jointly infer a target graph modality
based on a given modality and super-resolve brain graphs in both inter and
intra domains. Our SG-Net is grounded in three main contributions: (i)
predicting a target graph from a source one based on a novel graph generative
adversarial network in both inter (e.g., morphological-functional) and intra
(e.g., functional-functional) domains, (ii) generating high-resolution brain
graphs without resorting to the time-consuming and expensive MRI processing
steps, and (iii) enforcing the source distribution to match that of the ground
truth graphs using an inter-modality aligner to relax the loss function to
optimize. Moreover, we design a new Ground Truth-Preserving loss function to
guide both generators in learning the topological structure of ground truth
brain graphs more accurately. Our comprehensive experiments on predicting
target brain graphs from source graphs using a multi-resolution stairway showed
that our method outperforms both its variants and state-of-the-art methods.
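To make contribution (i) and the Ground Truth-Preserving loss more concrete, here is a minimal, hypothetical PyTorch sketch of an adversarial graph predictor: a generator maps a source connectivity matrix to a target-resolution one, and its loss combines an adversarial term with an L1 "ground truth-preserving" term on edges and node strengths. The node counts (35 and 160), the plain MLP layers, and the names `Generator`, `Discriminator`, and `gt_preserving_loss` are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of the adversarial graph-prediction idea in SG-Net.
# Names, sizes, and the MLP layers are illustrative assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn

N_SOURCE = 35    # assumed number of nodes in the source (e.g., morphological) graph
N_TARGET = 160   # assumed number of nodes in the target (e.g., functional) graph

class Generator(nn.Module):
    """Maps a source connectivity matrix to a target-resolution one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SOURCE * N_SOURCE, 1024), nn.ReLU(),
            nn.Linear(1024, N_TARGET * N_TARGET), nn.Sigmoid(),
        )
    def forward(self, src):                       # src: (batch, N_SOURCE, N_SOURCE)
        out = self.net(src.flatten(1)).view(-1, N_TARGET, N_TARGET)
        return 0.5 * (out + out.transpose(1, 2))  # keep the predicted graph symmetric

class Discriminator(nn.Module):
    """Scores whether a target-resolution graph looks real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_TARGET * N_TARGET, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )
    def forward(self, g):
        return self.net(g.flatten(1))

def gt_preserving_loss(pred, target):
    """Illustrative 'ground truth-preserving' term: L1 on edges plus L1 on
    node strengths (a simple topological measure)."""
    edge_l1 = (pred - target).abs().mean()
    strength_l1 = (pred.sum(-1) - target.sum(-1)).abs().mean()
    return edge_l1 + strength_l1

# One (hypothetical) generator update combining adversarial and GT-preserving terms.
gen, disc = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
src = torch.rand(8, N_SOURCE, N_SOURCE)
tgt = torch.rand(8, N_TARGET, N_TARGET)
pred = gen(src)
g_loss = bce(disc(pred), torch.ones(8, 1)) + gt_preserving_loss(pred, tgt)
g_loss.backward()
```

The symmetrization step simply reflects that brain connectivity matrices are undirected; the node-strength term is one of many possible topology measures.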
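Contribution (iii), matching the source distribution to that of the ground-truth graphs with an inter-modality aligner, can likewise be sketched. Below, the two edge-weight distributions are summarized as Gaussians and matched with a closed-form KL divergence; this Gaussian summary, the shared resolution, and the tiny MLP aligner are simplifying assumptions, not the paper's exact alignment loss.

```python
# Hypothetical sketch of an inter-modality aligner trained so that the
# edge-weight distribution of the aligned source graphs matches that of the
# ground-truth graphs.  The Gaussian KL summary is a simplification.
import torch
import torch.nn as nn

N = 35  # assumed source resolution

class Aligner(nn.Module):
    """Warps a source connectivity matrix toward the target domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N * N, 512), nn.ReLU(),
            nn.Linear(512, N * N), nn.Sigmoid(),
        )
    def forward(self, g):
        out = self.net(g.flatten(1)).view(-1, N, N)
        return 0.5 * (out + out.transpose(1, 2))

def gaussian_kl(x, y, eps=1e-6):
    """KL( N(mu_x, var_x) || N(mu_y, var_y) ) between Gaussians fitted to the
    edge weights of two batches of graphs."""
    mu_x, var_x = x.mean(), x.var() + eps
    mu_y, var_y = y.mean(), y.var() + eps
    return 0.5 * (torch.log(var_y / var_x) + (var_x + (mu_x - mu_y) ** 2) / var_y - 1.0)

aligner = Aligner()
source = torch.rand(8, N, N)       # source-modality graphs
ground_truth = torch.rand(8, N, N) # ground-truth graphs (same resolution, for illustration)
aligned = aligner(source)
align_loss = gaussian_kl(aligned, ground_truth)
align_loss.backward()
```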
Related papers
- MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders Prediction [8.592259720470697]
We propose MM-GTUNets, an end-to-end, graph-transformer-based multi-modal graph deep learning framework for brain disorders prediction.
We introduce Modality Reward Representation Learning (MRRL) which adaptively constructs population graphs using a reward system.
We also propose Adaptive Cross-Modal Graph Learning (ACMGL), which captures critical modality-specific and modality-shared features.
arXiv Detail & Related papers (2024-06-20T16:14:43Z) - Predicting Infant Brain Connectivity with Federated Multi-Trajectory
GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Using the power of federation, we aggregate locally learned models across diverse hospitals with limited datasets (a hedged federated-averaging sketch appears after this list).
arXiv Detail & Related papers (2024-01-01T10:20:01Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities (a hedged dual-channel sketch appears after this list).
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- MGNNI: Multiscale Graph Neural Networks with Implicit Layers [53.75421430520501]
Implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs.
We introduce and justify two weaknesses of implicit GNNs: the constrained expressiveness due to their limited effective range for capturing long-range dependencies, and their lack of ability to capture multiscale information on graphs at multiple resolutions.
We propose a multiscale graph neural network with implicit layers (MGNNI) which is able to model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies (a hedged fixed-point-iteration sketch appears after this list).
arXiv Detail & Related papers (2022-10-15T18:18:55Z)
- Counterfactual Intervention Feature Transfer for Visible-Infrared Person Re-identification [69.45543438974963]
We find that graph-based methods for the visible-infrared person re-identification task (VI-ReID) suffer from poor generalization because of two issues.
The well-trained input features weaken the learning of graph topology, making it insufficiently generalizable during inference.
We propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems.
arXiv Detail & Related papers (2022-08-01T16:15:31Z) - Inter-Domain Alignment for Predicting High-Resolution Brain Networks
Using Teacher-Student Learning [0.0]
We propose Learn to SuperResolve Brain Graphs with Knowledge Distillation Network (L2S-KDnet) to super-resolve brain graphs.
Our teacher network is a graph encoder-decoder that first learns the LR brain graph embeddings and then learns how to align the resulting latent representations to the HR ground-truth data distribution.
Next, our student network learns the knowledge of the aligned brain graphs as well as the topological structure of the predicted HR graphs transferred from the teacher (a hedged distillation sketch appears after this list).
arXiv Detail & Related papers (2021-10-06T09:31:44Z)
- Learning Multi-Granular Spatio-Temporal Graph Network for Skeleton-based Action Recognition [49.163326827954656]
We propose a novel multi-granular spatio-temporal graph network for skeleton-based action classification.
We develop a dual-head graph network consisting of two interleaved branches, which enables us to extract features at two spatio-temporal resolutions.
We conduct extensive experiments on three large-scale datasets.
arXiv Detail & Related papers (2021-08-10T09:25:07Z) - Non-isomorphic Inter-modality Graph Alignment and Synthesis for Holistic
Brain Mapping [1.433758865948252]
We propose an inter-modality aligner of non-isomorphic graphs (IMANGraphNet) framework to infer a target graph modality based on a given modality.
Our three core contributions lie in (i) predicting a target graph (e.g., functional) from a source graph (e.g., morphological) based on a novel graph generative adversarial network (gGAN).
Our comprehensive experiments on predicting functional from morphological graphs demonstrate the outperformance of IMANGraphNet in comparison with its variants.
arXiv Detail & Related papers (2021-06-30T08:59:55Z) - Brain Multigraph Prediction using Topology-Aware Adversarial Graph
Neural Network [1.6114012813668934]
We introduce topoGAN architecture, which jointly predicts multiple brain graphs from a single brain graph.
Our three key innovations are: (i) designing a novel graph adversarial auto-encoder for predicting multiple brain graphs from a single one, (ii) clustering the encoded source graphs in order to handle the mode collapse issue of GANs, and (iii) introducing a topological loss to force the prediction of topologically sound target brain graphs (a hedged clustering-and-decoding sketch appears after this list).
arXiv Detail & Related papers (2021-05-06T10:20:45Z) - Topology-Aware Generative Adversarial Network for Joint Prediction of
Multiple Brain Graphs from a Single Brain Graph [1.2891210250935146]
We introduce MultiGraphGAN architecture, which predicts multiple brain graphs from a single brain graph.
Its three core contributions lie in: (i) designing a graph adversarial auto-encoder for jointly predicting brain graphs from a single one, and (ii) handling the mode collapse problem of GANs by clustering the encoded source graphs and proposing cluster-specific decoders.
arXiv Detail & Related papers (2020-09-23T11:23:08Z) - Graph Representation Learning via Graphical Mutual Information
Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder (a hedged contrastive-estimation sketch appears after this list).
arXiv Detail & Related papers (2020-02-04T08:33:49Z)