Invertible Neural Networks for Graph Prediction
- URL: http://arxiv.org/abs/2206.01163v1
- Date: Thu, 2 Jun 2022 17:28:33 GMT
- Title: Invertible Neural Networks for Graph Prediction
- Authors: Chen Xu, Xiuyuan Cheng, Yao Xie
- Abstract summary: In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
- Score: 22.140275054568985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we address conditional generation using deep invertible neural networks. This is a type of problem where one aims to infer the most probable inputs $X$ given outcomes $Y$. We call our method \textit{invertible graph neural network} (iGNN) due to the primary focus on generating node features on graph data. A notable feature of our proposed method is that during network training, we revise the typically-used loss objective in normalizing flow and consider Wasserstein-2 regularization to facilitate the training process. Algorithmically, we adopt an end-to-end training approach, since our objective is to address prediction and generation in the forward and backward processes at once through a single model. Theoretically, we characterize the conditions for identifiability of a true mapping, the existence and invertibility of the mapping, and the expressiveness of iGNN in learning the mapping. Experimentally, we verify the performance of iGNN on both simulated and real-world datasets, demonstrating through extensive numerical experiments that iGNN shows clear improvement over competing conditional generation benchmarks on high-dimensional and/or non-convex data.
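To make the training objective concrete, below is a minimal, illustrative PyTorch sketch of a conditional normalizing flow in the spirit the abstract describes: an invertible map trained with the change-of-variables negative log-likelihood plus a Wasserstein-2-style transport penalty $\|f(x)-x\|^2$. The single coupling layer, the conditioning scheme, and the `w2_weight` coefficient are illustrative assumptions, not the paper's exact iGNN construction (which operates on node features over a graph); a full flow would stack several such couplings with permutations between them.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One affine coupling layer conditioned on the outcome y.

    The first half of x passes through unchanged; the second half is
    scaled and shifted by networks that see [x1, y], so the map is
    invertible in closed form with a triangular Jacobian.
    """
    def __init__(self, dim, y_dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        in_dim, out_dim = self.d + y_dim, dim - self.d
        self.scale_net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim), nn.Tanh())
        self.shift_net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, x, y):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        h = torch.cat([x1, y], dim=1)
        s, t = self.scale_net(h), self.shift_net(h)
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)  # log|det Jacobian| of the coupling
        return torch.cat([x1, z2], dim=1), log_det

    def inverse(self, z, y):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        h = torch.cat([z1, y], dim=1)
        s, t = self.scale_net(h), self.shift_net(h)
        return torch.cat([z1, (z2 - t) * torch.exp(-s)], dim=1)

def flow_loss(model, x, y, w2_weight=0.1):
    """Flow NLL under a standard-normal base plus a W2-style transport penalty."""
    z, log_det = model(x, y)
    nll = 0.5 * (z ** 2).sum(dim=1) - log_det   # -log p(x|y) up to a constant
    w2 = ((z - x) ** 2).sum(dim=1)              # squared transport cost ||f(x) - x||^2
    return (nll + w2_weight * w2).mean()

# Training maps x (given y) to a Gaussian code in the forward direction;
# generation draws z ~ N(0, I) and applies the inverse, conditioned on y,
# matching "prediction and generation in the forward and backward processes".
flow = ConditionalAffineCoupling(dim=4, y_dim=2)
x, y = torch.randn(8, 4), torch.randn(8, 2)
loss = flow_loss(flow, x, y)
loss.backward()
x_samples = flow.inverse(torch.randn(8, 4), y)
```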
Related papers
- Reducing Oversmoothing through Informed Weight Initialization in Graph Neural Networks [16.745718346575202]
We propose a new weight initialization scheme (G-Init) that reduces oversmoothing, leading to strong results in node and graph classification tasks.
Our results indicate that G-Init reduces oversmoothing in deep GNNs, facilitating their effective use.
arXiv Detail & Related papers (2024-10-31T11:21:20Z)
- On the Initialization of Graph Neural Networks [10.153841274798829]
We analyze the variance of forward and backward propagation across Graph Neural Networks layers.
We propose a new method for Variance Instability Reduction within GNN Optimization (Virgo).
We conduct comprehensive experiments on 15 datasets to show that Virgo can lead to superior model performance.
arXiv Detail & Related papers (2023-12-05T09:55:49Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Variational models for signal processing with Graph Neural Networks [3.5939555573102853]
This paper is devoted to signal processing on point clouds by means of neural networks.
In this work, we investigate the use of variational models for such Graph Neural Networks to process signals on graphs for unsupervised learning.
arXiv Detail & Related papers (2021-03-30T13:31:11Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
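As a generic illustration of the binarization idea behind the last two entries (Binary Graph Neural Networks and BGN), the sketch below binarizes a layer's weights with the sign function and trains through it with a straight-through estimator. This is a standard recipe and an assumption here, not the exact strategy of either paper.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; straight-through gradient in the backward pass."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass gradients through only where |w| <= 1 (clipped straight-through).
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

class BinaryLinear(nn.Module):
    """Linear layer whose weights are binarized to +/-1 at forward time.

    Real-valued latent weights are kept for the optimizer to update;
    only the binarized copy is used in the forward computation.
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return x @ w_bin.t()

# In a GNN layer, the same trick would wrap the feature transform applied
# after neighborhood aggregation, e.g. h = adj_norm @ x; out = BinaryLinear(...)(h)
```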
This list is automatically generated from the titles and abstracts of the papers on this site.