Get Rid of Suspended Animation Problem: Deep Diffusive Neural Network on
Graph Semi-Supervised Classification
- URL: http://arxiv.org/abs/2001.07922v1
- Date: Wed, 22 Jan 2020 09:19:12 GMT
- Title: Get Rid of Suspended Animation Problem: Deep Diffusive Neural Network on
Graph Semi-Supervised Classification
- Authors: Jiawei Zhang
- Abstract summary: We propose a new graph neural network, namely DIFNET, for graph representation learning and node classification.
Extensive experiments are conducted to compare DIFNET against several state-of-the-art graph neural network models.
- Score: 10.879701971582502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing graph neural networks may suffer from the "suspended animation
problem" when the model architecture goes deep. Meanwhile, for some graph
learning scenarios, e.g., nodes with text/image attributes or graphs with
long-distance node correlations, deep graph neural networks will be necessary
for effective graph representation learning. In this paper, we propose a new
graph neural network, namely DIFNET (Graph Diffusive Neural Network), for graph
representation learning and node classification. DIFNET utilizes both neural
gates and graph residual learning for node hidden state modeling, and includes
an attention mechanism for node neighborhood information diffusion. Extensive
experiments are conducted in this paper to compare DIFNET against several
state-of-the-art graph neural network models. The experimental results
illustrate both the learning performance advantages and the effectiveness of
DIFNET, especially in addressing the "suspended animation problem".
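As a concrete illustration of the ingredients the abstract names, below is a minimal NumPy sketch of one gated, attention-based diffusion layer with a residual link back to the raw features. The gate design, tensor shapes, and attention form are illustrative assumptions, not DIFNET's actual specification.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d = 5, 8                          # 5 nodes, 8-dim hidden states
A = rng.random((n, n)) < 0.4         # random adjacency (stand-in input)
np.fill_diagonal(A, False)
X = rng.standard_normal((n, d))      # raw node features
H = X.copy()                         # initial hidden states

Wq, Wk = rng.standard_normal((2, d, d)) * 0.1
Wz, Wh = rng.standard_normal((2, 2 * d, d)) * 0.1

for layer in range(10):              # a depth where plain GCNs tend to stall
    # attention over graph neighbors -> diffused neighborhood message
    scores = (H @ Wq) @ (H @ Wk).T / np.sqrt(d)
    scores[~A] = -1e9                # mask non-neighbors
    M = softmax(scores) @ H
    # GRU-like neural gate decides how much of the message to absorb
    z = 1.0 / (1.0 + np.exp(-np.concatenate([H, M], axis=1) @ Wz))
    H_cand = np.tanh(np.concatenate([M, X], axis=1) @ Wh)
    # gated update plus a graph residual link back to the raw features X
    H = (1 - z) * H + z * H_cand + X
```

The residual term is the piece most directly aimed at the suspended animation problem: even at depth 10, the update never loses direct access to the input features X.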
Related papers
- Graph Neural Networks Provably Benefit from Structural Information: A
Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
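The linear-cost idea behind all-pair message passing can be sketched with positive random features plus Gumbel noise. NodeFormer's exact feature map and noise placement differ, so the map phi, the temperature tau, and noise applied on the keys below are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 1000, 16, 64               # nodes, feature dim, random features
X = rng.standard_normal((n, d))
tau = 0.5                            # Gumbel-Softmax temperature (assumed)
W = rng.standard_normal((m, d))      # random projections for the kernel

def phi(x):
    # positive random features whose inner products approximate exp(q.k / tau)
    return np.exp(x @ W.T / np.sqrt(tau)
                  - (x ** 2).sum(-1, keepdims=True) / (2 * tau)) / np.sqrt(m)

Q = K = V = X                        # untrained projections, for brevity
g = rng.gumbel(size=(n, 1))          # Gumbel perturbation, here on the keys
Kg = phi(K) * np.exp(g / tau)        # fold the noise into the kernel features

# all-pair aggregation in O(n*m*d) rather than O(n^2*d):
num = phi(Q) @ (Kg.T @ V)            # (n,m) @ (m,d)
den = phi(Q) @ Kg.sum(axis=0)        # per-node normalizer
H = num / den[:, None]               # attention-style update over ALL pairs
```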
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- A Graph Neural Network with Negative Message Passing for Graph Coloring [12.501032566933178]
We propose a graph neural network model for graph coloring, a representative class of heterophilous problems.
We introduce negative message passing into the proposed graph neural network for more effective information exchange.
A new loss function that takes the self-information of the nodes into account is proposed to accelerate the learning process.
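A toy sketch of the "negative" aggregation idea: each node is pushed away from its neighbors' average color assignment, and the two loss terms are monitored. The concrete update rule and the form of the self-information term are assumptions, not the paper's definitions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

rng = np.random.default_rng(0)
n, k = 8, 3                              # 8 nodes, 3 colors
A = rng.random((n, n)) < 0.3
A = np.triu(A, 1)
A = (A | A.T).astype(float)              # symmetric adjacency
deg = A.sum(1, keepdims=True).clip(1)
H = rng.standard_normal((n, k)) * 0.1    # color logits per node

for _ in range(20):
    # negative message passing: push each node AWAY from its neighbors'
    # average color assignment (adjacent nodes should differ in color)
    H = H - 0.5 * (A @ softmax(H)) / deg
    P = softmax(H)
    conflict = (A * (P @ P.T)).sum() / 2         # expected same-color edges
    self_info = -(P * np.log(P + 1e-9)).sum()    # confidence of assignments

print("edge conflicts:", round(float(conflict), 3),
      "entropy:", round(float(self_info), 3))
```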
arXiv Detail & Related papers (2023-01-26T15:08:42Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
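The unrolling template can be sketched as a fixed-parameter proximal gradient loop with a soft-threshold (L1) prox. In a GDN the step sizes and thresholds would be trainable and the mixture model richer, so the simplified objective below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
S_true = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
S_true = S_true + S_true.T                       # latent sparse graph
# observed graph = convolutional mixture of the latent one
O = 0.3 * np.eye(n) + S_true + 0.2 * (S_true @ S_true)

alpha, lam, T = 0.1, 0.05, 30    # step size, sparsity weight, unroll depth
S = np.zeros((n, n))
for t in range(T):               # unrolled, truncated proximal gradient
    grad = S - O                 # gradient of 0.5 * ||O - S||^2 (simplified)
    Z = S - alpha * grad
    # soft-threshold prox promotes a sparse latent-graph estimate
    S = np.sign(Z) * np.maximum(np.abs(Z) - alpha * lam, 0.0)
    np.fill_diagonal(S, 0.0)
```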
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Graph Neural Diffusion Networks for Semi-supervised Learning [6.376489604292251]
The Graph Convolutional Network (GCN) is a pioneering model for graph-based semi-supervised learning.
We propose a new graph neural network called GND-Nets (for Graph Neural Diffusion Networks) that exploits both local and global neighborhood information.
The adoption of neural networks makes neural diffusions adaptable to different datasets.
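A minimal sketch of combining local and global neighborhoods: stack increasing powers of a random-walk diffusion and fuse them with learned mixing weights. The stand-in softmax weights and the linear fusion are assumptions about the adaptive component:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 10, 4, 5                   # nodes, feature dim, diffusion steps
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
P = A / A.sum(1, keepdims=True)      # random-walk transition matrix
X = rng.standard_normal((n, d))

# stack diffusions of growing radius: X, PX, P^2 X, ... (local -> global)
diff = [X]
for _ in range(K):
    diff.append(P @ diff[-1])
Z = np.stack(diff)                   # (K+1, n, d)

# learned mixing weights (random stand-ins here) let the model choose
# how much local vs. global information each dataset needs
theta = rng.standard_normal(K + 1)
w = np.exp(theta) / np.exp(theta).sum()
H = np.tensordot(w, Z, axes=1)       # (n, d) fused multi-scale features
```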
arXiv Detail & Related papers (2022-01-24T14:07:56Z)
- Learning through structure: towards deep neuromorphic knowledge graph
embeddings [0.5906031288935515]
We propose a strategy to map deep graph learning architectures for knowledge graph reasoning to neuromorphic architectures.
Based on the insight that randomly initialized, untrained graph neural networks are able to preserve local graph structures, we compose a frozen neural network with shallow knowledge graph embedding models.
We show experimentally that, even on conventional computing hardware, this leads to a significant speedup and memory reduction while maintaining a competitive performance level.
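The composition can be sketched as a frozen, randomly initialized propagation step wrapped around trainable shallow embeddings; the TransE-style scoring below is an assumed choice, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, d = 50, 5, 16
E = rng.standard_normal((n_ent, d)) * 0.1     # shallow entity embeddings (trainable)
R = rng.standard_normal((n_rel, d)) * 0.1     # relation embeddings (trainable)
W = rng.standard_normal((d, d)) / np.sqrt(d)  # FROZEN random GNN weight

A = (rng.random((n_ent, n_ent)) < 0.1).astype(float)
deg = A.sum(1, keepdims=True).clip(1)

def frozen_gnn(E):
    # one untrained propagation step; a random W already preserves local
    # structure, so it never needs gradients (cheap, neuromorphic-friendly)
    return np.tanh((A @ E) / deg @ W) + E

def score(h, r, t):
    # TransE-style plausibility of triple (h, r, t) on enriched embeddings
    Z = frozen_gnn(E)
    return -np.linalg.norm(Z[h] + R[r] - Z[t])

print(score(0, 1, 2))                 # higher = more plausible
```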
arXiv Detail & Related papers (2021-09-21T18:01:04Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on growing graphs sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
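A toy PyTorch loop illustrating the growing-graph schedule: a polynomial graph filter, whose parameter count is independent of graph size, is trained on progressively larger graphs sampled from an assumed graphon. The graphon, target task, and size schedule are all illustrative assumptions:

```python
import torch

torch.manual_seed(0)
graphon = lambda u, v: 0.8 * torch.exp(-3 * (u - v).abs())  # assumed graphon

def sample_graph(n):
    u = torch.rand(n)
    A = (torch.rand(n, n) < graphon(u[:, None], u[None, :])).float().triu(1)
    return A + A.T

K = 3
theta = torch.nn.Parameter(torch.randn(K))      # size-independent filter taps
opt = torch.optim.SGD([theta], lr=1e-2)

for n in [64, 128, 256, 512]:                   # successively grow the graph
    A = sample_graph(n)
    A = A / A.sum(1, keepdim=True).clamp(min=1) # normalize for transferability
    x = torch.rand(n)
    target = A @ x                              # toy regression target (assumed)
    for step in range(50):
        h, Ax = torch.zeros(n), x
        for k in range(K):                      # polynomial graph filter
            Ax = A @ Ax
            h = h + theta[k] * Ax
        loss = ((h - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
```

Because theta's shape does not depend on n, the same parameters transfer as the sampled graph grows, which is the transferability property the paper exploits.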
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Learning Graph Representations [0.0]
Graph Neural Networks (GNNs) are efficient ways to get insight into large dynamic graph datasets.
In this paper, we discuss graph convolutional neural networks, graph autoencoders, and spatial-temporal graph neural networks.
arXiv Detail & Related papers (2021-02-03T12:07:55Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
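A rough sketch of the sort-then-convolve idea for a single node; the attention scoring form, kernel size, and single-output-channel convolution are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, kernel = 9, 6, 3
A = rng.random((n, n)) < 0.5
np.fill_diagonal(A, False)
X = rng.standard_normal((n, d))
a = rng.standard_normal(d)              # attention parameter (assumed form)
Wc = rng.standard_normal((kernel, d))   # 1D conv filter over the sequence

def node2seq_like(v):
    nbrs = np.flatnonzero(A[v])
    scores = X[nbrs] @ (X[v] * a)       # attention score per neighbor
    seq = X[nbrs[np.argsort(-scores)]]  # neighbors sorted into a sequence
    # sliding 1D convolution: each rank in the sorted sequence meets its
    # own learned weights, i.e. explicitly trainable per-neighbor weighting
    return np.array([(Wc * seq[i:i + kernel]).sum()
                     for i in range(len(seq) - kernel + 1)])

print(node2seq_like(0))                 # assumes node 0 has >= kernel neighbors
```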
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
- Graph Structure of Neural Networks [104.33754950606298]
We show how the graph structure of neural networks affects their predictive performance.
A "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.
Top-performing neural networks have graph structure surprisingly similar to those of real biological neural networks.
arXiv Detail & Related papers (2020-07-13T17:59:31Z)
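The paper summarizes each relational graph by its clustering coefficient C and average path length L, and reports a performance "sweet spot" in the (C, L) plane. A quick networkx scan over Watts-Strogatz graphs (an assumed stand-in generator; the mapping from a relational graph to an actual neural network is omitted here) shows how those two measures trade off:

```python
import networkx as nx

# Watts-Strogatz graphs trace out the (C, L) plane as the rewiring
# probability p varies: high clustering / long paths at p=0, low
# clustering / short paths at p=1
for p in [0.0, 0.1, 0.3, 0.6, 1.0]:
    G = nx.connected_watts_strogatz_graph(64, k=8, p=p, seed=0)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    print(f"p={p:.1f}  C={C:.2f}  L={L:.2f}")
```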