EEGNN: Edge Enhanced Graph Neural Networks
- URL: http://arxiv.org/abs/2208.06322v1
- Date: Fri, 12 Aug 2022 15:24:55 GMT
- Title: EEGNN: Edge Enhanced Graph Neural Networks
- Authors: Yirui Liu, Xinghao Qiao, Liying Wang and Jessica Lam
- Abstract summary: We propose a new explanation for the deteriorated performance of deep GNNs: mis-simplification.
We show that such simplification can reduce the potential of message-passing layers to capture the structural information of graphs.
EEGNN uses the structural information extracted from the proposed Dirichlet mixture Poisson graph model to improve the performance of various deep message-passing GNNs.
- Score: 1.0246596695310175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training deep graph neural networks (GNNs) poses a challenging task, as GNN performance may degrade as the number of hidden message-passing layers grows. The literature has focused on over-smoothing and under-reaching to explain the performance deterioration of deep GNNs. In this paper, we propose a new explanation for this phenomenon, mis-simplification: mistakenly simplifying graphs by discarding self-loops and forcing edges to be unweighted. We show that such simplification can reduce the potential of message-passing layers to capture the structural information of graphs. In view of this, we propose a new framework, Edge Enhanced Graph Neural Network (EEGNN). EEGNN uses the structural information extracted from the proposed Dirichlet mixture Poisson graph model, a Bayesian nonparametric model for graphs, to improve the performance of various deep message-passing GNNs. Experiments on different datasets show that our method achieves a considerable performance increase over baselines.
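To make the mis-simplification point concrete, the sketch below (a toy illustration, not the authors' implementation; the Dirichlet mixture Poisson model EEGNN fits to recover edge structure is omitted) contrasts one GCN-style propagation step on a simplified graph with the same step on the original multigraph, where self-loops and edge multiplicities are kept as weights:

```python
# Minimal sketch of the "mis-simplification" idea: the same GCN-style
# propagation run on (a) a simplified graph (no self-loops, binary edges)
# and (b) the original multigraph (self-loops kept, edge multiplicities
# kept as weights). Not the authors' code.
import torch

def gcn_propagate(A: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
    """One symmetric-normalised message-passing step: D^{-1/2} A D^{-1/2} X."""
    deg = A.sum(dim=1).clamp(min=1e-12)
    d_inv_sqrt = deg.pow(-0.5)
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return A_norm @ X

# A tiny multigraph: entry (i, j) counts parallel edges; the diagonal
# stores self-loops. Simplification throws this structure away.
A_multi = torch.tensor([[2., 3., 0.],
                        [3., 0., 1.],
                        [0., 1., 1.]])
A_simple = (A_multi > 0).float()          # binarise edge weights
A_simple.fill_diagonal_(0.0)              # drop self-loops

X = torch.eye(3)                          # one-hot node features
print(gcn_propagate(A_simple, X))         # propagation on simplified graph
print(gcn_propagate(A_multi, X))          # propagation keeping structure
```

The two outputs differ for every node, which is the sense in which simplification discards structural signal that message passing could otherwise exploit.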
Related papers
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
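As a rough illustration of decomposition-based attribution, the sketch below (an assumption-laden simplification, not DEGREE itself) uses a purely linear SGC-style model, for which the prediction at a target node splits exactly into per-source-node contributions; DEGREE derives analogous decomposition rules for nonlinear GNN layers:

```python
# For a linear GNN, logits = P^k X W, so the target node's prediction
# decomposes exactly over source nodes. Illustration only.
import torch

torch.manual_seed(0)
n, d, c, k = 4, 5, 3, 2                   # nodes, features, classes, hops
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
A_hat = A + torch.eye(n)                  # add self-loops
deg = A_hat.sum(1)
P = deg.pow(-0.5)[:, None] * A_hat * deg.pow(-0.5)[None, :]

X = torch.randn(n, d)
W = torch.randn(d, c)

Pk = torch.linalg.matrix_power(P, k)      # k-hop propagation matrix
logits = Pk @ X @ W                       # full model output

target = 0
# Contribution of each source node i to the target's logits:
contrib = Pk[target][:, None] * (X @ W)   # shape (n, c)
print(contrib.sum(0), logits[target])     # identical: decomposition is exact
```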
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Towards Training GNNs using Explanation Directed Message Passing [4.014524824655107]
We introduce EXPASS (EXplainable message PASSing), a novel explanation-directed neural message passing framework for GNNs.
We show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy.
Our empirical results show that graph embeddings learned using EXPASS improve the predictive performance and alleviate the oversmoothing problems of GNNs.
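The Dirichlet energy referred to above is straightforward to compute; the toy below (an illustration, not the EXPASS code) shows it decaying as repeated lazy random-walk averaging oversmooths node features:

```python
# Dirichlet energy E(X) = 1/2 * sum_{i,j} A_ij ||x_i - x_j||^2 = tr(X^T L X).
# When it decays toward zero across layers, embeddings are collapsing
# to the same point (oversmoothing).
import torch

def dirichlet_energy(X: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    L = torch.diag(A.sum(1)) - A          # unnormalised graph Laplacian
    return torch.trace(X.T @ L @ X)

A = torch.tensor([[0., 1., 1.],
                  [1., 0., 0.],
                  [1., 0., 0.]])
X = torch.randn(3, 4)
# Lazy random walk: repeated averaging drives the energy toward zero.
P = 0.5 * (torch.eye(3) + torch.diag(A.sum(1).pow(-1)) @ A)
for layer in range(5):
    print(layer, dirichlet_energy(X, A).item())
    X = P @ X
```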
arXiv Detail & Related papers (2022-11-30T04:31:26Z)
- Spiking Variational Graph Auto-Encoders for Efficient Graph Representation Learning [10.65760757021534]
We propose an SNN-based deep generative method, namely the Spiking Variational Graph Auto-Encoders (S-VGAE) for efficient graph representation learning.
We conduct link prediction experiments on multiple benchmark graph datasets, and the results demonstrate that our model consumes significantly lower energy with the performances superior or comparable to other ANN- and SNN-based methods for graph representation learning.
arXiv Detail & Related papers (2022-10-24T12:54:41Z)
- Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
Our framework is based on gating the output of GNN layers with a mechanism for multi-rate flow of message passing information across nodes of the underlying graph.
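A hedged, simplified reading of that gating rule (the auxiliary map F and the exact gate construction here are assumptions, not the reference implementation):

```python
# Each layer's output is gated by a per-node, per-channel rate built from
# local "graph gradients", so channels whose neighbourhood differences
# vanish stop updating. Simplified sketch, not the paper's code.
import torch

def g2_layer(X, A, F, p=2.0):
    """X_next = (1 - tau) * X + tau * F(X), with tau from graph gradients."""
    Y = torch.tanh(F(X))                  # auxiliary prediction
    # Graph gradient magnitude per node/channel: sum_j A_ij |y_j - y_i|^p
    diff = (Y[None, :, :] - Y[:, None, :]).abs().pow(p)   # (n, n, d)
    grad = torch.einsum('ij,ijd->id', A, diff)
    tau = torch.tanh(grad)                # multi-rate gate in [0, 1)
    return (1.0 - tau) * X + tau * F(X)

n, d = 5, 8
A = (torch.rand(n, n) < 0.4).float()
A = ((A + A.T) > 0).float().fill_diagonal_(0)  # symmetric, no self-loops
W = torch.randn(d, d) / d**0.5
F = lambda X: torch.relu((A + torch.eye(n)) @ X @ W)  # any message-passing map
X = g2_layer(torch.randn(n, d), A, F)
print(X.shape)
```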
arXiv Detail & Related papers (2022-10-02T13:19:48Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs the ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
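A minimal sketch of the one-scalar-per-relation idea (the layer details here are assumptions rather than the paper's exact architecture):

```python
# A homogeneous GCN-style layer applied to a heterogeneous graph, with one
# learnable scalar per edge type (plus one for self-loops) weighting each
# relation's adjacency. Illustrative only.
import torch
import torch.nn as nn

class REGNNLayer(nn.Module):
    def __init__(self, num_relations: int, d_in: int, d_out: int):
        super().__init__()
        self.rel_weight = nn.Parameter(torch.ones(num_relations))  # one scalar per relation
        self.self_weight = nn.Parameter(torch.ones(1))             # self-loop importance
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, adjs: list, X: torch.Tensor) -> torch.Tensor:
        n = X.shape[0]
        # Combine relation adjacencies with learned importances.
        A = self.self_weight * torch.eye(n)
        for r, A_r in enumerate(adjs):
            A = A + torch.relu(self.rel_weight[r]) * A_r
        deg = A.sum(1).clamp(min=1e-12)
        return torch.relu((A / deg[:, None]) @ self.lin(X))

n, d = 6, 4
adjs = [(torch.rand(n, n) < 0.3).float() for _ in range(3)]  # 3 edge types
layer = REGNNLayer(num_relations=3, d_in=d, d_out=d)
print(layer(adjs, torch.randn(n, d)).shape)
```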
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- Training Robust Graph Neural Networks with Topology Adaptive Edge Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
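The paper derives its edge keep-probabilities from graph-spectral information; the toy below substitutes an assumed degree-based score simply to illustrate adaptive, non-uniform edge dropping:

```python
# Toy topology-adaptive edge dropping: instead of dropping edges uniformly
# at random (plain DropEdge), keep probabilities are tied to a cheap
# structural score. The degree-based proxy here is an assumption for
# illustration, not the paper's spectrum-based weighting.
import torch

def adaptive_drop_edges(A: torch.Tensor, base_keep: float = 0.9) -> torch.Tensor:
    deg = A.sum(1)
    score = deg[:, None] * deg[None, :]           # endpoint-degree product per edge
    norm = score / score.max().clamp(min=1e-12)
    keep_prob = base_keep * (1.0 - 0.5 * norm)    # high-degree pairs dropped more
    mask = (torch.rand_like(A) < keep_prob).float()
    mask = torch.triu(mask, 1)
    mask = mask + mask.T                          # one symmetric decision per edge
    return A * mask

A = (torch.rand(6, 6) < 0.5).float()
A = ((A + A.T) > 0).float().fill_diagonal_(0)
print(adaptive_drop_edges(A).sum(), A.sum())      # fewer edges after dropping
```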
arXiv Detail & Related papers (2021-06-05T13:20:36Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can significantly improve the performance of GNNs, with larger gains on noisier datasets.
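A simplified sketch of that recipe, with a plain sigmoid mask and an L1-style penalty standing in for the paper's hard-concrete relaxation and low-rank regulariser:

```python
# A small parameterised network scores every edge, the scores form a soft
# mask on the adjacency, and a penalty on the mask's sum pushes
# task-irrelevant edges toward zero. Simplified illustration only.
import torch
import torch.nn as nn

class EdgeDenoiser(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * d, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, A: torch.Tensor, X: torch.Tensor):
        n = X.shape[0]
        # Score each (i, j) pair from the endpoint features.
        pair = torch.cat([X[:, None, :].expand(n, n, -1),
                          X[None, :, :].expand(n, n, -1)], dim=-1)
        mask = torch.sigmoid(self.scorer(pair).squeeze(-1)) * A  # soft edge mask
        sparsity_penalty = mask.sum()     # penalise weight of kept edges
        return mask, sparsity_penalty

n, d = 5, 4
A = (torch.rand(n, n) < 0.5).float()
X = torch.randn(n, d)
mask, penalty = EdgeDenoiser(d)(A, X)
loss = 1e-2 * penalty                     # + task loss in practice
loss.backward()                           # gradients reach the edge scorer
```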
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- Data-Driven Learning of Geometric Scattering Networks [74.3283600072357]
We propose a new graph neural network (GNN) module based on relaxations of recently proposed geometric scattering transforms.
Our learnable geometric scattering (LEGS) module enables adaptive tuning of the wavelets to encourage band-pass features to emerge in learned representations.
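For reference, the classical (non-learned) diffusion wavelets that LEGS relaxes can be written as differences of lazy random-walk powers; the dyadic scales below are fixed, whereas LEGS learns how to select and combine them:

```python
# Band-pass diffusion wavelets Psi_j = P^{2^{j-1}} - P^{2^j}, built from a
# lazy random walk P. The fixed dyadic scales are the classical starting
# point that LEGS makes learnable; sketch only.
import torch

def diffusion_wavelets(A: torch.Tensor, J: int):
    deg = A.sum(1).clamp(min=1e-12)
    P = 0.5 * (torch.eye(A.shape[0]) + A / deg[:, None])   # lazy random walk
    powers = [torch.linalg.matrix_power(P, 2 ** j) for j in range(J + 1)]
    # Psi_j captures variation between diffusion scales 2^{j-1} and 2^j.
    return [powers[j - 1] - powers[j] for j in range(1, J + 1)]

A = (torch.rand(7, 7) < 0.4).float()
A = ((A + A.T) > 0).float().fill_diagonal_(0)
X = torch.randn(7, 3)
feats = [torch.abs(Psi @ X) for Psi in diffusion_wavelets(A, J=3)]  # first-order scattering
print([f.shape for f in feats])
```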
arXiv Detail & Related papers (2020-10-06T01:20:27Z)