Implicit vs Unfolded Graph Neural Networks
- URL: http://arxiv.org/abs/2111.06592v1
- Date: Fri, 12 Nov 2021 07:49:16 GMT
- Title: Implicit vs Unfolded Graph Neural Networks
- Authors: Yongyi Yang, Yangkun Wang, Zengfeng Huang, David Wipf
- Abstract summary: Graph neural networks (GNNs) sometimes struggle to maintain a healthy balance between modeling long-range dependencies and avoiding unintended consequences.
Two separate strategies have recently been proposed, namely implicit and unfolded GNNs.
We provide empirical head-to-head comparisons across a variety of synthetic and public real-world benchmarks.
- Score: 18.084842625063082
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has been observed that graph neural networks (GNNs) sometimes
struggle to maintain a healthy balance between modeling long-range dependencies
across nodes and avoiding unintended consequences such as oversmoothed node
representations. To address this issue (among other things), two separate
strategies have recently been proposed, namely implicit and unfolded GNNs. The
former treats node representations as the fixed points of a deep equilibrium
model that can efficiently facilitate arbitrary implicit propagation across the
graph with a fixed memory footprint. In contrast, the latter treats graph
propagation as unfolded descent iterations applied to some graph-regularized
energy function. While motivated differently, in this paper we carefully
elucidate the similarities and differences of these methods, quantifying
explicit situations where the solutions they produce may actually be
equivalent and others where behavior diverges. This includes analysis of
convergence, representational capacity, and interpretability. We also provide
empirical head-to-head comparisons across a variety of synthetic and public
real-world benchmarks.
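The two views contrasted in the abstract can be sketched in a minimal linear setting (toy 5-node cycle graph, assumed values throughout, not the paper's actual models): an unfolded GNN runs gradient-descent layers on a graph-regularized energy E(Y) = ||Y - F||^2 + lam * tr(Y^T L Y), while an implicit GNN iterates a fixed-point equation to equilibrium. In this linear case both converge to the same solution Y* = (I + lam*L)^{-1} F.

```python
import numpy as np

n, d = 5, 3
A = np.zeros((n, n))
for i in range(n):                      # 5-node cycle graph
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.eye(n) - A / 2.0                 # normalized Laplacian (all degrees = 2)
F = np.random.default_rng(0).standard_normal((n, d))  # base node features f(X)
lam = 0.4                               # regularization weight (assumed value)

# Unfolded view: explicit gradient-descent steps on the energy
Y = F.copy()
eta = 0.1
for _ in range(500):
    Y = Y - eta * (2.0 * (Y - F) + 2.0 * lam * (L @ Y))

# Implicit view: Picard iteration on the equilibrium equation Y = F - lam*L@Y
# (a contraction here, since the normalized Laplacian's eigenvalues lie in
# [0, 2] and lam < 0.5)
Z = np.zeros((n, d))
for _ in range(500):
    Z = F - lam * (L @ Z)

# Shared closed-form solution of both procedures
Y_star = np.linalg.solve(np.eye(n) + lam * L, F)
```

Both iterations recover `Y_star`, illustrating the kind of equivalence the paper formalizes; the divergent behavior it also studies arises once nonlinearities and general weight matrices enter.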
Related papers
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
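For context, a plain (non-kernelized) Gumbel-Softmax relaxation can be sketched as follows; NodeFormer's kernelized variant is more involved, and the temperature `tau` and shapes here are assumed for illustration.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable relaxation of categorical sampling over each row."""
    rng = rng or np.random.default_rng(0)
    g = -np.log(-np.log(rng.random(logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)           # numerically stable softmax
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# Example: relax edge selection among 4 candidate neighbors for 2 nodes
probs = gumbel_softmax(np.zeros((2, 4)), tau=0.3)
```

Lower `tau` pushes each row closer to a one-hot selection while keeping the operation differentiable.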
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - A Fractional Graph Laplacian Approach to Oversmoothing [15.795926248847026]
We generalize the concept of oversmoothing from undirected to directed graphs.
We propose fractional graph Laplacian neural ODEs, which describe non-local dynamics.
Our method is more flexible with respect to the convergence of the graph's Dirichlet energy, thereby mitigating oversmoothing.
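The Dirichlet energy referenced above, tr(X^T L X), is a standard quantitative measure of oversmoothing: under repeated propagation by a normalized adjacency it decays toward zero on a connected non-bipartite graph. A minimal sketch (toy 5-node cycle, assumed values, not the paper's fractional-Laplacian method):

```python
import numpy as np

def dirichlet_energy(X, L):
    """tr(X^T L X): measures how much features vary across edges."""
    return float(np.trace(X.T @ L @ X))

n, d = 5, 3
A = np.zeros((n, n))
for i in range(n):                   # 5-node cycle graph
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
P = A / 2.0                          # normalized adjacency (all degrees = 2)
L = np.eye(n) - P                    # normalized Laplacian
X = np.random.default_rng(0).standard_normal((n, d))

energies = [dirichlet_energy(X, L)]
for _ in range(10):
    X = P @ X                        # plain linear propagation, no nonlinearity
    energies.append(dirichlet_energy(X, L))
# energies is non-increasing and shrinks toward 0 as features homogenize
```

Methods like the fractional-Laplacian ODEs above aim precisely to avoid this forced convergence of the energy.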
arXiv Detail & Related papers (2023-05-22T14:52:33Z) - A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks [33.35609077417775]
We characterize the mechanism behind the phenomenon via a non-asymptotic analysis.
We show that oversmoothing happens once the mixing effect starts to dominate the denoising effect.
Our results suggest that while PPR mitigates oversmoothing at deeper layers, PPR-based architectures still achieve their best performance at a shallow depth.
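The PPR-based propagation discussed above can be sketched in the style of APPNP (toy graph and `alpha` are assumed): base predictions H are diffused with teleport probability `alpha`, which keeps every node anchored to its own features and thereby limits oversmoothing even at large depth.

```python
import numpy as np

n, d = 5, 3
A = np.zeros((n, n))
for i in range(n):                   # 5-node cycle graph
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
P = A / 2.0                          # normalized adjacency (all degrees = 2)
alpha = 0.15                         # teleport probability (assumed value)
H = np.random.default_rng(1).standard_normal((n, d))  # e.g. MLP outputs

# PPR power iteration: Z_{k+1} = (1 - alpha) * P @ Z_k + alpha * H
Z = H.copy()
for _ in range(100):
    Z = (1.0 - alpha) * (P @ Z) + alpha * H

# Closed-form PPR limit: Z* = alpha * (I - (1 - alpha) P)^{-1} H
Z_star = alpha * np.linalg.solve(np.eye(n) - (1.0 - alpha) * P, H)
```

Because `alpha > 0`, the limit `Z_star` never collapses to a constant vector, consistent with the mitigation claim, even though the finding above is that shallow depths still perform best.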
arXiv Detail & Related papers (2022-12-21T00:33:59Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Relation Embedding based Graph Neural Networks for Handling
Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework to make the homogeneous GNNs have adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
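A hypothetical sketch of the one-parameter-per-relation idea as summarized above (the exact RE-GNN formulation may differ; all names and values here are assumptions): a learnable scalar per relation, plus one for self-loops, scales each relation's normalized adjacency before a shared homogeneous-GNN aggregation.

```python
import numpy as np

n, d, n_rel = 4, 3, 2
rng = np.random.default_rng(2)
# One adjacency matrix per edge-type relation (random toy graphs)
A_rel = [(rng.random((n, n)) < 0.5).astype(float) for _ in range(n_rel)]
w = np.array([0.7, 0.2])          # per-relation importance scalars (assumed)
w_self = 0.5                      # self-loop importance scalar (assumed)
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, d))   # shared weight matrix, as in a homogeneous GNN

agg = w_self * X                  # self-loop term
for r in range(n_rel):
    deg = np.maximum(A_rel[r].sum(1, keepdims=True), 1.0)
    agg = agg + w[r] * (A_rel[r] / deg) @ X   # row-normalized propagation
H = np.tanh(agg @ W)              # one illustrative layer
```

The point of the design, per the summary, is that only `w` and `w_self` are relation-specific, so the parameter count grows with the number of relations rather than with full per-relation weight matrices.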
arXiv Detail & Related papers (2022-09-23T05:24:18Z) - Similarity-aware Positive Instance Sampling for Graph Contrastive
Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z) - Descent Steps of a Relation-Aware Energy Produce Heterogeneous Graph
Neural Networks [25.59092732148598]
Heterogeneous graph neural networks (GNNs) achieve strong performance on node classification tasks in a semi-supervised learning setting.
We propose a novel heterogeneous GNN architecture in which layers are derived from optimization steps that descend a novel relation-aware energy function.
arXiv Detail & Related papers (2022-06-22T13:48:08Z) - Graphon-aided Joint Estimation of Multiple Graphs [24.077455621015552]
We consider the problem of estimating the topology of multiple networks from nodal observations.
We adopt a graphon as our random graph model, which is a nonparametric model from which graphs of potentially different sizes can be drawn.
arXiv Detail & Related papers (2022-02-11T15:20:44Z) - Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised
Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z) - On the Importance of Sampling in Learning Graph Convolutional Networks [13.713485304798368]
Graph Convolutional Networks (GCNs) have achieved impressive empirical advancement across a wide variety of graph-related applications.
Despite their success, training GCNs on large graphs suffers from computational and memory issues.
We describe and analyze a general doubly variance reduction schema that can accelerate any sampling method under the memory budget.
arXiv Detail & Related papers (2021-03-03T21:31:23Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework to such non-trivial ERGs that result in dyadic independence (i.e., edge independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.