Rethinking Graph Neural Networks for the Graph Coloring Problem
- URL: http://arxiv.org/abs/2208.06975v1
- Date: Mon, 15 Aug 2022 02:24:26 GMT
- Title: Rethinking Graph Neural Networks for the Graph Coloring Problem
- Authors: Wei Li, Ruxuan Li, Yuzhe Ma, Siu On Chan, David Pan, Bei Yu
- Abstract summary: We observe that state-of-the-art GNNs are less successful in the graph coloring problem.
In this paper, we focus on the aggregation-combine GNNs (AC-GNNs), a popular class of GNNs.
We show that any AC-GNN is a local coloring method, and any local coloring method is non-optimal by exploring the limits of local methods over sparse random graphs.
- Score: 14.102597635893083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abstract: Graph coloring, a classical and critical NP-hard problem, is the problem of
assigning colors to nodes so that connected nodes receive different colors as far as
possible. However, we observe
that state-of-the-art GNNs are less successful in the graph coloring problem.
We analyze the reasons from two perspectives. First, most GNNs fail to
generalize the task from homophily to heterophily, i.e., to graphs where
connected nodes are assigned different colors. Second, GNNs are bounded by the
network depth, which can make them local methods, and local methods have been
demonstrated to be non-optimal in the Maximum Independent Set (MIS) problem. In
this paper, we focus on the aggregation-combine GNNs (AC-GNNs), a popular class
of GNNs. We first define the power of AC-GNNs in the coloring problem as the
capability to assign different colors to adjacent nodes. This definition differs
from the previous one, which is based on the assumption of homophily. We identify node
pairs that AC-GNNs fail to discriminate. Furthermore, we show that any AC-GNN
is a local coloring method, and any local coloring method is non-optimal by
exploring the limits of local methods over sparse random graphs, thereby
demonstrating the non-optimality of AC-GNNs due to their local property. We then
prove a positive correlation between a model's depth and its coloring power.
Moreover, we discuss the color equivariance of graphs to tackle practical
constraints such as pre-fixing constraints. Following the discussions above, we
summarize a series of rules that make a GNN color-equivariant and powerful in
the coloring problem. We then propose a simple
AC-GNN variation satisfying these rules. We empirically validate our
theoretical findings and demonstrate that our simple model substantially
outperforms state-of-the-art heuristic algorithms in both quality and runtime.
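Below is a minimal, hypothetical sketch of an aggregation-combine layer, together with a numerical check of the locality property the abstract appeals to: with k layers, a node's output can only depend on its k-hop neighborhood. This is an illustration, not the authors' model; the sum aggregator, tanh combine, and all dimensions are assumptions.

```python
# Sketch of an AC-GNN layer and a locality check (assumed design, not the paper's).
import numpy as np

def ac_gnn_layer(H, A, W_self, W_agg):
    """One AC-GNN layer: AGGREGATE neighbor states (a sum, via A @ H),
    then COMBINE with each node's own state. H is (n_nodes, d)."""
    return np.tanh(H @ W_self + (A @ H) @ W_agg)

def run(X, A, weights):
    H = X
    for W_self, W_agg in weights:
        H = ac_gnn_layer(H, A, W_self, W_agg)
    return H

# Path graph 0-1-2-3-4-5.
n, d = 6, 4
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))
weights = [(rng.normal(size=(d, d)), rng.normal(size=(d, d))) for _ in range(2)]

# Locality: with 2 layers, node 0's output is a function of its 2-hop
# ball {0, 1, 2} only, so perturbing node 5 cannot change it.
X_pert = X.copy()
X_pert[5] += 10.0
out_a, out_b = run(X, A, weights), run(X_pert, A, weights)
print(np.allclose(out_a[0], out_b[0]))  # True:  node 0 never "sees" node 5
print(np.allclose(out_a[4], out_b[4]))  # False: node 4 is within 2 hops of node 5
```

A coloring head reading each node's color off these states would therefore be a local coloring method in the abstract's sense: the assigned color is determined by a bounded-radius neighborhood.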
Related papers
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z) - Implicit Graph Neural Diffusion Networks: Convergence, Generalization, and Over-Smoothing [7.984586585987328]
Implicit Graph Neural Networks (GNNs) have achieved significant success in addressing graph learning problems.
We introduce a geometric framework for designing implicit graph diffusion layers based on a parameterized graph Laplacian operator.
We show how implicit GNN layers can be viewed as the fixed-point equation of a Dirichlet energy minimization problem (a small numerical sketch of this view follows the list below).
arXiv Detail & Related papers (2023-08-07T05:22:33Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains, and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z) - Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z) - Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Networks (GNNs) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
arXiv Detail & Related papers (2021-06-12T07:43:51Z) - On the Universality of Graph Neural Networks on Large Random Graphs [22.720758067273195]
We study the approximation power of Graph Neural Networks (GNNs) on latent position random graphs.
In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs.
We show that c-SGNNs, the continuous limits of structural GNNs, are strictly more powerful than c-GNNs in the continuous limit.
arXiv Detail & Related papers (2021-05-27T12:52:36Z) - Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of techniques for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z) - Generalization and Representational Limits of Graph Neural Networks [46.20253808402385]
We prove that several important graph properties cannot be computed by graph neural networks (GNNs) that rely entirely on local information.
We provide the first data dependent generalization bounds for message passing GNNs.
Our bounds are much tighter than existing VC-dimension based guarantees for GNNs, and are comparable to Rademacher bounds for recurrent neural networks.
arXiv Detail & Related papers (2020-02-14T18:10:14Z) - Random Features Strengthen Graph Neural Networks [40.60905158071766]
Graph neural networks (GNNs) are powerful machine learning models for various graph learning tasks.
In this paper, we demonstrate that GNNs become powerful just by adding a random feature to each node.
We show that the addition of random features enables GNNs to solve various problems that normal GNNs, including graph convolutional networks (GCNs) and graph isomorphism networks (GINs), cannot solve (a minimal sketch of this augmentation also follows the list below).
arXiv Detail & Related papers (2020-02-08T12:47:29Z)
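As promised above, here is a small numerical sketch of the fixed-point view named in the "Implicit Graph Neural Diffusion Networks" entry. The quadratic energy, the fidelity term, and the value of `lam` are assumptions for illustration, not the paper's construction: minimizing E(H) = ||H - X||_F^2 + lam * tr(H^T L H) gives the linear fixed-point equation (I + lam*L) H = X, which a damped iteration can solve.

```python
# Hypothetical check: an implicit (fixed-point) layer as the minimizer of a
# Dirichlet energy plus a fidelity term. Energy form and lam are assumptions.
import numpy as np

# Path graph on 4 nodes and its combinatorial Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))   # input node features
lam = 0.5                     # smoothing strength (assumed)

# Setting the gradient 2(H - X) + 2*lam*L@H to zero gives H = X - lam*L@H,
# i.e. H* = (I + lam*L)^{-1} X. Solve it by damped fixed-point iteration.
H = X.copy()
for _ in range(200):
    H = 0.5 * H + 0.5 * (X - lam * L @ H)

H_closed = np.linalg.solve(np.eye(4) + lam * L, X)
print(np.allclose(H, H_closed, atol=1e-6))  # True: iteration hits the minimizer
```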
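And a minimal sketch of the random-feature augmentation from the last entry. The Gaussian distribution, the number of extra dimensions, and the concatenation are assumptions; the paper's exact construction may differ.

```python
# Sketch: append i.i.d. random features so structurally identical nodes
# receive distinct inputs (assumed details, not the paper's construction).
import numpy as np

def add_random_features(X, rng, extra_dims=1):
    """Concatenate `extra_dims` random features to node matrix X (n, d)."""
    r = rng.normal(size=(X.shape[0], extra_dims))
    return np.concatenate([X, r], axis=1)

rng = np.random.default_rng(42)
X = np.ones((4, 3))                  # 4 nodes a message-passing GNN cannot tell apart
X_aug = add_random_features(X, rng)  # rows now differ almost surely
print(X_aug.shape)                   # (4, 4)
```

The extra coordinate breaks the symmetry that forces message-passing GNNs to assign identical states to nodes with identical neighborhoods.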