Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited
- URL: http://arxiv.org/abs/2202.03580v5
- Date: Tue, 12 Mar 2024 09:27:38 GMT
- Title: Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited
- Authors: Mingguo He, Zhewei Wei, Ji-Rong Wen
- Abstract summary: ChebNet is one of the early attempts to approximate the spectral graph convolutions using Chebyshev polynomials.
GCN simplifies ChebNet by utilizing only the first two Chebyshev polynomials while still outperforming it on real-world datasets.
We propose ChebNetII, a new GNN model based on Chebyshev interpolation, which enhances the original Chebyshev polynomial approximation while reducing the Runge phenomenon.
- Score: 80.29747781984135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing spectral convolutional networks is a challenging problem in graph
learning. ChebNet, one of the early attempts, approximates the spectral graph
convolutions using Chebyshev polynomials. GCN simplifies ChebNet by utilizing
only the first two Chebyshev polynomials while still outperforming it on
real-world datasets. GPR-GNN and BernNet demonstrate that the Monomial and
Bernstein bases also outperform the Chebyshev basis in terms of learning the
spectral graph convolutions. Such conclusions are counter-intuitive in the
field of approximation theory, where it is established that the Chebyshev
polynomial achieves the optimal convergence rate for approximating a function.
In this paper, we revisit the problem of approximating the spectral graph
convolutions with Chebyshev polynomials. We show that ChebNet's inferior
performance is primarily due to illegal coefficients learnt by ChebNet when
approximating analytic filter functions, which leads to over-fitting. We then
propose ChebNetII, a new GNN model based on Chebyshev interpolation, which
enhances the original Chebyshev polynomial approximation while reducing the
Runge phenomenon. We conducted an extensive experimental study to demonstrate
that ChebNetII can learn arbitrary graph convolutions and achieve superior
performance in both full- and semi-supervised node classification tasks. Most
notably, we scale ChebNetII to the billion-scale graph ogbn-papers100M, showing that
spectral-based GNNs have superior performance. Our code is available at
https://github.com/ivam-he/ChebNetII.
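
To make the abstract's two ideas concrete, below is a minimal NumPy sketch, not the authors' implementation: a ChebNet-style filter that applies a truncated Chebyshev expansion of the rescaled Laplacian to a graph signal, and a ChebNetII-style reparameterization that learns the filter's values at the Chebyshev nodes and recovers the expansion coefficients via standard Chebyshev interpolation, the device that suppresses the Runge phenomenon. The function names are ours, and we assume lambda_max of the normalized Laplacian is approximately 2.

```python
import numpy as np

def chebyshev_filter(L, x, coeffs):
    """ChebNet-style filtering: y = sum_k w_k T_k(L_hat) x.

    L is the symmetric normalized Laplacian; L_hat = L - I rescales its
    spectrum from [0, 2] into [-1, 1], the domain of Chebyshev polynomials
    (this assumes lambda_max ~= 2). Requires at least two coefficients.
    """
    L_hat = L - np.eye(L.shape[0])
    t_prev, t_curr = x, L_hat @ x              # T_0(L_hat) x and T_1(L_hat) x
    out = coeffs[0] * t_prev + coeffs[1] * t_curr
    for w in coeffs[2:]:                       # recurrence: T_k = 2 L_hat T_{k-1} - T_{k-2}
        t_prev, t_curr = t_curr, 2.0 * (L_hat @ t_curr) - t_prev
        out = out + w * t_curr
    return out

def interpolated_coeffs(gamma):
    """ChebNetII-style reparameterization, as we read it from the abstract:
    learn the filter's values gamma_j at the K+1 Chebyshev nodes and recover
    the expansion coefficients w_k by standard Chebyshev interpolation."""
    K = len(gamma) - 1
    nodes = np.cos((np.arange(K + 1) + 0.5) * np.pi / (K + 1))  # Chebyshev nodes in (-1, 1)
    # T_k(x) = cos(k * arccos(x)) on [-1, 1]
    w = np.array([(2.0 / (K + 1)) * np.sum(gamma * np.cos(k * np.arccos(nodes)))
                  for k in range(K + 1)])
    w[0] /= 2.0                                # the k = 0 term is halved
    return w

# Toy usage: a 4-node path graph and an order-3 filter.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
d = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))    # I - D^{-1/2} A D^{-1/2}
x = np.random.randn(4)
gamma = np.random.randn(4)                     # learnable in a real model
y = chebyshev_filter(L, x, interpolated_coeffs(gamma))
```

In a real model gamma would be trained end-to-end; the point of the interpolation step is that the resulting coefficients are always those of a well-behaved Chebyshev interpolant, ruling out the illegal raw coefficients the abstract blames for ChebNet's over-fitting.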
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864] (2024-07-26)
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
- Spectral GNN via Two-dimensional (2-D) Graph Convolution [26.79625547648669] (2024-04-06)
We show that existing spectral GNNs still suffer from critical drawbacks in performing the spectral graph convolution.
We propose a new convolution paradigm, named 2-D graph convolution.
We prove that 2-D graph convolution unifies existing graph convolution paradigms and is capable of constructing arbitrary target outputs.
- VC dimension of Graph Neural Networks with Pfaffian activation functions [4.141514895639094] (2024-01-22)
Graph Neural Networks (GNNs) have emerged in recent years as a powerful tool to learn tasks across a wide range of graph domains.
The aim of our work is to extend this analysis of the VC dimension of GNNs to other commonly used activation functions, such as the sigmoid and hyperbolic tangent.
- Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs [122.06927400759021] (2023-05-29)
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold.
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412] (2022-10-27)
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
- BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation [47.88982193039535] (2021-06-21)
We propose BernNet, a novel graph neural network with theoretical support that provides a simple but effective scheme for designing and learning arbitrary graph spectral filters.
In particular, for any filter over the normalized Laplacian spectrum of a graph, our BernNet estimates it by an order-$K$ Bernstein approximation and designs its spectral property by setting the coefficients of the Bernstein basis.
Our experiments demonstrate that BernNet can learn arbitrary spectral filters, including complicated band-rejection and comb filters, and it achieves superior performance in real-world graph modeling tasks (a minimal sketch of the Bernstein-basis filter appears after this list).
- Multi-hop Graph Convolutional Network with High-order Chebyshev Approximation for Text Reasoning [15.65069702939315] (2021-06-08)
We define the spectral graph convolutional network with high-order dynamic Chebyshev approximation (HDGCN).
To alleviate the over-smoothing in the high-order Chebyshev approximation, a multi-vote-based cross-attention (MVCAttn) with linear computational complexity is also proposed.
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995] (2021-04-18)
Graph Neural Networks (GNNs) achieve strong performance on various learning tasks involving geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
- Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056] (2021-04-12)
We aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments show that optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
- Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks [8.563354084119062] (2020-03-26)
We show the equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain.
The proposed framework is used to design new convolutions in the spectral domain with a custom frequency profile while applying them in the spatial domain.
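
As promised in the BernNet entry above, here is a minimal NumPy sketch of an order-K Bernstein-basis spectral filter, under our own reading of that abstract: the normalized Laplacian's spectrum is assumed to lie in [0, 2], and the function name and looped matrix-vector products are illustrative rather than BernNet's actual implementation.

```python
import numpy as np
from math import comb

def bernstein_filter(L, x, theta):
    """Apply y = h(L) x where h is written in the order-K Bernstein basis
    on [0, 2]: h(lam) = sum_k theta_k * C(K, k) / 2^K * (2 - lam)^(K-k) * lam^k.
    Non-negative theta_k yield a non-negative filter, which is what makes
    the Bernstein parameterization easy to constrain and interpret."""
    K = len(theta) - 1
    out = np.zeros_like(x)
    for k, t in enumerate(theta):
        term = x
        for _ in range(k):                 # apply L^k to the signal
            term = L @ term
        for _ in range(K - k):             # apply (2I - L)^(K-k)
            term = 2.0 * term - L @ term
        out = out + t * comb(K, k) / 2.0**K * term
    return out

# Toy usage with the 4-node path graph from the earlier sketch.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
d = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))
theta = np.ones(5)                         # K = 4; all-ones gives h(lam) = 1
y = bernstein_filter(L, np.random.randn(4), theta)
```

A quick sanity check on the sketch: with all theta_k = 1 the Bernstein basis functions sum to one on [0, 2], so the filter reduces to the identity and y equals the input signal.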