Barron Space for Graph Convolution Neural Networks
- URL: http://arxiv.org/abs/2311.02838v1
- Date: Mon, 6 Nov 2023 02:58:05 GMT
- Title: Barron Space for Graph Convolution Neural Networks
- Authors: Seok-Young Chung and Qiyu Sun
- Abstract summary: Graph convolutional neural networks (GCNNs) operate on graph domains.
In this paper, we introduce a Barron space of functions on a compact domain of graph signals.
- Score: 4.980329703241598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolutional neural networks (GCNNs) operate on graph domains and have
achieved superior performance on a wide range of tasks. In this paper, we
introduce a Barron space of functions on a compact domain of graph signals. We
prove that the proposed Barron space is a reproducing kernel Banach space, that
it can be decomposed into the union of a family of reproducing kernel Hilbert
spaces with neuron kernels, and that it could be dense in the space of
continuous functions on the domain. The approximation property is one of the
main principles guiding the design of neural networks. In this paper, we show
that outputs of GCNNs are contained in the Barron space and that functions in
the Barron space can be well approximated by outputs of some GCNNs in both the
integrated-square and uniform measurements. We also estimate the Rademacher
complexity of functions with bounded Barron norm and conclude that functions in
the Barron space can be learnt efficiently from their random samples.
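To make the claim that "outputs of GCNNs are contained in the Barron space" concrete, here is a minimal, hedged sketch of a generic GCNN output: a finite superposition of activated polynomial graph filters applied to a graph signal. The function names, the choice of shift operator, and the toy graph are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch (assumptions, not the paper's exact construction): a generic
# GCNN output of the form  f(x) = sum_j c_j * sum_v sigma( (p_j(S) x)(v) + b_j ),
# where S is a graph shift operator (here a row-normalized adjacency matrix),
# p_j(S) = sum_k a_{j,k} S^k is a polynomial graph filter, and sigma is ReLU.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def graph_filter(S, x, coeffs):
    """Apply the polynomial filter p(S) = sum_k coeffs[k] * S^k to the signal x."""
    out = np.zeros_like(x)
    Skx = x.copy()
    for c in coeffs:
        out += c * Skx
        Skx = S @ Skx
    return out

def gcnn_output(S, x, filters, biases, weights):
    """Finite superposition of filtered-and-activated neurons; outputs of this
    kind are the ones considered to lie in the proposed Barron space."""
    y = 0.0
    for coeffs, b, c in zip(filters, biases, weights):
        y += c * relu(graph_filter(S, x, coeffs) + b).sum()
    return y

# Toy usage on a 4-node cycle graph (all values are illustrative).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
S = A / A.sum(axis=1, keepdims=True)      # row-normalized adjacency as shift operator
x = np.array([1.0, -0.5, 0.3, 0.2])       # a graph signal on the 4 nodes
filters = [[0.5, 0.3], [1.0, -0.2]]       # polynomial filter coefficients per neuron
biases = [0.0, 0.1]
weights = [1.0, -0.7]
print(gcnn_output(S, x, filters, biases, weights))
```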
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs), which combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs (a generic toy sketch of the spatial-plus-spectral filtering idea follows this summary).
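The following hedged toy illustrates one generic way to combine a spatially parametrized filter with a spectrally parametrized one; it is not the S$^2$GNN architecture itself, and the helper name and gain function are made up for illustration.

```python
# Hedged toy illustration (not the S^2GNN architecture): combine one spatially
# parametrized filter (a local aggregation hop) with one spectrally
# parametrized filter (a gain applied in the graph Laplacian eigenbasis).
import numpy as np

def combined_filter(A, x, spatial_weight, spectral_gain):
    D = np.diag(A.sum(axis=1))
    L = D - A                                   # combinatorial graph Laplacian
    spatial = spatial_weight * (A @ x)          # spatial part: one aggregation hop
    evals, evecs = np.linalg.eigh(L)            # spectral part: rescale eigen-components
    spectral = evecs @ (spectral_gain(evals) * (evecs.T @ x))
    return spatial + spectral

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)          # toy 3-node graph
x = np.array([1.0, 0.0, -1.0])                  # toy graph signal
print(combined_filter(A, x, 0.5, lambda lam: np.exp(-lam)))  # low-pass spectral gains
```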
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Multi-layer random features and the approximation power of neural networks [4.178980693837599]
We prove that a reproducing kernel Hilbert space contains only functions that can be approximated by the architecture.
We show that if the eigenvalues of the integral operator of the NNGP decay slower than $k^{-n-\frac{2}{3}}$, where $k$ is the order of an eigenvalue, our theorem guarantees a more succinct neural network approximation than Barron's theorem.
arXiv Detail & Related papers (2024-04-26T14:57:56Z)
- Global universal approximation of functional input maps on weighted spaces [3.8059763597999012]
We introduce so-called functional input neural networks defined on a possibly infinite dimensional weighted space with values also in a possibly infinite dimensional output space.
We prove a global universal approximation result on weighted spaces for continuous functions going beyond the usual approximation on compact sets.
We emphasize that the reproducing kernel Hilbert spaces of the signature kernels are Cameron-Martin spaces of certain Gaussian processes.
arXiv Detail & Related papers (2023-06-05T23:06:32Z)
- Duality for Neural Networks through Reproducing Kernel Banach Spaces [1.3750624267664155]
We show that Reproducing Kernel Banach spaces (RKBS) can be understood as an infinite union of RKHS spaces.
As the RKBS is not a Hilbert space, it is not its own dual space. However, we show that its dual space is again an RKBS where the roles of the data and parameters are interchanged.
This allows us to construct the saddle point problem for neural networks, opening the door to the whole field of primal-dual optimisation.
arXiv Detail & Related papers (2022-11-09T16:52:39Z)
- Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z)
- Superiority of GNN over NN in generalizing bandlimited functions [6.3151583550712065]
Graph Neural Networks (GNNs) have emerged as formidable resources for processing graph-based information across diverse applications.
In this study, we investigate the proficiency of GNNs for such classifications, which can also be cast as a function problem.
Our findings highlight a pronounced efficiency in utilizing GNNs to generalize a bandlimited function within an $\varepsilon$-error margin.
arXiv Detail & Related papers (2022-06-13T05:15:12Z)
- Generative Adversarial Neural Operators [59.21759531471597]
We propose the generative adversarial neural operator (GANO), a generative model paradigm for learning probabilities on infinite-dimensional function spaces.
GANO consists of two main components, a generator neural operator and a discriminator neural functional.
We empirically study GANOs in controlled cases where both input and output functions are samples from GRFs and compare its performance to the finite-dimensional counterpart GAN.
arXiv Detail & Related papers (2022-05-06T05:12:22Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose Neighbor2Seq, which transforms the hierarchical neighborhood of each node into a sequence (a rough sketch of this idea follows this summary).
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
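As referenced above, here is a rough, hedged sketch of the neighborhood-to-sequence idea: pool the features of each node's successive hops into one sequence entry per hop. The mean-pooling choice and the function name are illustrative assumptions; the actual Neighbor2Seq transform may differ in detail.

```python
# Rough sketch of a neighborhood-to-sequence transform (assumption: mean
# pooling per hop; the actual Neighbor2Seq operation may differ in detail).
import numpy as np

def neighbor_to_seq(A, X, num_hops):
    """Return, for every node, a sequence with one pooled feature vector per hop."""
    A_hat = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # row-normalized adjacency
    seqs = [X]                       # hop 0: the node's own features
    H = X
    for _ in range(num_hops):
        H = A_hat @ H                # mean-pool features one hop further out
        seqs.append(H)
    # Shape (num_nodes, num_hops + 1, feature_dim): each row is a sequence that
    # can be fed to an ordinary sequence model.
    return np.stack(seqs, axis=1)

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)           # toy path graph on 3 nodes
X = np.eye(3)                                    # one-hot node features
print(neighbor_to_seq(A, X, num_hops=2).shape)   # (3, 3, 3)
```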
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
- Representation formulas and pointwise properties for Barron functions [8.160343645537106]
We study the natural function space for infinitely wide two-layer neural networks with ReLU activation (Barron space).
We show that functions whose singular set is fractal or curved cannot be represented by infinitely wide two-layer networks with finite path-norm.
This result suggests that infinitely wide two-layer networks can represent a narrower class of functions than commonly believed.
arXiv Detail & Related papers (2020-06-10T17:55:31Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.