On the Correspondence Between Monotonic Max-Sum GNNs and Datalog
- URL: http://arxiv.org/abs/2305.18015v3
- Date: Thu, 15 Jun 2023 09:22:01 GMT
- Title: On the Correspondence Between Monotonic Max-Sum GNNs and Datalog
- Authors: David Tena Cucala, Bernardo Cuenca Grau, Boris Motik, Egor V. Kostylev
- Abstract summary: We study data transformations based on graph neural networks (GNNs).
We study the expressivity of monotonic max-sum GNNs, which cover a subclass of GNNs with max and sum aggregation functions.
- Score: 19.288835943223816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although there has been significant interest in applying machine learning
techniques to structured data, the expressivity (i.e., a description of what
can be learned) of such techniques is still poorly understood. In this paper,
we study data transformations based on graph neural networks (GNNs). First, we
note that the choice of how a dataset is encoded into a numeric form
processable by a GNN can obscure the characterisation of a model's
expressivity, and we argue that a canonical encoding provides an appropriate
basis. Second, we study the expressivity of monotonic max-sum GNNs, which cover
a subclass of GNNs with max and sum aggregation functions. We show that, for
each such GNN, one can compute a Datalog program such that applying the GNN to
any dataset produces the same facts as a single round of application of the
program's rules to the dataset. Monotonic max-sum GNNs can sum an unbounded
number of feature vectors which can result in arbitrarily large feature values,
whereas rule application requires only a bounded number of constants. Hence,
our result shows that the unbounded summation of monotonic max-sum GNNs does
not increase their expressive power. Third, we sharpen our result to the
subclass of monotonic max GNNs, which use only the max aggregation function,
and identify a corresponding class of Datalog programs.
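The equivalence claim is easiest to see on a toy instance. The sketch below is a hypothetical illustration in Python, not the paper's construction: it pairs a single monotonic sum-aggregation layer (non-negative weights, step-function classification) with the Datalog rule B(x) :- A(y), edge(y, x), and checks that both derive the same facts on a two-node dataset. All identifiers and the particular rule are assumptions made for this example.

```python
# Minimal sketch (illustrative only, assuming a toy vocabulary with unary
# predicates A, B and a binary predicate edge): a single monotonic GNN layer
# with sum aggregation, non-negative weights, and a classification threshold,
# compared against one round of applying the rule  B(x) :- A(y), edge(y, x).

PREDICATES = ["A", "B"]                 # feature index 0 <-> A, 1 <-> B

facts = {("A", "n1")}                   # input dataset: A(n1)
edges = {("n1", "n2")}                  # binary fact: edge(n1, n2)
nodes = {"n1", "n2"}

def encode(node):
    """Canonical-style one-hot encoding of the unary facts holding for a node."""
    return [1.0 if (p, node) in facts else 0.0 for p in PREDICATES]

def gnn_derives_B(node):
    """One sum-aggregation layer with non-negative weights and a step threshold."""
    agg = [0.0, 0.0]
    for (u, v) in edges:                # sum over all in-neighbours (unbounded)
        if v == node:
            x = encode(u)
            agg = [agg[0] + x[0], agg[1] + x[1]]
    score_B = 1.0 * agg[0]              # weight 1.0 from the A-coordinate to B
    return score_B >= 1.0               # monotonic step classification

def datalog_round():
    """One round of applying  B(x) :- A(y), edge(y, x)  to the dataset."""
    derived = set(facts)
    for (y, x) in edges:
        if ("A", y) in facts:
            derived.add(("B", x))
    return derived

gnn_output = set(facts) | {("B", v) for v in nodes if gnn_derives_B(v)}
print(gnn_output == datalog_round())    # True on this toy dataset
```

The intuition matching the abstract: although the summation over in-neighbours is unbounded, the fixed weights and threshold mean that a bounded number of neighbouring facts already suffices to push the score over the threshold, which is why a rule over a bounded number of constants can capture the derivation.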
Related papers
- Contextualized Messages Boost Graph Representations [1.5178009359320295]
This paper investigates the ability of graph neural networks (GNNs) to process data that may be represented as graphs.
It shows that only a few GNNs have been investigated across all levels of capability.
A mathematical discussion on the relationship between SIRGCN and widely used GNNs is laid out to put the contribution into context.
arXiv Detail & Related papers (2024-03-19T08:05:49Z) - MAG-GNN: Reinforcement Learning Boosted Graph Neural Network [68.60884768323739]
A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success.
However, this effectiveness comes at the cost of efficiency, since all possible subgraphs must be enumerated.
We propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem.
arXiv Detail & Related papers (2023-10-29T20:32:21Z) - Some Might Say All You Need Is Sum [2.226803104060345]
The expressivity of Graph Neural Networks (GNNs) is dependent on the aggregation functions they employ.
We prove that basic functions, which can be computed exactly by Mean or Max GNNs, are inapproximable by any Sum GNN.
arXiv Detail & Related papers (2023-02-22T19:01:52Z) - MGNN: Graph Neural Networks Inspired by Distance Geometry Problem [28.789684784093048]
Graph Neural Networks (GNNs) have emerged as a prominent research topic in the field of machine learning.
In this paper, we propose a GNN model inspired by the congruent-insensitivity property of the classifiers in the classification phase of GNNs.
We extensively evaluate the effectiveness of our model through experiments conducted on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-01-31T04:15:42Z) - VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using
Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework for scaling up any convolution-based GNN using Vector Quantization (VQ) without compromising performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z) - On the approximation capability of GNNs in node
classification/regression tasks [4.141514895639094]
Graph Neural Networks (GNNs) are a broad class of connectionist models for graph processing.
We show that GNNs are universal approximators in probability for node classification/regression tasks.
arXiv Detail & Related papers (2021-06-16T17:46:51Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - The Surprising Power of Graph Neural Networks with Random Node
Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning techniques.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the problem, leading to much better average performances than existing algorithms.
arXiv Detail & Related papers (2020-06-28T16:35:45Z) - Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs ability in preserving graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.