Graph Neural Networks with Adaptive Readouts
- URL: http://arxiv.org/abs/2211.04952v1
- Date: Wed, 9 Nov 2022 15:21:09 GMT
- Title: Graph Neural Networks with Adaptive Readouts
- Authors: David Buterez, Jon Paul Janet, Steven J. Kiddle, Dino Oglic, Pietro Liò
- Abstract summary: We show the effectiveness of neural readouts on more than 40 datasets spanning different domains and graph characteristics.
We observe a consistent improvement over standard readouts relative to the number of neighborhood aggregation iterations and different convolutional operators.
- Score: 5.575293536755126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An effective aggregation of node features into a graph-level representation
via readout functions is an essential step in numerous learning tasks involving
graph neural networks. Typically, readouts are simple and non-adaptive
functions designed such that the resulting hypothesis space is permutation
invariant. Prior work on deep sets indicates that such readouts might require
complex node embeddings that can be difficult to learn via standard
neighborhood aggregation schemes. Motivated by this, we investigate the
potential of adaptive readouts given by neural networks that do not necessarily
give rise to permutation invariant hypothesis spaces. We argue that in some
problems such as binding affinity prediction where molecules are typically
presented in a canonical form it might be possible to relax the constraints on
permutation invariance of the hypothesis space and learn a more effective model
of the affinity by employing an adaptive readout function. Our empirical
results demonstrate the effectiveness of neural readouts on more than 40
datasets spanning different domains and graph characteristics. Moreover, we
observe a consistent improvement over standard readouts (i.e., sum, max, and
mean) relative to the number of neighborhood aggregation iterations and
different convolutional operators.
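To make the contrast concrete, here is a minimal sketch (not the authors' released code) of a standard sum readout next to an adaptive MLP readout that consumes node embeddings in their given order; the padding scheme, module names, and dimensions are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code). `MLPReadout`, the padding
# scheme, and all dimensions are illustrative assumptions.
import torch
import torch.nn as nn

def sum_readout(h):                        # h: (num_nodes, dim)
    return h.sum(dim=0)                    # permutation invariant by construction

class MLPReadout(nn.Module):
    """Adaptive readout: flattens zero-padded node embeddings, so the output
    can depend on node order -- acceptable when graphs arrive in a canonical
    form, as in binding affinity prediction."""
    def __init__(self, max_nodes, dim, out_dim):
        super().__init__()
        self.max_nodes = max_nodes
        self.net = nn.Sequential(
            nn.Linear(max_nodes * dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, h):                  # h: (num_nodes, dim)
        pad = torch.zeros(self.max_nodes - h.size(0), h.size(1))
        return self.net(torch.cat([h, pad], dim=0).flatten())

h = torch.randn(12, 64)                    # node embeddings from any GNN encoder
g_invariant = sum_readout(h)               # (64,)
g_adaptive = MLPReadout(max_nodes=32, dim=64, out_dim=64)(h)   # (64,)
```

The adaptive variant trades permutation invariance for a strictly larger hypothesis space, which is exactly the relaxation the abstract argues is acceptable for canonically ordered inputs.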
Related papers
- A rank decomposition for the topological classification of neural representations [0.0]
In this work, we leverage the fact that neural networks are equivalent to continuous piecewise-affine maps.
We study the homology groups of the quotient of a manifold $\mathcal{M}$ by a subset $A$, assuming some minimal properties of these spaces.
We show that in randomly initialized narrow networks, there will be regions in which the (co)homology groups of a data manifold can change.
arXiv Detail & Related papers (2024-04-30T17:01:20Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
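As a rough illustration of the idea (not NodeFormer's actual implementation), the sketch below samples soft all-pair edges with Gumbel-Softmax; the dense O(N^2) pairwise matrix is materialized here for clarity, whereas the kernelized operator in the paper avoids it.

```python
# Rough sketch of the idea, not NodeFormer's implementation. The dense (N, N)
# matrix below is for clarity; the paper's kernelized operator avoids it.
import torch
import torch.nn.functional as F

def gumbel_softmax_propagate(x, w_q, w_k, tau=0.5):
    """x: (N, d) node features; w_q, w_k: (d, d) illustrative projections."""
    logits = (x @ w_q) @ (x @ w_k).T                      # all-pair scores
    u = torch.rand_like(logits).clamp_min(1e-9)
    gumbel = -torch.log(-torch.log(u))                    # Gumbel(0, 1) noise
    weights = F.softmax((logits + gumbel) / tau, dim=-1)  # soft sampled edges
    return weights @ x                                    # propagate node signals

x = torch.randn(100, 32)
out = gumbel_softmax_propagate(x, torch.randn(32, 32), torch.randn(32, 32))
```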
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
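A minimal sketch of the canonicalization pattern for permutation symmetry (an illustrative reading, not the paper's exact architecture): a tiny scoring network fixes an ordering, after which any non-invariant predictor can be applied.

```python
# Illustrative sketch for permutation symmetry only; the scorer, predictor,
# and sizes are assumptions. Training the scorer end-to-end would need a
# differentiable relaxation of the argsort used here.
import torch
import torch.nn as nn

class LearnedCanonicalizer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)      # small network that picks an order

    def forward(self, x):                   # x: (n, dim) unordered set
        order = self.score(x).squeeze(-1).argsort()
        return x[order]                     # canonical ordering of the set

canon = LearnedCanonicalizer(16)
predictor = nn.Sequential(nn.Flatten(start_dim=0), nn.Linear(8 * 16, 1))
x = torch.randn(8, 16)
y = predictor(canon(x))                     # invariant up to ties in the scores
```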
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Going Deeper into Permutation-Sensitive Graph Neural Networks [6.685139672294716]
We devise an efficient permutation-sensitive aggregation mechanism via permutation groups.
We prove that our approach is strictly more powerful than the 2-dimensional Weisfeiler-Lehman (2-WL) graph isomorphism test.
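The sketch below conveys the general flavor (a simplified assumption, not the paper's exact construction): an order-sensitive GRU aggregates neighbors, averaged over the cyclic group of rotations of that order.

```python
# Simplified sketch, not the paper's exact mechanism; sizes are assumptions.
import torch
import torch.nn as nn

gru = nn.GRU(input_size=16, hidden_size=16, batch_first=True)

def cyclic_aggregate(neighbors):             # neighbors: (k, 16)
    outs = []
    for shift in range(neighbors.size(0)):   # all rotations: cyclic group C_k
        seq = torch.roll(neighbors, shifts=shift, dims=0).unsqueeze(0)
        _, h = gru(seq)                      # order-sensitive aggregation
        outs.append(h.squeeze())
    return torch.stack(outs).mean(dim=0)     # average over the group

msg = cyclic_aggregate(torch.randn(5, 16))   # (16,) aggregated message
```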
arXiv Detail & Related papers (2022-05-28T08:22:10Z)
- Bagged Polynomial Regression and Neural Networks [0.0]
Series and polynomial regression are able to approximate the same function classes as neural networks.
Bagged polynomial regression (BPR) is an attractive alternative to neural networks.
BPR performs as well as neural networks in crop classification using satellite data.
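A minimal sketch of bagged polynomial regression as described: polynomial feature expansions fit on bootstrap resamples, with predictions averaged. The degree, bag count, and toy data are illustrative assumptions.

```python
# Minimal sketch of BPR; the degree, bag count, and toy data are assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

models = []
for _ in range(25):                                  # 25 bootstrap bags
    idx = rng.integers(0, len(X), size=len(X))       # resample with replacement
    m = make_pipeline(PolynomialFeatures(degree=4), Ridge(alpha=1e-2))
    models.append(m.fit(X[idx], y[idx]))

bpr_predict = lambda Xq: np.mean([m.predict(Xq) for m in models], axis=0)
print(bpr_predict(X[:3]))                            # averaged ensemble output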
arXiv Detail & Related papers (2022-05-17T19:55:56Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
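A tiny sketch of the core construction (heavily simplified; the actual model is trained as a VAE with a more involved objective): topographic variables arise by dividing one Gaussian latent by the pooled squared activations of its neighbors under a chosen topology, here an assumed 1-D ring.

```python
# Heavily simplified sketch; the ring topology, sizes, and normalization are
# assumptions, and the real model learns these latents inside a VAE.
import torch

z = torch.randn(64)                        # Gaussian latents (numerator)
u = torch.randn(64)                        # Gaussian latents (denominator pool)
idx = torch.arange(64)
W = torch.zeros(64, 64)                    # 1-D ring neighborhood of width 5
for d in range(-2, 3):
    W[idx, (idx + d) % 64] = 1.0
t = z / torch.sqrt(W @ u.pow(2) / W.sum(dim=1))   # topographically correlated
```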
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
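A compact sketch of the first idea (illustrative, not the paper's full framework): propagating a one-hot identity encoding along edges leaves each node with a local context matrix after a few rounds.

```python
# Compact sketch of the one-hot idea only; dense adjacency and mean
# aggregation are simplifying assumptions.
import torch

def propagate_one_hot(adj, rounds=3):
    """adj: (N, N) dense adjacency. Returns an (N, N) context per node."""
    context = torch.eye(adj.size(0))        # one-hot identity for every node
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    for _ in range(rounds):
        context = (adj @ context) / deg     # mean-aggregate neighbor encodings
    return context                          # row i: node i's local context

adj = (torch.rand(6, 6) < 0.4).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)  # symmetric, no self-loops
ctx = propagate_one_hot(adj)
```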
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
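One way to read this as code (an illustrative sketch, not the paper's exact objective): an auxiliary loss aligns the network's input gradient with the edit direction between an example and its counterfactual pair.

```python
# Illustrative sketch of the auxiliary objective; the exact loss in the paper
# may differ. `net` is a stand-in model.
import torch
import torch.nn.functional as F

net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 1))

def gradient_supervision_loss(x, x_cf):
    """Align the input gradient with the counterfactual edit direction."""
    x = x.requires_grad_(True)
    (g,) = torch.autograd.grad(net(x).sum(), x, create_graph=True)
    direction = (x_cf - x).detach()          # minimally-different pair
    return 1 - F.cosine_similarity(g, direction, dim=-1).mean()

x, x_cf = torch.randn(4, 10), torch.randn(4, 10)
aux = gradient_supervision_loss(x, x_cf)     # added to the main task loss
```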
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- PushNet: Efficient and Adaptive Neural Message Passing [1.9121961872220468]
Message passing neural networks have recently evolved into a state-of-the-art approach to representation learning on graphs.
Existing methods perform synchronous message passing along all edges in multiple subsequent rounds.
We consider a novel asynchronous message passing approach where information is pushed only along the most relevant edges until convergence.
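A schematic sketch of push-based asynchronous propagation (the threshold and the personalized-PageRank-style update are assumptions about the flavor of the method, not its exact form): mass is pushed only from nodes whose residual is still large, until all residuals fall below a tolerance.

```python
# Schematic sketch; the push rule follows personalized-PageRank-style updates,
# an assumption for illustration rather than the paper's exact scheme.
import torch

def push_propagate(adj, signal, alpha=0.15, eps=1e-4):
    """adj: (N, N) row-normalized adjacency; signal: (N,) nonnegative input."""
    estimate, residual = torch.zeros_like(signal), signal.clone()
    while True:
        active = (residual > eps).nonzero().flatten()
        if active.numel() == 0:
            return estimate                        # converged everywhere
        for u in active.tolist():                  # push only relevant nodes
            r = residual[u].item()
            estimate[u] += alpha * r
            residual[u] = 0.0
            residual += (1 - alpha) * r * adj[u]   # spread to u's neighbors

adj = torch.rand(8, 8)
adj = adj / adj.sum(dim=1, keepdim=True)           # row-normalize
out = push_propagate(adj, torch.rand(8))
```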
arXiv Detail & Related papers (2020-03-04T18:15:30Z)
- Learn to Predict Sets Using Feed-Forward Neural Networks [63.91494644881925]
This paper addresses the task of set prediction using deep feed-forward neural networks.
We present a novel approach for learning to predict sets with unknown permutation and cardinality.
We demonstrate the validity of our set formulations on relevant vision problems.
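A minimal sketch of one common slot-based recipe for feed-forward set prediction (an illustrative assumption, not necessarily this paper's exact model): predict a fixed number of slots plus per-slot existence scores, so cardinality and elements come out of one forward pass; training would match slots to targets with a permutation-invariant (e.g., Hungarian) loss.

```python
# Minimal sketch of a common slot-based recipe; MAX_SET, ELEM_DIM, and the
# 0.5 threshold are assumptions, and the matching-based training loss is omitted.
import torch
import torch.nn as nn

MAX_SET, ELEM_DIM = 10, 4

class SetPredictor(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.elems = nn.Linear(128, MAX_SET * ELEM_DIM)   # slot contents
        self.exist = nn.Linear(128, MAX_SET)              # slot on/off logits

    def forward(self, x):
        h = self.body(x)
        elems = self.elems(h).view(-1, MAX_SET, ELEM_DIM)
        keep = torch.sigmoid(self.exist(h)) > 0.5         # predicted presence
        return elems, keep                # cardinality = keep.sum(dim=-1)

elems, keep = SetPredictor(32)(torch.randn(2, 32))        # batch of 2 inputs
```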
arXiv Detail & Related papers (2020-01-30T01:52:07Z)