Data efficiency in graph networks through equivariance
- URL: http://arxiv.org/abs/2106.13786v1
- Date: Fri, 25 Jun 2021 17:42:34 GMT
- Title: Data efficiency in graph networks through equivariance
- Authors: Francesco Farina, Emma Slade
- Abstract summary: We introduce a novel architecture for graph networks which is equivariant to any transformation of the coordinate embeddings.
We show that, when trained on a minimal amount of data, the proposed architecture can perfectly generalise to unseen data in a synthetic problem.
- Score: 1.713291434132985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel architecture for graph networks which is equivariant to any transformation of the coordinate embeddings that preserves the distance between neighbouring nodes. In particular, it is equivariant to the Euclidean and conformal orthogonal groups in $n$ dimensions. Thanks to its equivariance properties, the proposed model is far more data efficient than classical graph architectures and is intrinsically equipped with a better inductive bias. We show that, when trained on a minimal amount of data, the proposed architecture can perfectly generalise to unseen data in a synthetic problem, while a standard model requires far more training data to reach comparable performance.
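No reference implementation accompanies this listing, but the kind of layer the abstract describes can be illustrated with a minimal sketch: a message-passing layer whose messages depend only on node features and pairwise distances, so coordinate updates are equivariant to rotations, reflections, and translations. The module names, dimensions, and EGNN-style update rule below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    """Sketch of a distance-based equivariant message-passing layer.

    Messages depend only on node features and squared pairwise distances,
    so the feature update is invariant and the coordinate update is
    equivariant under rotations, reflections, and translations.
    """
    def __init__(self, feat_dim, hidden_dim=64):
        super().__init__()
        self.msg_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + 1, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim))
        self.coord_mlp = nn.Linear(hidden_dim, 1)
        self.feat_mlp = nn.Linear(feat_dim + hidden_dim, feat_dim)

    def forward(self, h, x, edges):
        # h: (N, F) node features; x: (N, D) coordinates;
        # edges: (E, 2) long tensor of (sender, receiver) pairs.
        s, r = edges[:, 0], edges[:, 1]
        diff = x[s] - x[r]                          # relative positions
        dist2 = (diff ** 2).sum(-1, keepdim=True)   # invariant scalars
        m = self.msg_mlp(torch.cat([h[s], h[r], dist2], dim=-1))
        # Equivariant update: move x along relative vectors, scaled by a scalar.
        x = x.index_add(0, r, diff * self.coord_mlp(m))
        # Invariant update: sum incoming messages at each receiver.
        agg = torch.zeros(h.size(0), m.size(1)).index_add(0, r, m)
        h = h + self.feat_mlp(torch.cat([h, agg], dim=-1))
        return h, x

# Numerical check: rotating the input coordinates rotates the output ones.
torch.manual_seed(0)
layer = EquivariantLayer(feat_dim=8)
h, x = torch.randn(5, 8), torch.randn(5, 3)
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])
Q, _ = torch.linalg.qr(torch.randn(3, 3))          # random orthogonal matrix
_, x_rot = layer(h, x @ Q.T, edges)
_, x_out = layer(h, x, edges)
print(torch.allclose(x_rot, x_out @ Q.T, atol=1e-5))  # True
```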
Related papers
- Heterogeneous Sheaf Neural Networks [17.664754528494132]
Heterogeneous graphs are commonly used to model relational structures in many real-world applications.
We propose using cellular sheaves to model the heterogeneity in the graph's underlying topology.
We introduce HetSheaf, a general framework for heterogeneous sheaf neural networks, and a series of heterogeneous sheaf predictors.
arXiv Detail & Related papers (2024-09-12T13:38:08Z)
- Approximately Equivariant Neural Processes [47.14384085714576]
We consider the use of approximately equivariant architectures in neural processes.
We demonstrate the effectiveness of our approach on a number of synthetic and real-world regression experiments.
arXiv Detail & Related papers (2024-06-19T12:17:14Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase it models the distribution of features associated with the nodes of the given graph; in the second it generates the edge features conditioned on the node features.
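As a hypothetical skeleton of this two-phase scheme (all names and dimensions below are assumptions, and the adversarial training is omitted):

```python
import torch
import torch.nn as nn

class TwoPhaseGenerator(nn.Module):
    """Hypothetical skeleton: phase 1 samples node features for a fixed
    graph skeleton; phase 2 samples edge features conditioned on the
    features of each edge's endpoints."""
    def __init__(self, noise_dim=16, node_dim=8, edge_dim=4, hidden=64):
        super().__init__()
        self.noise_dim = noise_dim
        self.node_gen = nn.Sequential(
            nn.Linear(noise_dim, hidden), nn.ReLU(), nn.Linear(hidden, node_dim))
        self.edge_gen = nn.Sequential(
            nn.Linear(noise_dim + 2 * node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, edge_dim))

    def forward(self, edges, n_nodes):
        # Phase 1: node features from per-node noise.
        h = self.node_gen(torch.randn(n_nodes, self.noise_dim))
        # Phase 2: edge features from per-edge noise and endpoint features.
        z = torch.randn(edges.size(0), self.noise_dim)
        e = self.edge_gen(torch.cat([z, h[edges[:, 0]], h[edges[:, 1]]], dim=-1))
        return h, e

gen = TwoPhaseGenerator()
edges = torch.tensor([[0, 1], [1, 2], [2, 0]])   # toy graph skeleton
node_feats, edge_feats = gen(edges, n_nodes=3)
```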
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
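As a rough illustration of what the mixing weights of the graph do in this setting (the ring topology, step size, and mixing matrix below are assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 3
params = rng.normal(size=(n_nodes, dim))   # one parameter vector per node
grads = rng.normal(size=(n_nodes, dim))    # heterogeneous local gradients

# Ring topology: each node averages itself with its two ring neighbours,
# giving a doubly stochastic mixing matrix W (rows and columns sum to 1).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in (i - 1, i, i + 1):
        W[i, j % n_nodes] = 1.0 / 3.0

# One decentralized round: local gradient step, then gossip mixing with W.
params = W @ (params - 0.1 * grads)
```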
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
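The core idea can be sketched for the invariant case: average the backbone's outputs over a set of group elements. The toy sign-flip group below is an assumption chosen so that the frame can simply be the whole group; the paper constructs small input-dependent frames for larger groups (e.g. via PCA for Euclidean motions).

```python
import torch
import torch.nn as nn

# Backbone: an arbitrary, non-invariant network (architecture is an assumption).
backbone = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

def frame_averaged(x):
    # Average the backbone over the sign-flip group acting on the inputs.
    flips = torch.tensor([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
    return torch.stack([backbone(x * f) for f in flips]).mean(0)

x = torch.randn(5, 2)
print(torch.allclose(frame_averaged(x), frame_averaged(-x), atol=1e-6))  # True
```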
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox, wherein "scale" metrics perform well overall but poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
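A toy numerical example of the pattern described above (all numbers are synthetic): a metric can correlate positively with accuracy over the pooled corpus while correlating negatively within every sub-partition.

```python
import numpy as np

# Synthetic corpus: two sub-partitions (e.g. two model families).
metric = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])   # hypothetical "scale" metric
acc    = np.array([0.60, 0.55, 0.50, 0.90, 0.85, 0.80])
group  = np.array([0, 0, 0, 1, 1, 1])

print(np.corrcoef(metric, acc)[0, 1])                # positive over pooled data
for g in (0, 1):
    m = group == g
    print(np.corrcoef(metric[m], acc[m])[0, 1])      # negative within each group
```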
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
- Symmetry-driven graph neural networks [1.713291434132985]
We introduce two graph network architectures that are equivariant to several types of transformations affecting the node coordinates.
We demonstrate these capabilities on a synthetic dataset composed of $n$-dimensional geometric objects.
arXiv Detail & Related papers (2021-05-28T18:54:12Z)
- Beyond permutation equivariance in graph networks [1.713291434132985]
We introduce a novel architecture for graph networks which is equivariant to the Euclidean group in $n$-dimensions.
Our model is designed to work with graph networks in their most general form, thus including particular variants as special cases.
arXiv Detail & Related papers (2021-03-25T18:36:09Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
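A rough sketch of the first idea (the toy graph, feature sizes, and round count are illustrative assumptions; the paper's learned message and update functions are omitted):

```python
import torch

n = 5
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])  # toy directed ring
feats = torch.randn(n, 4)

# Each node i starts with an n x (n + 4) context matrix whose i-th row is
# its own one-hot identity concatenated with its feature vector.
state = torch.zeros(n, n, n + 4)
state[torch.arange(n), torch.arange(n)] = torch.cat([torch.eye(n), feats], dim=-1)

for _ in range(2):  # two propagation rounds
    msgs = torch.zeros_like(state)
    msgs.index_add_(0, edges[:, 1], state[edges[:, 0]])  # receiver <- sender
    state = state + msgs  # sum aggregation keeps permutation equivariance
# After k rounds, node i's matrix has nonzero rows exactly for the nodes
# within k hops, i.e. a local context matrix around node i.
```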
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
- Isometric Transformation Invariant and Equivariant Graph Convolutional Networks [5.249805590164902]
We propose a set of transformation invariant and equivariant models based on graph convolutional networks, called IsoGCNs.
We demonstrate that the proposed model has a competitive performance compared to state-of-the-art methods on tasks related to geometrical and physical simulation data.
arXiv Detail & Related papers (2020-05-13T13:44:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.