Tuning the Geometry of Graph Neural Networks
- URL: http://arxiv.org/abs/2207.05887v1
- Date: Tue, 12 Jul 2022 23:28:03 GMT
- Title: Tuning the Geometry of Graph Neural Networks
- Authors: Sowon Jeong, Claire Donnat
- Abstract summary: Spatial graph convolution operators have been heralded as key to the success of Graph Neural Networks (GNNs).
We show that this aggregation operator is in fact tunable, and we make explicit the regimes in which certain choices of operators, and therefore embedding geometries, might be more appropriate.
- Score: 0.7614628596146599
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: By recursively summing node features over entire neighborhoods, spatial graph
convolution operators have been heralded as key to the success of Graph Neural
Networks (GNNs). Yet, despite the multiplication of GNN methods across tasks
and applications, the impact of this aggregation operation on their performance
has yet to be extensively analysed. In fact, while efforts have mostly
focused on optimizing the architecture of the neural network, fewer works have
attempted to characterize (a) the different classes of spatial convolution
operators, (b) how the choice of a particular class relates to properties of
the data, and (c) its impact on the geometry of the embedding space. In this
paper, we propose to answer all three questions by dividing existing operators
into two main classes (symmetrized vs. row-normalized spatial convolutions),
and show how these translate into different implicit biases on the nature of
the data. Finally, we show that this aggregation operator is in fact tunable,
and we make explicit the regimes in which certain choices of operators, and
therefore embedding geometries, might be more appropriate.
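For concreteness, here is a minimal NumPy sketch of the two operator classes named in the abstract, together with a one-parameter family that interpolates between them. The exponent `alpha` is an illustrative way of making the operator tunable; it is not necessarily the exact parametrization used in the paper.

```python
import numpy as np

def spatial_convolution(A, X, alpha=0.5, add_self_loops=True):
    """One step of neighborhood aggregation S @ X, where
    S = D^{-alpha} A D^{-(1 - alpha)}.

    alpha = 0.5 gives the symmetrized operator D^{-1/2} A D^{-1/2}
    (as in Kipf & Welling's GCN); alpha = 1.0 gives the
    row-normalized operator D^{-1} A (a neighborhood mean).
    Intermediate values interpolate between the two regimes.
    NOTE: this one-parameter family is an illustrative assumption,
    not necessarily the paper's exact operator family.
    """
    if add_self_loops:
        A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)                      # node degrees
    D_left = np.diag(d ** -alpha)          # D^{-alpha}
    D_right = np.diag(d ** (alpha - 1.0))  # D^{-(1 - alpha)}
    S = D_left @ A @ D_right
    return S @ X

# Tiny example: a path graph on 3 nodes with scalar features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.], [0.], [0.]])
print(spatial_convolution(A, X, alpha=0.5))  # symmetrized
print(spatial_convolution(A, X, alpha=1.0))  # row-normalized
```

Note the qualitative difference: the row-normalized operator computes a plain average over each neighborhood, while the symmetrized operator down-weights contributions from high-degree neighbors, which is one source of the different implicit biases the abstract refers to.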
Related papers
- Understanding the Effect of GCN Convolutions in Regression Tasks [8.299692647308323]
Graph Convolutional Networks (GCNs) have become a pivotal method in machine learning for modeling functions over graphs.
This paper provides a formal analysis of the impact of convolution operators on regression tasks over homophilic networks.
arXiv Detail & Related papers (2024-10-26T04:19:52Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Learning Invariant Representations of Graph Neural Networks via Cluster Generalization [58.68231635082891]
Graph neural networks (GNNs) have become increasingly popular in modeling graph-structured data.
In this paper, we experimentally find that the performance of GNNs drops significantly when a structure shift occurs.
We propose the Cluster Information Transfer (CIT) mechanism, which can learn invariant representations for GNNs.
arXiv Detail & Related papers (2024-03-06T10:36:56Z)
- Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate neural tangent kernels (NTKs) and alignment in the context of graph neural networks (GNNs).
Our results establish the theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
arXiv Detail & Related papers (2023-10-16T19:54:21Z)
- pathGCN: Learning General Graph Spatial Operators from Paths [9.539495585692007]
We propose pathGCN, a novel approach to learn the spatial operator from random paths on the graph.
Our experiments on numerous datasets suggest that by properly learning both the spatial and point-wise convolutions, phenomena like over-smoothing can be inherently avoided.
arXiv Detail & Related papers (2022-07-15T11:28:11Z)
- Graph Ordering Attention Networks [22.468776559433614]
Graph Neural Networks (GNNs) have been successfully used in many problems involving graph-structured data.
We introduce the Graph Ordering Attention (GOAT) layer, a novel GNN component that captures interactions between nodes in a neighborhood.
The GOAT layer demonstrates improved performance in modeling graph metrics that capture complex information.
arXiv Detail & Related papers (2022-04-11T18:13:19Z)
- Learning Connectivity with Graph Convolutional Networks for Skeleton-based Action Recognition [14.924672048447338]
We introduce a novel framework for graph convolutional networks that learns the topological properties of graphs.
The design principle of our method is based on the optimization of a constrained objective function.
Experiments conducted on the challenging task of skeleton-based action recognition show the superiority of the proposed method.
arXiv Detail & Related papers (2021-12-06T19:43:26Z)
- Action Recognition with Kernel-based Graph Convolutional Networks [14.924672048447338]
Learning graph convolutional networks (GCNs) aims at generalizing deep learning to arbitrary non-regular domains.
We introduce a novel GCN framework that achieves spatial graph convolution in a reproducing kernel Hilbert space (RKHS).
The particularity of our GCN model also resides in its ability to achieve convolutions without explicitly realigning nodes in the receptive fields of the learned graph filters with those of the input graphs.
arXiv Detail & Related papers (2020-12-28T11:02:51Z)
- Node Similarity Preserving Graph Convolutional Networks [51.520749924844054]
Graph Neural Networks (GNNs) explore the graph structure and node features by aggregating and transforming information within node neighborhoods.
We propose SimP-GCN that can effectively and efficiently preserve node similarity while exploiting graph structure.
We validate the effectiveness of SimP-GCN on seven benchmark datasets, including three assortative and four disassortative graphs.
arXiv Detail & Related papers (2020-11-19T04:18:01Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper GNNs to the over-smoothing issue (see the sketch after this list).
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- Cross-GCN: Enhancing Graph Convolutional Network with $k$-Order Feature Interactions [153.6357310444093]
Graph Convolutional Network (GCN) is an emerging technique that performs learning and reasoning on graph data.
We argue that existing designs of GCN forgo modeling cross features, making GCN less effective for tasks or data where cross features are important.
We design a new operator named Cross-feature Graph Convolution, which explicitly models arbitrary-order cross features with complexity linear in the feature dimension and order size.
arXiv Detail & Related papers (2020-03-05T13:05:27Z)
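Several entries above (pathGCN, Towards Deeper Graph Neural Networks) are motivated by over-smoothing, where repeated neighborhood aggregation drives node embeddings toward indistinguishable values. The following self-contained sketch illustrates the phenomenon on a random toy graph: iterating a row-normalized aggregation operator drives the Dirichlet energy of the features toward zero, i.e. connected nodes become nearly identical. The graph size, edge probability, and seed are arbitrary choices for illustration.

```python
import numpy as np

def row_normalized(A):
    """Row-normalized aggregation operator D^{-1} (A + I)."""
    A = A + np.eye(A.shape[0])
    return A / A.sum(axis=1, keepdims=True)

def dirichlet_energy(X, A):
    """Sum of squared feature differences across edges; a common
    smoothness proxy (0 means connected nodes coincide)."""
    diffs = X[:, None, :] - X[None, :, :]          # (n, n, d)
    return 0.5 * np.sum(A * np.sum(diffs ** 2, axis=-1))

rng = np.random.default_rng(0)
n = 20
# Random undirected graph (Erdos-Renyi-style) with random features.
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops
X = rng.normal(size=(n, 4))

S = row_normalized(A)
for k in range(0, 21, 5):
    print(k, dirichlet_energy(np.linalg.matrix_power(S, k) @ X, A))
# The energy decays toward 0 as k grows: repeated aggregation
# smooths all embeddings toward per-component constants.
```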
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.