Learning Parametrised Graph Shift Operators
- URL: http://arxiv.org/abs/2101.10050v1
- Date: Mon, 25 Jan 2021 13:01:26 GMT
- Title: Learning Parametrised Graph Shift Operators
- Authors: George Dasoulas, Johannes Lutzeyer, Michalis Vazirgiannis
- Abstract summary: Network data is, implicitly or explicitly, always represented using a graph shift operator (GSO).
The PGSO is suggested as a replacement of the standard GSOs that are used in state-of-the-art GNN architectures.
The accuracy of state-of-the-art GNN architectures is improved by the inclusion of the PGSO in both node- and graph-classification tasks.
- Score: 16.89638650246974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many domains data is currently represented as graphs and therefore, the
graph representation of this data becomes increasingly important in machine
learning. Network data is, implicitly or explicitly, always represented using a
graph shift operator (GSO) with the most common choices being the adjacency,
Laplacian matrices and their normalisations. In this paper, a novel
parametrised GSO (PGSO) is proposed, where specific parameter values result in
the most commonly used GSOs and message-passing operators in graph neural
network (GNN) frameworks. The PGSO is suggested as a replacement of the
standard GSOs that are used in state-of-the-art GNN architectures and the
optimisation of the PGSO parameters is seamlessly included in the model
training. It is proved that the PGSO has real eigenvalues and a set of real
eigenvectors independent of the parameter values and spectral bounds on the
PGSO are derived. PGSO parameters are shown to adapt to the sparsity of the
graph structure in a study on stochastic blockmodel networks, where they are
found to automatically replicate the GSO regularisation found in the
literature. On several real-world datasets the accuracy of state-of-the-art GNN
architectures is improved by the inclusion of the PGSO in both node- and
graph-classification tasks.
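
A minimal PyTorch sketch of a parametrised GSO of the kind described above may make the idea concrete. The class name ParametrisedGSO, the exact parametrisation (scalars m1, m2, m3, e1, e2, e3, a combining a regularised degree matrix with a regularised adjacency) and the initial values are assumptions made for illustration rather than the paper's exact operator; the point is that the scalars are ordinary learnable parameters, so specific settings recover standard GSOs and gradient descent can tune them during GNN training.

    import torch
    import torch.nn as nn

    class ParametrisedGSO(nn.Module):
        """Illustrative parametrised graph shift operator with learnable scalars."""

        def __init__(self):
            super().__init__()
            # One learnable scalar per operator parameter; the initial values
            # below correspond to a symmetrically normalised adjacency with
            # self-loops (a common GCN-style starting point).
            self.m1 = nn.Parameter(torch.tensor(0.0))
            self.m2 = nn.Parameter(torch.tensor(1.0))
            self.m3 = nn.Parameter(torch.tensor(0.0))
            self.e1 = nn.Parameter(torch.tensor(0.0))
            self.e2 = nn.Parameter(torch.tensor(-0.5))
            self.e3 = nn.Parameter(torch.tensor(-0.5))
            self.a = nn.Parameter(torch.tensor(1.0))

        def forward(self, adj: torch.Tensor) -> torch.Tensor:
            n = adj.size(0)
            eye = torch.eye(n, device=adj.device)
            adj_a = adj + self.a * eye                 # regularised adjacency A + a*I
            deg_a = adj_a.sum(dim=1).clamp(min=1e-6)   # regularised degrees, kept positive
            d1 = torch.diag(deg_a ** self.e1)
            d2 = torch.diag(deg_a ** self.e2)
            d3 = torch.diag(deg_a ** self.e3)
            # m1 * D_a^e1  +  m2 * D_a^e2 (A + a*I) D_a^e3  +  m3 * I
            return self.m1 * d1 + self.m2 * (d2 @ adj_a @ d3) + self.m3 * eye

Under this illustrative form, setting m2 = 1 with every other parameter at zero returns the plain adjacency matrix, while m1 = 1, e1 = 1, m2 = -1 (all remaining parameters zero) gives the unnormalised Laplacian D - A; treating the scalars as trainable lets the model interpolate between such choices during training.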
Related papers
- Centrality Graph Shift Operators for Graph Neural Networks [21.136895833789442]
We study Centrality GSOs (CGSOs) which normalize adjacency matrices by global centrality metrics.
We show how our CGSO can act as the message passing operator in any Graph Neural Network.
arXiv Detail & Related papers (2024-11-07T12:32:24Z)
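
As a rough illustration of the normalisation idea in the entry above, the snippet below rescales an adjacency matrix by a vector of global centrality scores in place of the usual degrees. The function name centrality_gso and the symmetric C^{-1/2} A C^{-1/2} form are assumptions for illustration, not the paper's exact definition.

    import numpy as np

    def centrality_gso(adj: np.ndarray, centrality: np.ndarray, eps: float = 1e-9) -> np.ndarray:
        # Symmetric normalisation by a global centrality vector (e.g. PageRank
        # or betweenness scores), analogous to the usual degree normalisation.
        c_inv_sqrt = np.diag(1.0 / np.sqrt(centrality + eps))
        return c_inv_sqrt @ adj @ c_inv_sqrt
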
- Multi-duplicated Characterization of Graph Structures using Information Gain Ratio for Graph Neural Networks [0.0]
Various graph neural networks (GNNs) have been proposed to solve node classification tasks in machine learning for graph data.
We propose multi-duplicated characterization of graph structures using information gain ratio (IGR) for GNNs (MSI-GNN)
We show that our MSI-GNN outperforms GCN, H2GCN, and GCNII in terms of average accuracies in benchmark graph datasets.
arXiv Detail & Related papers (2022-12-24T08:56:21Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs the ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
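
A minimal sketch of the one-parameter-per-relation idea mentioned in the entry above: each relation's adjacency matrix is scaled by its own learnable scalar (plus one scalar for self-loops) before a shared aggregation. The class name RelationWeightedAggregation and the dense-matrix formulation are illustrative assumptions, not the RE-GNN implementation.

    import torch
    import torch.nn as nn

    class RelationWeightedAggregation(nn.Module):
        """Illustrative aggregation with one learnable weight per relation."""

        def __init__(self, num_relations: int):
            super().__init__()
            self.rel_weight = nn.Parameter(torch.ones(num_relations))  # one scalar per edge type
            self.self_weight = nn.Parameter(torch.ones(1))             # scalar for self-loops

        def forward(self, adjs: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
            # adjs: (num_relations, n, n) per-relation adjacency matrices
            # h:    (n, d) node features
            mixed = (self.rel_weight.view(-1, 1, 1) * adjs).sum(dim=0)
            mixed = mixed + self.self_weight * torch.eye(h.size(0), device=h.device)
            return mixed @ h
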
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Graph Spectral Embedding using the Geodesic Betweeness Centrality [76.27138343125985]
We introduce the Graph Sylvester Embedding (GSE), an unsupervised graph representation of local similarity, connectivity, and global structure.
GSE uses the solution of the Sylvester equation to capture both network structure and neighborhood proximity in a single representation.
arXiv Detail & Related papers (2022-05-07T04:11:23Z)
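
The snippet below only illustrates the mechanics of the Sylvester equation mentioned in the GSE entry above: SciPy's solve_sylvester returns the X satisfying A X + X B = Q. The choice of a regularised Laplacian for A and B and the adjacency for Q is an assumption made so the toy system has a unique solution; it is not the construction used by GSE.

    import numpy as np
    from scipy.linalg import solve_sylvester

    adj = np.array([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
    lap = np.diag(adj.sum(axis=1)) - adj   # graph Laplacian
    reg = lap + np.eye(3)                  # shift so the system is non-singular
    X = solve_sylvester(reg, reg.T, adj)   # solves reg @ X + X @ reg.T = adj
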
- SLGCN: Structure Learning Graph Convolutional Networks for Graphs under Heterophily [5.619890178124606]
We propose structure learning graph convolutional networks (SLGCNs) to alleviate the heterophily issue from two aspects.
Specifically, we design an efficient-spectral-clustering with anchors (ESC-ANCH) approach to efficiently aggregate feature representations from all similar nodes.
Experimental results on a wide range of benchmark datasets illustrate that the proposed SLGCNs outperform state-of-the-art GNN counterparts.
arXiv Detail & Related papers (2021-05-28T13:00:38Z)
- Permutation-equivariant and Proximity-aware Graph Neural Networks with Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment the existing GNNs with node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z)
- Adaptive Universal Generalized PageRank Graph Neural Network [36.850433364139924]
Graph neural networks (GNNs) are designed to exploit both node features and graph topology as sources of evidence, but they do not optimally trade off their utility.
We introduce a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights.
GPR-GNN offers significant performance improvement compared to existing techniques on both synthetic and benchmark data.
arXiv Detail & Related papers (2020-06-14T19:27:39Z)
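
A compact sketch of the Generalized PageRank propagation described in the entry above: node representations are repeatedly propagated with a normalised adjacency and the K+1 intermediate representations are combined with learnable weights gamma_k. The class name GPRPropagation, the choice of normalisation and the uniform initialisation of the weights are simplifying assumptions.

    import torch
    import torch.nn as nn

    class GPRPropagation(nn.Module):
        """Illustrative GPR-style propagation with learnable step weights."""

        def __init__(self, num_steps: int = 10):
            super().__init__()
            # gamma_k: one learnable weight per propagation step (uniform init).
            self.gamma = nn.Parameter(torch.full((num_steps + 1,), 1.0 / (num_steps + 1)))
            self.num_steps = num_steps

        def forward(self, adj_norm: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
            # adj_norm: (n, n) normalised adjacency, h: (n, d) node features
            out = self.gamma[0] * h
            for k in range(1, self.num_steps + 1):
                h = adj_norm @ h                  # one propagation step
                out = out + self.gamma[k] * h
            return out
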
- Pointer Graph Networks [48.44209547013781]
Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront.
Pointer Graph Networks (PGNs) augment sets or graphs with additional inferred edges for improved model generalisation ability.
PGNs allow each node to dynamically point to another node, followed by message passing over these pointers.
arXiv Detail & Related papers (2020-06-11T12:52:31Z)
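
As a loose illustration of the pointer idea in the entry above, the function below lets each node point to its highest-scoring peer under a simple dot-product score and returns the resulting pointer edges for message passing. The function name pointer_edges, the scoring rule and the dense output are assumptions for illustration, not the PGN mechanism itself.

    import torch

    def pointer_edges(h: torch.Tensor) -> torch.Tensor:
        """Each node points to the node with the highest dot-product score."""
        scores = h @ h.t()                          # pairwise pointer scores
        scores.fill_diagonal_(float('-inf'))        # disallow self-pointers
        targets = scores.argmax(dim=1)              # one pointer per node
        n = h.size(0)
        edges = torch.zeros(n, n, device=h.device)
        edges[torch.arange(n, device=h.device), targets] = 1.0  # inferred pointer edges
        return edges
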
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
- Gated Graph Recurrent Neural Networks [176.3960927323358]
We introduce Graph Recurrent Neural Networks (GRNNs) as a general learning framework for graph processes.
To address the problem of vanishing gradients, we put forward GRNNs with three different gating mechanisms: time, node and edge gates.
The numerical results also show that GRNNs outperform GNNs and RNNs, highlighting the importance of taking both the temporal and graph structures of a graph process into account.
arXiv Detail & Related papers (2020-02-03T22:35:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.