Permute Me Softly: Learning Soft Permutations for Graph Representations
- URL: http://arxiv.org/abs/2110.01872v1
- Date: Tue, 5 Oct 2021 08:20:51 GMT
- Title: Permute Me Softly: Learning Soft Permutations for Graph Representations
- Authors: Giannis Nikolentzos and George Dasoulas and Michalis Vazirgiannis
- Abstract summary: Graph neural networks (GNNs) have emerged as a dominant paradigm for machine learning with graphs.
Research on GNNs has mainly focused on the family of message passing neural networks (MPNNs).
- Score: 26.581982471005674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have recently emerged as a dominant paradigm for
machine learning with graphs. Research on GNNs has mainly focused on the family
of message passing neural networks (MPNNs). Similar to the Weisfeiler-Leman
(WL) test of isomorphism, these models follow an iterative neighborhood
aggregation procedure to update vertex representations, and they next compute
graph representations by aggregating the representations of the vertices.
Although very successful, MPNNs have already been studied intensively in the
past few years. Thus, there is a need for novel architectures that will allow
research in the field to break away from MPNNs. In this paper, we propose a new
graph neural network model, called $\pi$-GNN, which learns a "soft" permutation
(i.e., doubly stochastic) matrix for each graph, and thus projects all graphs
into a common vector space. The learned matrices impose a "soft" ordering on
the vertices of the input graphs, and based on this ordering, the adjacency
matrices are mapped into vectors. These vectors can be fed into fully-connected
or convolutional layers to deal with supervised learning tasks. In the case of
large graphs, to make the model more efficient in terms of running time and
memory, we further relax the doubly stochastic matrices to row-stochastic
matrices. We empirically evaluate the model on graph classification and graph
regression datasets and show that it achieves performance competitive with
state-of-the-art models.
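To make the pipeline concrete, here is a minimal sketch of the idea in PyTorch. It is not the authors' implementation: `SoftPermutationGNN`, the linear position scorer, the Sinkhorn iteration count, and the zero-padding of all graphs to a common `max_nodes` are illustrative assumptions.

```python
import torch
import torch.nn as nn

def sinkhorn(log_scores: torch.Tensor, n_iters: int = 10) -> torch.Tensor:
    """Alternate row/column normalisation in log space so the result is
    approximately doubly stochastic. Keeping only the row step (dim=1)
    would yield a row-stochastic matrix, i.e. the cheaper relaxation the
    abstract mentions for large graphs."""
    for _ in range(n_iters):
        log_scores = log_scores - torch.logsumexp(log_scores, dim=1, keepdim=True)
        log_scores = log_scores - torch.logsumexp(log_scores, dim=0, keepdim=True)
    return log_scores.exp()

class SoftPermutationGNN(nn.Module):
    """Learn a 'soft permutation' per graph, softly reorder the adjacency
    matrix with it, flatten the result into a fixed-size vector, and feed
    that vector to fully-connected layers."""

    def __init__(self, max_nodes: int, feat_dim: int, hidden: int, n_classes: int):
        super().__init__()
        self.scorer = nn.Linear(feat_dim, max_nodes)   # node -> position scores
        self.mlp = nn.Sequential(
            nn.Linear(max_nodes * max_nodes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, adj: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) zero-padded adjacency; feats: (N, feat_dim)
        p = sinkhorn(self.scorer(feats))      # (N, N), ~doubly stochastic
        ordered = p.T @ adj @ p               # adjacency under the soft ordering
        return self.mlp(ordered.reshape(-1))  # common vector space -> logits

model = SoftPermutationGNN(max_nodes=32, feat_dim=8, hidden=64, n_classes=2)
logits = model(torch.zeros(32, 32), torch.randn(32, 8))
```

Because every graph is mapped through its own soft permutation into the same `max_nodes * max_nodes` space, the downstream layers can be ordinary fully-connected (or convolutional) layers rather than message-passing ones.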
Related papers
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the finetuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Weisfeiler and Leman go Hyperbolic: Learning Distance Preserving Node Representations [26.77596449192451]
Graph neural networks (GNNs) have emerged as a promising tool for solving machine learning problems on graphs.
In this paper, we define a distance function between nodes which is based on the hierarchy produced by the Weisfeiler-Leman (WL) algorithm.
We propose a model that learns representations which preserve those distances between nodes.
arXiv Detail & Related papers (2022-11-04T15:03:41Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Graph-level Neural Networks: Current Progress and Future Directions [61.08696673768116]
Graph-level Neural Networks (GLNNs, deep learning-based graph-level learning methods) have attracted attention owing to their strength in modeling high-dimensional data.
We propose a systematic taxonomy covering GLNNs built upon deep neural networks, graph neural networks, and graph pooling.
arXiv Detail & Related papers (2022-05-31T06:16:55Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose Neighbor2Seq, which transforms the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
- Graph Convolutional Networks with Dual Message Passing for Subgraph Isomorphism Counting and Matching [42.55928561326902]
Graph neural networks (GNNs) and message passing neural networks (MPNNs) have been proven to be expressive for subgraph structures.
We propose dual message passing neural networks (DMPNNs) to enhance substructure representation learning.
arXiv Detail & Related papers (2021-12-16T10:23:48Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Let's Agree to Degree: Comparing Graph Convolutional Networks in the Message-Passing Framework [5.835421173589977]
We cast neural networks defined on graphs as message-passing neural networks (MPNNs).
We consider two variants of MPNNs: anonymous MPNNs and degree-aware MPNNs.
We obtain lower and upper bounds on the distinguishing power of MPNNs in terms of the distinguishing power of the Weisfeiler-Lehman (WL) algorithm; see the WL sketch after this list.
arXiv Detail & Related papers (2020-04-06T12:14:00Z)
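Several entries above (and the $\pi$-GNN abstract itself) lean on the Weisfeiler-Leman (WL) test. For reference, here is a minimal sketch of 1-WL colour refinement; the adjacency-list representation and the small example graph are illustrative assumptions, not code from any of the papers listed.

```python
def wl_refinement(adj: dict[int, list[int]], n_iters: int = 3) -> list[dict[int, int]]:
    """1-WL colour refinement: each iteration relabels a node by its own
    colour together with the multiset of its neighbours' colours. The
    returned hierarchy of colourings is the object from which, e.g.,
    'Weisfeiler and Leman go Hyperbolic' derives its node distances."""
    colors = {v: 0 for v in adj}          # start from a uniform colouring
    hierarchy = [dict(colors)]
    for _ in range(n_iters):
        # signature = own colour + sorted multiset of neighbour colours
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        relabel: dict = {}
        colors = {}
        for v, sig in sigs.items():
            relabel.setdefault(sig, len(relabel))  # fresh compact colour ids
            colors[v] = relabel[sig]
        hierarchy.append(dict(colors))
    return hierarchy

# Hypothetical example: a triangle with a pendant vertex. After refinement,
# vertices 0 and 1 still share a colour, while 2 and 3 are distinguished.
print(wl_refinement({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]})[-1])
```

Two graphs that 1-WL cannot distinguish receive identical colour histograms, which is the ceiling on MPNN distinguishing power that "Let's Agree to Degree" quantifies.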