Learning to Count Isomorphisms with Graph Neural Networks
- URL: http://arxiv.org/abs/2302.03266v1
- Date: Tue, 7 Feb 2023 05:32:11 GMT
- Title: Learning to Count Isomorphisms with Graph Neural Networks
- Authors: Xingtong Yu, Zemin Liu, Yuan Fang, Xinming Zhang
- Abstract summary: Subgraph isomorphism counting is an important problem on graphs.
In this paper, we propose a novel graph neural network (GNN) called Count-GNN for subgraph isomorphism counting.
- Score: 16.455234748896157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subgraph isomorphism counting is an important problem on graphs, as many
graph-based tasks exploit recurring subgraph patterns. Classical methods
usually boil down to a backtracking framework that needs to navigate a huge
search space with prohibitive computational costs. Some recent studies resort
to graph neural networks (GNNs) to learn a low-dimensional representation for
both the query and input graphs, in order to predict the number of subgraph
isomorphisms on the input graph. However, typical GNNs employ a node-centric
message passing scheme that receives and aggregates messages on nodes, which is
inadequate in complex structure matching for isomorphism counting. Moreover, on
an input graph, the space of possible query graphs is enormous, and different
parts of the input graph will be triggered to match different queries. Thus,
expecting a fixed representation of the input graph to match diversely
structured query graphs is unrealistic. In this paper, we propose a novel GNN
called Count-GNN for subgraph isomorphism counting, to deal with the above
challenges. At the edge level, given that an edge is an atomic unit of encoding
graph structures, we propose an edge-centric message passing scheme, where
messages on edges are propagated and aggregated based on the edge adjacency to
preserve fine-grained structural information. At the graph level, we modulate
the input graph representation conditioned on the query, so that the input
graph can be adapted to each query individually to improve their matching.
Finally, we conduct extensive experiments on a number of benchmark datasets to
demonstrate the superior performance of Count-GNN.
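As a rough illustration of the two ideas the abstract describes -- edge-centric message passing and query-conditioned modulation of the input-graph representation -- here is a minimal PyTorch sketch. It is not the authors' implementation; the FiLM-style scale-and-shift form of the modulation, the layer sizes, and all names are assumptions.

```python
# Minimal sketch of the two ideas in the abstract, NOT the authors' code:
# (1) an edge-centric message-passing layer where messages live on directed
#     edges and are aggregated over adjacent edges, and
# (2) a query-conditioned, FiLM-style modulation of the input-graph
#     representation (the FiLM form is an assumption).
import torch
import torch.nn as nn

class EdgeCentricLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, edge_feat, edge_adj):
        # edge_feat: [E, d] one embedding per directed edge
        # edge_adj:  [E, E] binary matrix, edge_adj[i, j] = 1 if edge j feeds into edge i
        agg = edge_adj @ edge_feat                        # sum messages from adjacent edges
        return torch.relu(self.update(torch.cat([edge_feat, agg], dim=-1)))

class QueryConditionedModulation(nn.Module):
    # Scale and shift the input-graph representation conditioned on the query.
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Linear(dim, dim)
        self.beta = nn.Linear(dim, dim)

    def forward(self, graph_repr, query_repr):
        return self.gamma(query_repr) * graph_repr + self.beta(query_repr)

# Toy usage: 4 directed edges, hidden size 8.
d = 8
edge_feat = torch.randn(4, d)
edge_adj = torch.tensor([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]], dtype=torch.float)
edge_feat = EdgeCentricLayer(d)(edge_feat, edge_adj)
graph_repr = edge_feat.mean(dim=0)                        # pool edges into a graph vector
query_repr = torch.randn(d)                               # stand-in for a query-graph embedding
count_logit = QueryConditionedModulation(d)(graph_repr, query_repr).sum()
```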
Related papers
- Learning on Large Graphs using Intersecting Communities [13.053266613831447]
MPNNs iteratively update each node's representation in an input graph by aggregating messages from the node's neighbors.
MPNNs can quickly become prohibitively expensive on large graphs unless those graphs are very sparse.
We propose approximating the input graph as an intersecting community graph (ICG) -- a combination of intersecting cliques.
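One way to picture the approximation (a toy sketch under assumptions, not the ICG construction from that paper): fit soft community memberships so that a weighted sum of their outer products reconstructs the adjacency matrix.

```python
# Toy illustration only: approximate an adjacency matrix as a weighted sum of
# outer products of soft community memberships, i.e. a combination of
# intersecting (soft) cliques. Loss, optimizer, and K are arbitrary demo choices.
import torch

n, K = 6, 2
A = torch.tensor([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]], dtype=torch.float)

logits = torch.randn(n, K, requires_grad=True)   # soft memberships (pre-sigmoid)
weights = torch.ones(K, requires_grad=True)      # per-community affiliation strength
opt = torch.optim.Adam([logits, weights], lr=0.05)
off_diag = 1 - torch.eye(n)

for _ in range(500):
    B = torch.sigmoid(logits)                    # n x K membership matrix
    A_hat = (B * weights) @ B.T                  # sum_k w_k * b_k b_k^T
    loss = (((A_hat - A) ** 2) * off_diag).mean()  # fit off-diagonal structure only
    opt.zero_grad()
    loss.backward()
    opt.step()
```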
arXiv Detail & Related papers (2024-05-31T09:26:26Z)
- SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph Convolution and Pooling [25.555741218526464]
Graph neural networks (GNNs) have revolutionized the field of machine learning on non-Euclidean data such as graphs and networks.
We propose a concatenation-based graph convolution mechanism that injectively updates node representations.
We also design a novel graph pooling module, called WL-SortPool, to learn important subgraph patterns in a deep-learning manner.
arXiv Detail & Related papers (2024-04-21T13:11:59Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
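A naive, dense sketch of the idea as summarized above: score all node pairs, draw a differentiable soft adjacency with the Gumbel-Softmax reparameterization, and propagate over it. NodeFormer's actual operator is kernelized so it never materializes the N x N matrix; the sizes and names below are illustrative.

```python
# O(N^2) toy version of differentiable all-pair message passing; the kernelized
# trick that makes NodeFormer scale is deliberately omitted.
import torch
import torch.nn.functional as F

N, d = 5, 16
x = torch.randn(N, d)                        # node features
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))

logits = (x @ Wq) @ (x @ Wk).T / d ** 0.5    # all-pair compatibility scores
# Differentiable "soft adjacency": for each node, sample a distribution over
# all possible neighbours with the Gumbel-Softmax reparameterization.
soft_adj = F.gumbel_softmax(logits, tau=0.5, hard=False, dim=-1)
x_new = soft_adj @ (x @ Wv)                  # propagate signals between arbitrary nodes
```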
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
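To make the rewiring objective concrete, here is a brute-force sketch (not the FoSR algorithm, which uses a first-order approximation instead of eigen-recomputation): repeatedly add the non-edge that most increases the spectral gap of the normalized Laplacian.

```python
# Brute-force illustration of what "systematically adding edges" optimizes for.
import numpy as np

def spectral_gap(A):
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt     # normalized Laplacian
    return np.sort(np.linalg.eigvalsh(L))[1]             # second-smallest eigenvalue

def rewire(A, num_new_edges=1):
    A = A.copy()
    n = len(A)
    for _ in range(num_new_edges):
        best, best_gap = None, spectral_gap(A)
        for i in range(n):
            for j in range(i + 1, n):
                if A[i, j] == 0:
                    A[i, j] = A[j, i] = 1                # try this candidate edge
                    gap = spectral_gap(A)
                    if gap > best_gap:
                        best, best_gap = (i, j), gap
                    A[i, j] = A[j, i] = 0                # undo the trial
        if best is None:
            break
        i, j = best
        A[i, j] = A[j, i] = 1                            # commit the best edge
    return A

# Example: a path graph on 5 nodes (a classic bottleneck for message passing).
A = np.zeros((5, 5))
A[np.arange(4), np.arange(1, 5)] = 1
A += A.T
A_rewired = rewire(A, num_new_edges=1)
```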
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Interactive Visual Pattern Search on Graph Data via Graph Representation Learning [20.795511688640296]
We propose a visual analytics system GraphQ to support human-in-the-loop, example-based, subgraph pattern search.
To support fast, interactive queries, we use graph neural networks (GNNs) to encode a graph as a fixed-length latent vector representation.
We also propose a novel GNN for node-alignment called NeuroAlign to facilitate easy validation and interpretation of the query results.
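A generic sketch of the fixed-length-encoding retrieval idea (a placeholder encoder, not GraphQ's model): embed each stored graph into one vector and answer an example-based query by nearest-neighbour search.

```python
# Placeholder one-layer encoder: neighbourhood averaging + mean pooling gives a
# fixed-length vector per graph; queries are matched by cosine similarity.
import torch

def encode(adj, feats, W):
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    h = torch.relu(((adj @ feats) / deg) @ W)
    return h.mean(dim=0)                                  # fixed-length graph vector

d_in, d_out = 8, 16
W = torch.randn(d_in, d_out)
database = [(torch.randint(0, 2, (n, n)).float(), torch.randn(n, d_in)) for n in (4, 6, 5)]
index = torch.stack([encode(a, x, W) for a, x in database])   # one vector per stored graph

query = encode(torch.ones(3, 3) - torch.eye(3), torch.randn(3, d_in), W)
scores = torch.cosine_similarity(index, query.unsqueeze(0), dim=1)
best_match = int(scores.argmax())                              # most similar stored graph
```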
arXiv Detail & Related papers (2022-02-18T22:30:28Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
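To illustrate the feature-smoothing view mentioned above, here is an SGC/SIGN-style sketch (a generic rendering of that idea, not this paper's heterogeneous method): precompute a few rounds of neighbour averaging and train a plain MLP on the result; the heterogeneous extension would repeat this per relation type.

```python
# Graph-free preprocessing: smooth features over neighbours, then train an MLP.
import torch

def smooth(adj, feats, hops=2):
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    for _ in range(hops):
        feats = (adj @ feats) / deg                  # one round of neighbour averaging
    return feats

n, d = 6, 8
adj = (torch.rand(n, n) < 0.4).float()
adj = ((adj + adj.T) > 0).float()                    # make the toy graph symmetric
x_smoothed = smooth(adj, torch.randn(n, d))          # precomputed once, no GNN needed afterwards
mlp = torch.nn.Sequential(torch.nn.Linear(d, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
logits = mlp(x_smoothed)                             # per-node class scores
```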
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In the conventional formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
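A minimal sketch of that reformulation using networkx (the GNN on top is omitted): build the line graph and locate the node that corresponds to a candidate link.

```python
# Every node of the line graph corresponds to an edge of the original graph, so
# scoring a candidate link becomes classifying the corresponding line-graph node.
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0)])
candidate = (0, 2)                       # link whose existence we want to predict
G.add_edge(*candidate)                   # temporarily insert the candidate link

L = nx.line_graph(G)                     # nodes of L are edges of G
lg_node = next(n for n in L.nodes if set(n) == set(candidate))
neighbours = list(L.neighbors(lg_node))  # edges of G that share an endpoint with the candidate
```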
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
- Inverse Graph Identification: Can We Identify Node Labels Given Graph Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that performs node-level message passing with a Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)