Scalable Pattern Matching in Computation Graphs
- URL: http://arxiv.org/abs/2402.13065v1
- Date: Tue, 20 Feb 2024 15:02:24 GMT
- Title: Scalable Pattern Matching in Computation Graphs
- Authors: Luca Mondada and Pablo Andrés-Martínez
- Abstract summary: Graph rewriting is a popular tool for optimisation and modification of graph expressions.
We propose a new solution to pattern matching in port graphs.
Our algorithm offers a 20x speedup over current implementations on a dataset of 10,000 real-world patterns.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph rewriting is a popular tool for the optimisation and modification of
graph expressions in domains such as compilers, machine learning and quantum
computing. The underlying data structures are often port graphs - graphs with
labels at edge endpoints. These port labels greatly simplify pattern matching.
A pre-requisite for graph rewriting is the ability to find subgraphs of the
input that match known graph identities: the pattern matching problem. We
propose a new solution to pattern matching in port graphs. Its novelty lies in
the use of a pre-computed data structure that makes the pattern matching
runtime complexity independent of the number of patterns. The runtime is bound
by the maximum width $w$ and depth $d$ of the patterns, as well as the input
graph size $|G|$ as $O(|G| \cdot c^w / w^{1/2} \cdot d)$ with $c = 6.75$. This
offers a significant advantage over existing solutions for use cases where
patterns have low width and the set of patterns is large and fixed ahead of
time.
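To make the stated bound concrete, the sketch below (an illustrative calculation, not part of the paper's implementation) plugs sample values into $O(|G| \cdot c^w / w^{1/2} \cdot d)$ with $c = 6.75$. The constant hidden by big-O is ignored, so absolute values are meaningless; only the growth behaviour with width and depth is.

```python
import math

# Illustrative evaluation of the runtime bound from the abstract:
# O(|G| * c^w / sqrt(w) * d) with c = 6.75. Note the absence of any
# dependence on the number of patterns.
def matching_bound(graph_size: int, width: int, depth: int, c: float = 6.75) -> float:
    return graph_size * (c ** width) / math.sqrt(width) * depth

# Growing the pattern set leaves the bound unchanged; growing the
# pattern width w drives the exponential term.
narrow = matching_bound(graph_size=1000, width=3, depth=10)
wide = matching_bound(graph_size=1000, width=5, depth=10)
print(wide / narrow)  # exponential in the width difference
```

The linear dependence on $|G|$ and exponential dependence on $w$ explain why the approach targets use cases with many patterns of low width.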
In the context of quantum circuits, pattern width can be limited to the qubit
count. Quantum superoptimisers may apply thousands of rewrite rules to circuits
with fewer than 5 qubits, making them an ideal use case. We provide benchmarks
showing that our algorithm offers a 20x speedup over current implementations on
a dataset of 10,000 real-world patterns describing quantum circuits.
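The key property claimed here, per-input matching cost that does not grow with the number of stored patterns, is the same one that pre-indexed string matchers achieve. The sketch below is a loose string analogy, not the paper's port-graph algorithm: the fixed pattern set is indexed once ahead of time, so each scan step costs the same whether three patterns or three thousand are stored.

```python
# Loose string analogy (NOT the paper's port-graph algorithm):
# pre-index a large, fixed pattern set so that matching time is
# independent of how many patterns are stored.
def build_index(patterns):
    # Pre-computation, done once for the fixed pattern set.
    # All patterns share one length to keep the sketch minimal.
    length = len(next(iter(patterns)))
    assert all(len(p) == length for p in patterns)
    return frozenset(patterns), length

def find_matches(text, index):
    patterns, length = index
    hits = []
    for i in range(len(text) - length + 1):
        window = text[i:i + length]
        if window in patterns:  # O(1) expected, regardless of |patterns|
            hits.append((i, window))
    return hits

index = build_index({"abc", "cab"})
print(find_matches("abcab", index))  # [(0, 'abc'), (2, 'cab')]
```

The paper's contribution is, roughly, achieving an analogous pre-computation for port-graph patterns, where overlapping subgraph shapes make the indexing step far less obvious than for strings.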
Related papers
- Representation Learning for Frequent Subgraph Mining [64.32430554934021]
Subgraph Pattern Miner (SPMiner) is a novel neural approach for finding frequent subgraphs in a large target graph.
For 5- and 6-node motifs, SPMiner can almost perfectly identify the most frequent motifs while being 100x faster than exact enumeration methods.
SPMiner can also reliably identify frequent 10-node motifs, which is well beyond the size limit of exact enumeration approaches.
arXiv Detail & Related papers (2024-02-22T08:11:22Z)
- SAT-Based Algorithms for Regular Graph Pattern Matching [40.86962847131912]
We propose a generalization of graph isomorphism that allows one to check complex structural properties.
This specification is given in the form of a Regular Graph Pattern (ReGaP), a special type of graph inspired by regular expressions.
We propose a SAT-based algorithm for checking if a target graph matches a given ReGaP.
arXiv Detail & Related papers (2023-12-15T18:12:44Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- The Subgraph Isomorphism Problem for Port Graphs and Quantum Circuits [0.0]
We give an algorithm to perform pattern matching in quantum circuits for many patterns simultaneously.
In the case of quantum circuits, we can express the bound obtained in terms of the maximum number of qubits.
arXiv Detail & Related papers (2023-02-13T22:09:02Z)
- Learning to Count Isomorphisms with Graph Neural Networks [16.455234748896157]
Subgraph isomorphism counting is an important problem on graphs.
In this paper, we propose a novel graph neural network (GNN) called Count-GNN for subgraph isomorphism counting.
arXiv Detail & Related papers (2023-02-07T05:32:11Z)
- From Bit-Parallelism to Quantum String Matching for Labelled Graphs [0.0]
Many problems that can be solved in quadratic time have bit-parallel speed-ups with factor $w$, where $w$ is the computer word size.
We show that a simple bit-parallel algorithm on such restricted family of graphs can indeed be converted into a realistic quantum algorithm.
arXiv Detail & Related papers (2023-02-06T15:14:34Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node corresponding-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer by outperforming all state-of-the-art parametric baselines with significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose the Neighbor2Seq to transform the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
- Random Graph Matching with Improved Noise Robustness [2.294014185517203]
We propose a new algorithm for graph matching under probabilistic models.
Our algorithm recovers the underlying matching with high probability when $\alpha \le 1 / (\log \log n)^C$.
This improves the condition $\alpha \le 1 / (\log n)^C$ achieved in previous work.
arXiv Detail & Related papers (2021-01-28T02:39:27Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
- Scalable Deep Generative Modeling for Sparse Graphs [105.60961114312686]
Existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix.
We develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix.
During training this autoregressive model can be parallelized with $O(\log n)$ synchronization stages.
arXiv Detail & Related papers (2020-06-28T04:37:57Z)
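The $\Omega(n^2)$ versus sparse contrast in the last summary can be made concrete with a simple count: a dense adjacency matrix stores $n^2$ entries regardless of how many edges exist, while an edge list stores one entry per edge. The snippet below is a generic illustration of that gap, not BiGG's actual model.

```python
# Generic illustration of the sparsity gap motivating BiGG (not its
# actual model): entries stored by a dense adjacency matrix vs. an
# edge list for a sparse graph.
def dense_entries(n: int) -> int:
    return n * n            # Omega(n^2), regardless of edge count

def sparse_entries(edges) -> int:
    return len(edges)       # O(m) for m edges

# A path graph on 1000 nodes: 999 edges vs 1,000,000 matrix entries.
n = 1000
path_edges = [(i, i + 1) for i in range(n - 1)]
print(dense_entries(n), sparse_entries(path_edges))  # 1000000 999
```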
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.