Equivalence of Dataflow Graphs via Rewrite Rules Using a
Graph-to-Sequence Neural Model
- URL: http://arxiv.org/abs/2002.06799v2
- Date: Thu, 3 Jun 2021 21:40:34 GMT
- Authors: Steve Kommrusch, Théo Barollet, Louis-Noël Pouchet
- Abstract summary: We formalize the problem of equivalence between two programs as finding a set of semantics-preserving rewrite rules from one into the other.
We then develop the first graph-to-sequence neural network system for program equivalence.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we target the problem of provably computing the equivalence
between two programs represented as dataflow graphs. To this end, we formalize
the problem of equivalence between two programs as finding a set of
semantics-preserving rewrite rules from one into the other, such that after the
rewrite the two programs are structurally identical, and therefore trivially
equivalent. We then develop the first graph-to-sequence neural network system
for program equivalence, trained to produce such rewrite sequences from a
carefully crafted automatic example generation algorithm. We extensively
evaluate our system on a rich multi-type linear algebra expression language,
using arbitrary combinations of 100+ graph-rewriting axioms of equivalence. Our
system outputs via inference a correct rewrite sequence for 96% of the 10,000
program pairs held out for testing, using 30-term programs. In all cases, the
validity of the produced sequence, and therefore the provable assertion of
program equivalence, is checkable in negligible time.
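The abstract's notion of proof-by-rewriting can be sketched concretely: two expressions are provably equivalent once a sequence of semantics-preserving rewrites turns one into the other, and the sequence itself is a checkable certificate. The following is a minimal illustrative sketch, not the paper's system; the two axioms and the `prove_equivalent` helper are hypothetical stand-ins for the 100+ axioms used in the paper.

```python
from collections import deque

def rewrites(expr):
    """Yield (rule_name, new_expr) for every single rewrite applicable at the
    root or inside a subexpression. Expressions are atoms or binary tuples,
    e.g. ('+', 'a', 'b'); only two toy axioms are encoded here."""
    if not isinstance(expr, tuple):
        return
    op, lhs, rhs = expr
    if op in ('+', '*'):                       # commutativity axiom
        yield (f'comm-{op}', (op, rhs, lhs))
    if op == '*' and isinstance(rhs, tuple) and rhs[0] == '+':
        # distributivity axiom: a*(b+c) -> a*b + a*c
        yield ('dist', ('+', ('*', lhs, rhs[1]), ('*', lhs, rhs[2])))
    for i, child in ((1, lhs), (2, rhs)):      # recurse into subexpressions
        for name, new_child in rewrites(child):
            e = list(expr)
            e[i] = new_child
            yield (name, tuple(e))

def prove_equivalent(src, dst, max_steps=6):
    """BFS for a rewrite sequence turning src into dst.
    Returns the list of rule names (the equivalence proof) or None."""
    frontier, seen = deque([(src, [])]), {src}
    while frontier:
        expr, path = frontier.popleft()
        if expr == dst:
            return path
        if len(path) == max_steps:
            continue
        for name, nxt in rewrites(expr):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

proof = prove_equivalent(('*', 'a', ('+', 'b', 'c')),
                         ('+', ('*', 'a', 'b'), ('*', 'a', 'c')))
print(proof)  # → ['dist']
```

Checking a returned sequence only requires replaying each rewrite, which is why validity is decidable in negligible time even when the sequence comes from an unreliable neural model.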
Related papers
- Maximum Independent Set: Self-Training through Dynamic Programming
This work presents a graph neural network (GNN) framework for solving the maximum independent set (MIS) problem, inspired by dynamic programming (DP).
We propose a DP-like recursive algorithm based on GNNs that firstly constructs two smaller sub-graphs, predicts the one with the larger MIS, and then uses it in the next recursion call.
Annotating comparisons of different graphs concerning their MIS size leads to a self-training process that results in more accurate self-annotation of the comparisons and vice versa.
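The DP recursion this framework imitates is the textbook one: for any vertex v, MIS(G) = max(MIS(G − v), 1 + MIS(G − N[v])). A minimal exponential-time sketch of that recursion, for illustration only (the paper's GNN learns to predict which branch has the larger MIS instead of evaluating both):

```python
def mis_size(adj):
    """Size of a maximum independent set; adj maps each vertex to its
    neighbour set. Classic branching recursion, exponential time."""
    if not adj:
        return 0
    v = next(iter(adj))
    # sub-graph 1: v excluded from the independent set
    without_v = {u: nbrs - {v} for u, nbrs in adj.items() if u != v}
    # sub-graph 2: v included, so v and all its neighbours are removed
    closed = adj[v] | {v}
    without_nbhd = {u: nbrs - closed
                    for u, nbrs in adj.items() if u not in closed}
    # the GNN replaces this max() with a learned comparison of the branches
    return max(mis_size(without_v), 1 + mis_size(without_nbhd))

# Path a-b-c-d: a maximum independent set (e.g. {a, c}) has size 2.
path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(mis_size(path))  # → 2
```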
arXiv Detail & Related papers (2023-10-28T10:58:25Z)
- Hierarchical Phrase-based Sequence-to-Sequence Learning
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
Our approach trains two models: a discriminative parser based on a bracketing grammar whose derivation tree hierarchically aligns source and target phrases, and a neural seq2seq model that learns to translate the aligned phrases one-by-one.
arXiv Detail & Related papers (2022-11-15T05:22:40Z)
- Reinforcement Learning Based Query Vertex Ordering Model for Subgraph Matching
Subgraph matching algorithms enumerate all embeddings of a query graph in a data graph G.
The matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms.
In this paper, for the first time we apply the Reinforcement Learning (RL) and Graph Neural Networks (GNNs) techniques to generate the high-quality matching order for subgraph matching algorithms.
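The role of the matching order can be seen in a plain backtracking matcher: the order fixes which query vertex is mapped next, and a poor order inflates the number of partial mappings explored. A minimal sketch (the `count_embeddings` helper is illustrative, not the paper's RL model):

```python
def count_embeddings(q_adj, g_adj, order):
    """Count embeddings of query graph q in data graph G by extending a
    partial injective mapping one query vertex at a time, in `order`."""
    def extend(i, mapping):
        if i == len(order):
            return 1                      # all query vertices mapped
        u = order[i]
        total = 0
        for v in g_adj:
            if v in mapping.values():     # keep the mapping injective
                continue
            # every already-mapped neighbour of u must map to a neighbour of v
            if all(mapping[w] in g_adj[v] for w in q_adj[u] if w in mapping):
                mapping[u] = v
                total += extend(i + 1, mapping)
                del mapping[u]
        return total
    return extend(0, {})

# Query: a triangle; data graph: a triangle {x, y, z} with a pendant vertex w.
q = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
g = {'x': {'y', 'z'}, 'y': {'x', 'z'}, 'z': {'x', 'y', 'w'}, 'w': {'z'}}
print(count_embeddings(q, g, order=[0, 1, 2]))  # → 6 ordered embeddings
```

Every order yields the same count, but orders differ sharply in how many dead-end partial mappings they visit, which is the cost the RL model is trained to minimize.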
arXiv Detail & Related papers (2022-01-25T00:10:03Z)
- Binary Diffing as a Network Alignment Problem via Belief Propagation
We introduce a new formulation of this problem as a particular instance of a graph edit problem over the call graphs of the programs.
We show that this formulation is equivalent to a network alignment problem.
We implement a prototype of our method, called QBinDiff, and present an extensive evaluation which shows that our approach outperforms state-of-the-art diffing tools.
arXiv Detail & Related papers (2021-12-31T07:54:11Z)
- Self-Supervised Learning to Prove Equivalence Between Straight-Line Programs via Rewrite Rules
Two programs are equivalent if there exists a sequence of application of rewrite rules that leads to rewriting one program into the other.
We propose a neural network architecture based on a transformer model to generate proofs of equivalence between program pairs.
Our system, S4Eq, achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent programs.
arXiv Detail & Related papers (2021-09-22T01:37:08Z)
- Enforcing Consistency in Weakly Supervised Semantic Parsing
We explore the use of consistency between the output programs for related inputs to reduce the impact of spurious programs.
We find that a more consistent formalism leads to improved model performance even without consistency-based training.
arXiv Detail & Related papers (2021-07-13T03:48:04Z)
- Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm
We estimate the competitive ratio of the simplest algorithm, GREEDY, by approximating some relevant discrete processes by their continuous counterparts.
We prove that, quite surprisingly, GREEDY can have better performance guarantees than RANKING, another celebrated algorithm for online matching.
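GREEDY itself is simple to state: match each arriving online vertex to any still-free neighbour, never revisiting past decisions. A minimal deterministic sketch (illustrative only; the paper's contribution is the non-asymptotic analysis of this algorithm on sparse random graphs, not the algorithm itself):

```python
def greedy_matching(arrivals):
    """Online bipartite matching: arrivals is a list of neighbour sets over
    the offline side. Each arrival is matched to its first free neighbour.
    Returns the list of (arrival_index, offline_vertex) pairs."""
    matched = set()
    pairs = []
    for t, nbrs in enumerate(arrivals):
        for u in sorted(nbrs):
            if u not in matched:
                matched.add(u)          # commit irrevocably
                pairs.append((t, u))
                break
    return pairs

# Adversarial instance: GREEDY grabs offline vertex 0 for the first arrival,
# stranding the second, while the offline optimum matches both
# (arrival 0 -> 1, arrival 1 -> 0).
print(greedy_matching([{0, 1}, {0}]))  # → [(0, 0)]
```

Worst-case instances like this one bound GREEDY's competitive ratio; the paper's point is that on sparse random graphs its typical behaviour is much better.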
arXiv Detail & Related papers (2021-07-02T12:18:19Z)
- Structured Reordering for Modeling Latent Alignments in Sequence Transduction
We present an efficient dynamic programming algorithm performing exact marginal inference of separable permutations.
The resulting seq2seq model exhibits better systematic generalization than standard models on synthetic problems and NLP tasks.
arXiv Detail & Related papers (2021-06-06T21:53:54Z)
- Proving Equivalence Between Complex Expressions Using Graph-to-Sequence Neural Models
We develop a graph-to-sequence neural network system for program equivalence.
We extensively evaluate our system on a rich multi-type linear algebra expression language.
Our machine learning system guarantees correctness for all true negatives, and ensures zero false positives by design.
arXiv Detail & Related papers (2021-06-01T20:45:42Z)
- Learning Differentiable Programs with Admissible Neural Heuristics
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
arXiv Detail & Related papers (2020-07-23T16:07:39Z)
- Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading
Lip-reading aims to infer the speech content from the lip movement sequence.
The traditional learning process of seq2seq models suffers from two problems.
We propose a novel pseudo-convolutional policy gradient (PCPG) based method to address these two problems.
arXiv Detail & Related papers (2020-03-09T09:12:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.