When GNNs Met a Word Equations Solver: Learning to Rank Equations (Extended Technical Report)
- URL: http://arxiv.org/abs/2506.23784v1
- Date: Mon, 30 Jun 2025 12:24:24 GMT
- Title: When GNNs Met a Word Equations Solver: Learning to Rank Equations (Extended Technical Report)
- Authors: Parosh Aziz Abdulla, Mohamed Faouzi Atig, Julie Cailler, Chencheng Liang, Philipp Rümmer,
- Abstract summary: The use of Graph Neural Networks (GNNs) for ranking word equations before and during the solving process is explored. Three approaches to adapt a multi-classification task to the problem of ranking equations are proposed. The training of the GNN is done with the help of minimum unsatisfiable subsets (MUSes) of word equations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nielsen transformation is a standard approach for solving word equations: by repeatedly splitting equations and applying simplification steps, equations are rewritten until a solution is reached. When solving a conjunction of word equations in this way, the performance of the solver will depend considerably on the order in which equations are processed. In this work, the use of Graph Neural Networks (GNNs) for ranking word equations before and during the solving process is explored. For this, a novel graph-based representation for word equations is presented, preserving global information across conjuncts, enabling the GNN to have a holistic view during ranking. To handle the variable number of conjuncts, three approaches to adapt a multi-classification task to the problem of ranking equations are proposed. The training of the GNN is done with the help of minimum unsatisfiable subsets (MUSes) of word equations. The experimental results show that, compared to state-of-the-art string solvers, the new framework solves more problems in benchmarks where each variable appears at most once in each equation.
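The Nielsen transformation described in the abstract can be illustrated with a minimal, depth-bounded sketch for a single word equation (symbol conventions and function names here are assumptions for illustration, not the paper's implementation): uppercase symbols stand for variables, lowercase symbols for letters, and the solver repeatedly rewrites the first symbols of each side, case-splitting when a variable meets another symbol.

```python
def is_var(sym):
    # convention for this sketch: uppercase symbols are variables,
    # lowercase symbols are letters of the alphabet
    return sym.isupper()

def substitute(word, var, repl):
    # replace every occurrence of variable `var` in `word` by `repl`
    out = []
    for s in word:
        out.extend(repl if s == var else [s])
    return tuple(out)

def solve(lhs, rhs, depth=30):
    """Depth-bounded Nielsen-style search for one word equation.
    True means a solution was found; False means none was found
    within the depth bound (not a proof of unsatisfiability)."""
    if lhs == rhs:
        return True
    if depth == 0:
        return False
    if not lhs or not rhs:
        # one side empty: solvable iff the other side contains only
        # variables, which can all be assigned the empty word
        return all(is_var(s) for s in lhs + rhs)
    a, b = lhs[0], rhs[0]
    if a == b:
        # identical first symbols cancel on both sides
        return solve(lhs[1:], rhs[1:], depth - 1)
    if not is_var(a) and not is_var(b):
        return False  # two distinct letters can never match
    branches = []
    for x, other in ((a, b), (b, a)):
        if is_var(x):
            # case split: x := empty word, or x := other . x
            branches.append((substitute(lhs, x, []),
                             substitute(rhs, x, [])))
            branches.append((substitute(lhs, x, [other, x]),
                             substitute(rhs, x, [other, x])))
    return any(solve(l, r, depth - 1) for l, r in branches)
```

For example, `solve(("X", "b"), ("a", "b"))` succeeds via the branch `X := aX` followed by `X := ε`. When several such equations are conjoined, the order in which they are processed shapes the search tree, which is exactly the ranking decision the paper delegates to a GNN.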
Related papers
- Explicit Solution Equation for Every Combinatorial Problem via Tensor Networks: MeLoCoToN
We show that every computational problem has an exact explicit equation that returns its solution.
We present a method to obtain an equation that exactly solves any problem, covering inversion, constraint satisfaction, and optimization.
arXiv Detail & Related papers (2025-02-09T18:16:53Z)
- Guiding Word Equation Solving using Graph Neural Networks (Extended Technical Report)
This paper proposes a Graph Neural Network-guided algorithm for solving word equations.
The algorithm iteratively rewrites the first terms of each side of an equation, giving rise to a tree-like search space.
Experiments are conducted on artificial and real-world benchmarks.
arXiv Detail & Related papers (2024-11-19T14:15:34Z)
- A Neural Rewriting System to Solve Algorithmic Problems
We propose a modular architecture designed to learn a general procedure for solving nested mathematical formulas.
Inspired by rewriting systems, a classic framework in symbolic artificial intelligence, we include in the architecture three specialized and interacting modules.
We benchmark our system against the Neural Data Router, a recent model specialized for systematic generalization, and a state-of-the-art large language model (GPT-4) probed with advanced prompting strategies.
arXiv Detail & Related papers (2024-02-27T10:57:07Z)
- Neural Time-Reversed Generalized Riccati Equation
Hamiltonian equations offer an interpretation of optimality through auxiliary variables known as costates.
This paper introduces a novel neural-based approach to optimal control, with the aim of working forward-in-time.
arXiv Detail & Related papers (2023-12-14T19:29:37Z)
- ComSearch: Equation Searching with Combinatorial Strategy for Solving Math Word Problems with Weak Supervision
We propose a novel search algorithm with the strategy ComSearch, which compresses the search space by excluding mathematically equivalent equations.
We investigate the noise in pseudo labels that encode wrong mathematical logic, which we refer to as the false-matching problem, and propose a ranking model to denoise the pseudo labels.
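Pruning mathematically equivalent candidate equations can be sketched with a simple fingerprinting pass (an illustrative stand-in, not ComSearch's actual compression strategy): candidates that evaluate identically on a few fixed sample assignments are treated as equivalent, and only one representative is kept.

```python
def fingerprint(expr, variables, samples):
    """Evaluate expr on fixed sample assignments; equal fingerprints
    indicate (with high probability) equivalent candidate equations."""
    fp = []
    for vals in samples:
        env = dict(zip(variables, vals))
        fp.append(round(eval(expr, {}, env), 9))
    return tuple(fp)

def dedupe(candidates, variables):
    # arbitrary sample points; more points lower the collision risk
    samples = [(2, 3), (5, 7), (11, 13)]
    seen, kept = set(), []
    for cand in candidates:
        fp = fingerprint(cand, variables, samples)
        if fp not in seen:
            seen.add(fp)
            kept.append(cand)
    return kept
```

For instance, `dedupe(["a+b", "b+a", "a*b"], ["a", "b"])` drops `"b+a"` because it agrees with `"a+b"` on every sample point, shrinking the space a downstream ranker has to score.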
arXiv Detail & Related papers (2022-10-13T13:25:22Z)
- Seeking Diverse Reasoning Logic: Controlled Equation Expression Generation for Solving Math Word Problems
We propose a controlled equation generation solver by leveraging a set of control codes to guide the model.
Our method universally improves the performance on single-unknown (Math23K) and multiple-unknown (DRAW1K, HMWP) benchmarks.
arXiv Detail & Related papers (2022-09-21T12:43:30Z)
- Learning to Solve PDE-constrained Inverse Problems with Graph Networks
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z)
- Message Passing Neural PDE Solvers
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Solving Arithmetic Word Problems by Scoring Equations with Recursive Neural Networks
Recent work automatically extracts and ranks candidate solution equations that provide the answer to arithmetic word problems.
In this work, we explore novel approaches to score such candidate solution equations using Tree-RNN configurations.
Our proposed method consists of transforming the mathematical expression of the equation into an expression tree.
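The transformation of an equation into an expression tree can be sketched with Python's standard `ast` module (an illustrative stand-in, not the paper's implementation); a Tree-RNN would then traverse this tree bottom-up, replacing the arithmetic in `evaluate` below with a learned composition function that produces a score instead of a value.

```python
import ast

OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def to_tree(expr):
    """Parse an arithmetic expression into nested (op, left, right)
    tuples: the structure a Tree-RNN consumes bottom-up."""
    return _convert(ast.parse(expr, mode="eval").body)

def _convert(node):
    if isinstance(node, ast.BinOp):
        return (OPS[type(node.op)], _convert(node.left), _convert(node.right))
    if isinstance(node, ast.Constant):
        return node.value        # numeric leaf
    if isinstance(node, ast.Name):
        return node.id           # unknown/variable leaf
    raise ValueError("unsupported expression node")

def evaluate(tree):
    # recursive bottom-up traversal; a Tree-RNN replaces this
    # arithmetic with a learned vector-composition function
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    l, r = evaluate(left), evaluate(right)
    return {"+": l + r, "-": l - r, "*": l * r, "/": l / r}[op]
```

For example, `to_tree("2*(3+4)")` yields `("*", 2, ("+", 3, 4))`, and `evaluate` folds it to `14`; the same recursion order dictates how a Tree-RNN composes node embeddings to score a candidate equation.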
arXiv Detail & Related papers (2020-09-11T19:48:42Z)
- Boosting Data Reduction for the Maximum Weight Independent Set Problem Using Increasing Transformations
We introduce new generalized data reduction and transformation rules for the maximum weight independent set problem.
Surprisingly, these so-called increasing transformations can simplify the problem and also open up the reduction space to yield even smaller irreducible graphs later in the algorithm.
Our algorithm computes significantly smaller irreducible graphs on all except one instance, solves more instances to optimality than previously possible, is up to two orders of magnitude faster than the best state-of-the-art solver, and finds higher-quality solutions than solvers DynWVC and HILS.
arXiv Detail & Related papers (2020-08-12T08:52:50Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Symbolic Regression using Mixed-Integer Nonlinear Optimization
The Symbolic Regression (SR) problem is a hard problem in machine learning.
We propose a hybrid algorithm that combines mixed-integer nonlinear optimization with explicit enumeration.
We show that our algorithm is competitive, for some synthetic data sets, with a state-of-the-art SR software and a recent physics-inspired method called AI Feynman.
arXiv Detail & Related papers (2020-06-11T20:53:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.