ComSearch: Equation Searching with Combinatorial Strategy for Solving
Math Word Problems with Weak Supervision
- URL: http://arxiv.org/abs/2210.07017v1
- Date: Thu, 13 Oct 2022 13:25:22 GMT
- Title: ComSearch: Equation Searching with Combinatorial Strategy for Solving
Math Word Problems with Weak Supervision
- Authors: Qianying Liu, Wenyu Guan, Jianhao Shen, Fei Cheng, Sadao Kurohashi
- Abstract summary: We propose a novel search algorithm with combinatorial strategy \textbf{ComSearch}, which can compress the search space by excluding mathematically equivalent equations.
We investigate the noise in pseudo labels that hold wrong mathematical logic, which we refer to as the \textit{false-matching} problem, and propose a ranking model to denoise the pseudo labels.
- Score: 21.249411371568236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous studies have introduced a weakly-supervised paradigm for solving
math word problems requiring only the answer value annotation. While these
methods search for correct value equation candidates as pseudo labels, they
search among a narrow sub-space of the enormous equation space. To address this
problem, we propose a novel search algorithm with combinatorial strategy
\textbf{ComSearch}, which can compress the search space by excluding
mathematically equivalent equations. The compression allows the searching
algorithm to enumerate all possible equations and obtain high-quality data. We
investigate the noise in the pseudo labels that hold wrong mathematical logic,
which we refer to as the \textit{false-matching} problem, and propose a ranking
model to denoise the pseudo labels. Our approach holds a flexible framework to
utilize two existing supervised math word problem solvers to train pseudo
labels, and both achieve state-of-the-art performance in the weak supervision
task.
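To make the search idea concrete, the following Python sketch (not the authors' released code) illustrates the general approach under simplifying assumptions: candidate equations over a problem's quantities are enumerated exhaustively with the binary operators +, -, *, /, mathematically equivalent candidates are collapsed to one representative, and candidates whose value matches the annotated answer are kept as pseudo labels. The equivalence check here evaluates each expression template at a few shared random rational points, which is only a probabilistic stand-in for ComSearch's combinatorial compression, and all function and variable names are illustrative.

import itertools
import random
from fractions import Fraction

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else None,
}

def expressions(slots):
    # Yield (expression string, evaluation function) for every binary
    # expression tree that uses each quantity slot in `slots` exactly once.
    # Simplification: the paper's search space is more general than this.
    if len(slots) == 1:
        i = slots[0]
        yield f"x{i}", (lambda vals, i=i: vals[i])
        return
    for k in range(1, len(slots)):
        for left in itertools.combinations(slots, k):
            right = [s for s in slots if s not in left]
            for ls, lf in expressions(list(left)):
                for rs, rf in expressions(right):
                    for op, fn in OPS.items():
                        def ev(vals, lf=lf, rf=rf, fn=fn):
                            a, b = lf(vals), rf(vals)
                            if a is None or b is None:
                                return None
                            return fn(a, b)
                        yield f"({ls} {op} {rs})", ev

def signature(eval_fn, n_slots, trials=4, seed=0):
    # Probabilistic equivalence key: evaluate the expression template at a few
    # random rational points shared by all candidates, so equivalent templates
    # (e.g. x0 + x1 and x1 + x0) collapse to the same key.
    rng = random.Random(seed)
    points = [[Fraction(rng.randint(2, 50), rng.randint(1, 7))
               for _ in range(n_slots)] for _ in range(trials)]
    return tuple(eval_fn(p) for p in points)

def search_pseudo_labels(numbers, answer):
    # Enumerate all candidate equations, skip mathematically equivalent
    # duplicates, and keep those whose value equals the annotated answer.
    numbers = [Fraction(n).limit_denominator() for n in numbers]
    answer = Fraction(answer).limit_denominator()
    seen, pseudo_labels = set(), []
    for expr, ev in expressions(list(range(len(numbers)))):
        key = signature(ev, len(numbers))
        if key in seen:
            continue
        seen.add(key)
        if ev(numbers) == answer:
            pseudo_labels.append(expr)
    return pseudo_labels

# Toy example: quantities 3, 12 and 5 with annotated answer 31.
print(search_pseudo_labels([3, 12, 5], 31))

For a toy problem with quantities [3, 12, 5] and answer 31, the search keeps one representative per equivalence class, such as ((x0 * x1) - x2). Under weak supervision, a ranking model like the one proposed in the paper would then be needed to filter out false-matching candidates that reach the correct value through wrong mathematical logic.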
Related papers
- Guiding Word Equation Solving using Graph Neural Networks (Extended Technical Report) [0.0]
This paper proposes a Graph Neural Network-guided algorithm for solving word equations.
The algorithm iteratively rewrites the first terms of each side of an equation, giving rise to a tree-like search space.
Experiments are conducted on artificial and real-world benchmarks.
arXiv Detail & Related papers (2024-11-19T14:15:34Z) - A Three-Stage Algorithm for the Closest String Problem on Artificial and Real Gene Sequences [39.58317527488534]
The Closest String Problem is an NP-hard problem that aims to find a string with the minimum distance to every sequence in a given set of strings.
In this paper, we introduce a three-stage algorithm: first, we apply a novel alphabet pruning method to reduce the search space and focus on promising search regions.
Second, a variant of beam search is employed to find a solution, guided by a newly developed function based on an expected distance score of partial solutions.
arXiv Detail & Related papers (2024-07-17T21:26:27Z) - Solving Math Word Problems by Combining Language Models With Symbolic
Solvers [28.010617102877923]
Large language models (LLMs) can be combined with external tools to perform complex reasoning and calculation.
We propose an approach that combines an LLM that can incrementally formalize word problems as a set of variables and equations with an external symbolic solver.
Our approach achieves comparable accuracy to the original PAL on the GSM8K benchmark of math word problems and outperforms PAL by an absolute 20% on ALGEBRA.
arXiv Detail & Related papers (2023-04-16T04:16:06Z) - On graph-based reentrancy-free semantic parsing [5.228711636020665]
We propose a graph-based approach for semantic parsing that resolves two problems observed in the literature.
We prove that both MAP inference and latent tag anchoring (required for weakly-supervised learning) are NP-hard problems.
We propose two optimization algorithms based on constraint smoothing and conditional gradient to approximately solve these inference problems.
arXiv Detail & Related papers (2023-02-15T14:14:09Z) - Deep Probabilistic Graph Matching [72.6690550634166]
We propose a deep learning-based graph matching framework that works for the original QAP without compromising on the matching constraints.
The proposed method is evaluated on three widely used benchmarks (Pascal VOC, Willow Object and SPair-71k) and outperforms all previous state-of-the-art methods on all of them.
arXiv Detail & Related papers (2022-01-05T13:37:27Z) - Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks [130.70449023574537]
Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers.
Along with target expression supervision, the solver is also optimized via four new auxiliary objectives to enforce different forms of symbolic reasoning.
arXiv Detail & Related papers (2021-07-03T13:14:58Z) - Determinantal Beam Search [75.84501052642361]
Beam search is a go-to strategy for decoding neural sequence models.
In use-cases that call for multiple solutions, a diverse or representative set is often desired.
By posing iterations in beam search as a series of subdeterminant problems, we can turn the algorithm into a diverse subset selection process.
arXiv Detail & Related papers (2021-06-14T13:01:46Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Learning by Fixing: Solving Math Word Problems with Weak Supervision [70.62896781438694]
Previous neural solvers of math word problems (MWPs) are learned with full supervision and fail to generate diverse solutions.
We introduce a \textit{weakly-supervised} paradigm for learning MWPs.
Our method only requires the annotations of the final answers and can generate various solutions for a single problem.
arXiv Detail & Related papers (2020-12-19T03:10:21Z) - Learning to Accelerate Heuristic Searching for Large-Scale Maximum
Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings, either because they require intolerable running time or because they consume too many computation resources.
We propose \texttt{NeuSearcher}, which leverages knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.