Seeking Diverse Reasoning Logic: Controlled Equation Expression
Generation for Solving Math Word Problems
- URL: http://arxiv.org/abs/2209.10310v1
- Date: Wed, 21 Sep 2022 12:43:30 GMT
- Title: Seeking Diverse Reasoning Logic: Controlled Equation Expression
Generation for Solving Math Word Problems
- Authors: Yibin Shen, Qianying Liu, Zhuoyuan Mao, Zhen Wan, Fei Cheng and Sadao
Kurohashi
- Abstract summary: We propose a controlled equation generation solver by leveraging a set of control codes to guide the model.
Our method universally improves the performance on single-unknown (Math23K) and multiple-unknown (DRAW1K, HMWP) benchmarks.
- Score: 21.62131402402428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To solve Math Word Problems, human students leverage diverse reasoning logic
that reaches different possible equation solutions. However, the mainstream
sequence-to-sequence approach of automatic solvers aims to decode a fixed
solution equation supervised by human annotation. In this paper, we propose a
controlled equation generation solver by leveraging a set of control codes to
guide the model to consider certain reasoning logic and decode the
corresponding equation expressions transformed from the human reference. The
empirical results suggest that our method universally improves the performance
on single-unknown (Math23K) and multiple-unknown (DRAW1K, HMWP) benchmarks,
with substantial improvements of up to 13.2% accuracy on the challenging
multiple-unknown datasets.
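To make the control-code idea concrete, here is a minimal sketch (in Python) of how such conditioned training pairs might be constructed. The codes `<forward>`/`<reversed>`, the `reverse_operands` transform, and the toy example are illustrative assumptions only, not the paper's actual control codes or equation transformations.

```python
# Minimal, hypothetical sketch of control-code conditioned training data for a
# seq2seq MWP solver. The control codes ("<forward>", "<reversed>") and the
# reverse_operands transform are illustrative stand-ins, not the paper's set.

def reverse_operands(equation: str) -> str:
    """Toy transform: swap the operands of the top-level multiplication,
    e.g. "x = ( a + b ) * c" -> "x = c * ( a + b )"."""
    lhs, rhs = equation.split(" = ", 1)
    if " * " in rhs:
        left, right = rhs.rsplit(" * ", 1)
        rhs = f"{right} * {left}"
    return f"{lhs} = {rhs}"


def build_training_pairs(problem_text: str, reference_equation: str):
    """Pair each control code with an equation expression derived from the
    single human-annotated reference equation."""
    variants = {
        "<forward>": reference_equation,                     # as annotated
        "<reversed>": reverse_operands(reference_equation),  # transformed view
    }
    # The control code is prepended to the source text, so the decoder is
    # conditioned on which reasoning logic / expression form to produce.
    return [(f"{code} {problem_text}", eq) for code, eq in variants.items()]


if __name__ == "__main__":
    pairs = build_training_pairs(
        "A pencil costs a yuan and a pen costs b yuan; how much do c sets cost?",
        "x = ( a + b ) * c",
    )
    for source, target in pairs:
        print(source, "->", target)
```

Under this reading of the abstract, decoding the same problem under different control codes at test time would yield diverse candidate equations corresponding to different reasoning logic.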
Related papers
- Deep Learning Methods for S-Shaped Utility Maximisation with a Random Reference Point [0.0]
We develop several numerical methods for solving the problem using deep learning and duality methods.
We use deep learning methods to solve the associated Hamilton-Jacobi-Bellman equation for both the primal and dual problems.
We compare the solution of this non-concave problem to that of concavified utility, a random function depending on the benchmark, in both complete and incomplete markets.
arXiv Detail & Related papers (2024-10-07T22:07:59Z)
- Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z)
- Improved identification accuracy in equation learning via comprehensive $\boldsymbol{R^2}$-elimination and Bayesian model selection [0.0]
We present an approach that strikes a balance between comprehensiveness and efficiency in equation learning.
Inspired by stepwise regression, our approach combines the coefficient of determination, $R^2$, and the Bayesian model evidence, $p(\boldsymbol{y} \mid \mathcal{M})$, in a novel way (the standard definitions of both quantities are recalled after this list).
arXiv Detail & Related papers (2023-11-22T09:31:19Z)
- A Hybrid System for Systematic Generalization in Simple Arithmetic Problems [70.91780996370326]
We propose a hybrid system capable of solving arithmetic problems that require compositional and systematic reasoning over sequences of symbols.
We show that the proposed system can accurately solve nested arithmetical expressions even when trained only on a subset including the simplest cases.
arXiv Detail & Related papers (2023-06-29T18:35:41Z)
- Discovering ordinary differential equations that govern time-series [65.07437364102931]
We propose a transformer-based sequence-to-sequence model that recovers scalar autonomous ordinary differential equations (ODEs) in symbolic form from time-series data of a single observed solution of the ODE.
Our method is efficiently scalable: after one-time pretraining on a large set of ODEs, we can infer the governing laws of a new observed solution in a few forward passes of the model.
arXiv Detail & Related papers (2022-11-05T07:07:58Z)
- Symbolic Recovery of Differential Equations: The Identifiability Problem [52.158782751264205]
Symbolic recovery of differential equations is the ambitious attempt at automating the derivation of governing equations.
We provide both necessary and sufficient conditions for a function to uniquely determine the corresponding differential equation.
We then use our results to devise numerical algorithms aiming to determine whether a function solves a differential equation uniquely.
arXiv Detail & Related papers (2022-10-15T17:32:49Z)
- Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks [130.70449023574537]
Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers.
Along with target expression supervision, our solver is also optimized via four new auxiliary objectives that enforce different forms of symbolic reasoning.
arXiv Detail & Related papers (2021-07-03T13:14:58Z)
- Recognizing and Verifying Mathematical Equations using Multiplicative Differential Neural Units [86.9207811656179]
We show that memory-augmented neural networks (NNs) can achieve higher-order extrapolation, stable performance, and faster convergence.
Our models achieve a 1.53% average improvement over current state-of-the-art methods in equation verification and achieve a 2.22% Top-1 average accuracy and 2.96% Top-5 average accuracy for equation completion.
arXiv Detail & Related papers (2021-04-07T03:50:11Z)
- Stable Implementation of Probabilistic ODE Solvers [27.70274403550477]
Probabilistic solvers for ordinary differential equations (ODEs) provide efficient quantification of numerical uncertainty.
However, they suffer from numerical instability when run at high order or with small step-sizes.
The present work proposes and examines a solution to this problem.
It involves three components: accurate initialisation, a coordinate change preconditioner that makes numerical stability concerns step-size-independent, and square-root implementation.
arXiv Detail & Related papers (2020-12-18T08:35:36Z)
- Deep Learning for Constrained Utility Maximisation [0.0]
This paper proposes two algorithms for solving control problems with deep learning.
The first algorithm solves Markovian problems via the Hamilton-Jacobi-Bellman equation.
The second uses the full power of the duality method to solve non-Markovian problems.
arXiv Detail & Related papers (2020-08-26T18:40:57Z)
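As a quick reference for the $\boldsymbol{R^2}$-elimination entry above: the two quantities that work combines are, in their standard textbook forms, the coefficient of determination $R^2 = 1 - \sum_i (y_i - \hat{y}_i)^2 / \sum_i (y_i - \bar{y})^2$ (with observations $y_i$, predictions $\hat{y}_i$, and sample mean $\bar{y}$) and the Bayesian model evidence $p(\boldsymbol{y} \mid \mathcal{M}) = \int p(\boldsymbol{y} \mid \boldsymbol{\theta}, \mathcal{M})\, p(\boldsymbol{\theta} \mid \mathcal{M})\, \mathrm{d}\boldsymbol{\theta}$, i.e. the likelihood marginalised over the parameters $\boldsymbol{\theta}$ of a candidate model $\mathcal{M}$. The cited paper's specific rule for combining them is not reproduced here.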
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.