Tackling Math Word Problems with Fine-to-Coarse Abstracting and
Reasoning
- URL: http://arxiv.org/abs/2205.08274v1
- Date: Tue, 17 May 2022 12:14:44 GMT
- Title: Tackling Math Word Problems with Fine-to-Coarse Abstracting and
Reasoning
- Authors: Ailisi Li, Xueyao Jiang, Bang Liu, Jiaqing Liang, Yanghua Xiao
- Abstract summary: We propose to model a math word problem in a fine-to-coarse manner to capture both its local fine-grained information and its global logical structure.
Our model is naturally sensitive to local variations and can better generalize to unseen problem types.
- Score: 22.127301797950572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Math Word Problem (MWP) solving is an important task that requires
the ability to understand and reason over mathematical text. Existing approaches mostly
formalize it as a generation task by adopting Seq2Seq or Seq2Tree models to
encode an input math problem in natural language as a global representation and
generate the output mathematical expression. Such approaches only learn shallow
heuristics and fail to capture fine-grained variations in inputs. In this
paper, we propose to model a math word problem in a fine-to-coarse manner to
capture both its local fine-grained information and its global logical
structure. Instead of generating a complete equation sequence or
expression tree from the global features, we iteratively combine low-level
operands to predict a higher-level operator, abstracting the problem and
reasoning about the solving operators from the bottom up. Our model is naturally
more sensitive to local variations and can better generalize to unseen problem
types. Extensive evaluations on Math23k and SVAMP datasets demonstrate the
accuracy and robustness of our method.
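To make the bottom-up procedure concrete, here is a minimal Python sketch of the abstract-and-reason loop the abstract describes: repeatedly select the most plausible operand pair, predict its operator, and replace the pair with the combined higher-level node. The scoring functions are toy placeholders, not the paper's learned networks.

```python
# Illustrative sketch of bottom-up abstracting and reasoning over operands.
# score_pair / score_op are toy stand-ins for the paper's learned modules.
from itertools import combinations

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b,
       "/": lambda a, b: a / b}

def score_pair(a, b):
    """Placeholder for a learned scorer over operand representations."""
    return -abs(a - b)  # toy heuristic: prefer combining similar magnitudes

def score_op(a, b, op):
    """Placeholder for a learned operator classifier."""
    return 1.0 if op in ("+", "*") else 0.5  # toy preference

def solve_bottom_up(operands):
    nodes = list(operands)
    while len(nodes) > 1:
        # 1. Choose the most plausible pair of low-level operands.
        i, j = max(combinations(range(len(nodes)), 2),
                   key=lambda ij: score_pair(nodes[ij[0]], nodes[ij[1]]))
        a, b = nodes[i], nodes[j]
        # 2. Predict the higher-level operator for that pair.
        op = max(OPS, key=lambda o: score_op(a, b, o))
        # 3. Abstract: replace the pair with the combined node and repeat.
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)]
        nodes.append(OPS[op](a, b))
    return nodes[0]

print(solve_bottom_up([3, 5, 2]))  # toy run; a trained model supplies real scores
```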
Related papers
- MathCAMPS: Fine-grained Synthesis of Mathematical Problems From Human Curricula [33.5782208232163]
We propose MathCAMPS: a method to synthesize high-quality mathematical problems at scale.
We encode each standard in a formal grammar, allowing us to sample diverse symbolic problems and their answers.
We derive follow-up questions from symbolic structures and convert them into follow-up word problems.
arXiv Detail & Related papers (2024-07-01T01:56:28Z)
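A hedged sketch of the grammar-sampling idea behind MathCAMPS: sample symbolic problems and their answers from a small formal grammar. The grammar below is invented for illustration, not the paper's curriculum-derived one.

```python
# Sampling symbolic arithmetic problems from a tiny formal grammar.
import random

GRAMMAR = {
    "EXPR": [["NUM"], ["EXPR", "OP", "NUM"]],
    "OP": [["+"], ["-"], ["*"]],
    "NUM": [[str(n)] for n in range(1, 10)],
}

def sample(symbol="EXPR", depth=0, max_depth=3):
    if symbol not in GRAMMAR:
        return [symbol]  # terminal token
    rules = GRAMMAR[symbol]
    # Bias toward the terminal-leading rule as depth grows so sampling terminates.
    rule = random.choice(rules[:1] if depth >= max_depth else rules)
    return [tok for s in rule for tok in sample(s, depth + 1, max_depth)]

expr = " ".join(sample())
print(expr, "=", eval(expr))  # symbolic problem plus its ground-truth answer
```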
- Math Word Problem Solving by Generating Linguistic Variants of Problem Statements [1.742186232261139]
We propose a framework for MWP solvers based on the generation of linguistic variants of the problem text.
The approach involves solving each of the variant problems and electing the predicted expression with the majority of the votes.
We show that training on linguistic variants of problem statements and voting on candidate predictions improve the mathematical reasoning and robustness of the model.
arXiv Detail & Related papers (2023-06-24T08:27:39Z)
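A minimal sketch of the vote-over-variants scheme: solve each linguistic variant of a problem and elect the majority expression. `paraphrase` and `solve` are hypothetical stand-ins for a paraphrase generator and an MWP solver.

```python
# Majority voting over predictions for linguistic variants of one problem.
from collections import Counter

def paraphrase(problem: str, k: int = 5) -> list[str]:
    # Placeholder: a real system would use a learned paraphraser.
    return [problem] * k

def solve(problem: str) -> str:
    # Placeholder: a real system would run a seq2seq/seq2tree solver.
    return "x = 3 + 5"

def solve_with_voting(problem: str) -> str:
    predictions = [solve(v) for v in paraphrase(problem)]
    expression, _ = Counter(predictions).most_common(1)[0]
    return expression

print(solve_with_voting("Tom has 3 apples and buys 5 more. How many now?"))
```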
- Highlighting Named Entities in Input for Auto-Formulation of Optimization Problems [0.0]
This paper presents an approach that converts linear programming word problems into mathematical formulations.
We leverage the named entities in the input and augment the input to highlight these entities.
Our approach achieves the highest accuracy among all submissions to the NL4Opt Competition, securing first place in the generation track.
arXiv Detail & Related papers (2022-12-26T16:13:57Z)
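A hedged sketch of the input-augmentation idea: wrap recognized entities in inline markers before feeding the text to a seq2seq formulator. The tag format and entity dictionary are illustrative assumptions, not the competition system's.

```python
# Highlight named entities in a linear-programming word problem.
ENTITIES = {"profit": "OBJ_NAME", "x": "VAR", "y": "VAR"}

def highlight(text: str) -> str:
    tokens = []
    for tok in text.split():
        label = ENTITIES.get(tok.strip(",."))
        # Wrap recognized entities so the model can attend to their role.
        tokens.append(f"[{label}] {tok} [/{label}]" if label else tok)
    return " ".join(tokens)

print(highlight("maximize profit given x and y"))
# -> maximize [OBJ_NAME] profit [/OBJ_NAME] given [VAR] x [/VAR] and [VAR] y [/VAR]
```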
- UniGeo: Unifying Geometry Logical Reasoning via Reformulating Mathematical Expression [127.68780714438103]
Two main types of geometry problems, calculation and proving, are usually treated as two separate tasks.
We construct a large-scale Unified Geometry problem benchmark, UniGeo, which contains 4,998 calculation problems and 9,543 proving problems.
We also present a unified multi-task Geometric Transformer framework, Geoformer, to tackle calculation and proving problems simultaneously.
arXiv Detail & Related papers (2022-12-06T04:37:51Z)
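One way to unify the two task types, sketched under assumptions (the prefix tokens and field names are invented, not Geoformer's actual format): cast calculation and proving into one sequence-to-sequence format distinguished by a task prefix, so a single model can handle both.

```python
# Shared seq2seq formatting for calculation and proving examples.
def format_example(task: str, problem: str, target: str) -> dict:
    assert task in ("calculation", "proving")
    return {
        "input": f"[{task.upper()}] {problem}",  # shared encoder input
        "output": target,                        # expression or proof sequence
    }

calc = format_example("calculation", "Find angle ABC given ...", "180 - 65 - 40")
proof = format_example("proving", "Prove triangle ABD is congruent to CBD ...", "SAS(ABD, CBD)")
print(calc["input"])
print(proof["input"])
```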
- JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z)
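A minimal sketch of the curriculum idea: stream training examples from a "basic" course before an "advanced" one. The course contents below are invented placeholders; JiuZhang's actual courses use specific pre-training objectives described in the paper.

```python
# Easy-to-hard curriculum scheduling over two pre-training corpora.
def curriculum(courses):
    """Yield (stage, example) pairs in easy-to-hard order."""
    for stage, corpus in courses:
        for example in corpus:
            yield stage, example

courses = [
    ("basic", ["masked-token math text 1", "masked-token math text 2"]),
    ("advanced", ["harder math objective 1", "harder math objective 2"]),
]

for stage, ex in curriculum(courses):
    print(stage, "->", ex)  # a trainer would consume these in order
```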
- Recognizing and Verifying Mathematical Equations using Multiplicative Differential Neural Units [86.9207811656179]
We show that memory-augmented neural networks (NNs) can achieve higher-order extrapolation, stable performance, and faster convergence.
Our models achieve a 1.53% average improvement over current state-of-the-art methods in equation verification and achieve a 2.22% Top-1 average accuracy and 2.96% Top-5 average accuracy for equation completion.
arXiv Detail & Related papers (2021-04-07T03:50:11Z)
- Measuring Mathematical Problem Solving With the MATH Dataset [55.4376028963537]
We introduce MATH, a dataset of 12,500 challenging competition mathematics problems.
Each problem has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.
We also contribute a large auxiliary pretraining dataset which helps teach models the fundamentals of mathematics.
arXiv Detail & Related papers (2021-03-05T18:59:39Z)
- Learning to Match Mathematical Statements with Proofs [37.38969121408295]
The task is designed to improve the processing of research-level mathematical texts.
We release a dataset for the task, consisting of over 180k statement-proof pairs.
We show that considering the assignment problem globally and using weighted bipartite matching algorithms substantially improves performance on the task.
arXiv Detail & Related papers (2021-02-03T15:38:54Z)
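To illustrate the global assignment step, here is a sketch using SciPy's Hungarian-algorithm solver; in the actual paper the affinity matrix would come from a learned statement-proof scorer, so the scores below are made up.

```python
# Global statement-proof assignment via weighted bipartite matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

# scores[i, j]: estimated affinity of statement i to proof j (toy values).
scores = np.array([[0.9, 0.2, 0.1],
                   [0.3, 0.8, 0.4],
                   [0.2, 0.3, 0.7]])

# Maximize total affinity = minimize negated scores.
rows, cols = linear_sum_assignment(-scores)
for i, j in zip(rows, cols):
    print(f"statement {i} -> proof {j} (score {scores[i, j]:.2f})")
```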
- SMART: A Situation Model for Algebra Story Problems via Attributed Grammar [74.1315776256292]
We introduce the concept of a "situation model", which originates in psychology research as a representation of the mental states of humans during problem-solving.
We show that the proposed model outperforms all previous neural solvers by a large margin while preserving much better interpretability.
arXiv Detail & Related papers (2020-12-27T21:03:40Z)
- Learning by Fixing: Solving Math Word Problems with Weak Supervision [70.62896781438694]
Previous neural solvers of math word problems (MWPs) are learned with full supervision and fail to generate diverse solutions.
We introduce a weakly-supervised paradigm for learning MWPs.
Our method only requires the annotations of the final answers and can generate various solutions for a single problem.
arXiv Detail & Related papers (2020-12-19T03:10:21Z)
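A sketch of the answer-only supervision signal: a predicted expression is accepted as a pseudo-label whenever it executes to the annotated final answer, so multiple distinct correct solutions can be collected per problem. The paper additionally "fixes" wrong expression trees; this sketch shows only the answer-checking filter, with a toy executor.

```python
# Filter candidate expressions by whether they execute to the gold answer.
def executes_to(expression: str, answer: float, tol: float = 1e-6) -> bool:
    try:
        return abs(eval(expression) - answer) < tol  # toy executor
    except Exception:
        return False

candidates = ["3 + 5", "5 + 3", "2 * 4", "3 - 5"]  # sampled from a solver
answer = 8.0
pseudo_labels = [e for e in candidates if executes_to(e, answer)]
print(pseudo_labels)  # ['3 + 5', '5 + 3', '2 * 4'] -- diverse valid solutions
```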