Mathematical Word Problem Generation from Commonsense Knowledge Graph
and Equations
- URL: http://arxiv.org/abs/2010.06196v3
- Date: Thu, 9 Sep 2021 10:27:59 GMT
- Title: Mathematical Word Problem Generation from Commonsense Knowledge Graph
and Equations
- Authors: Tianqiao Liu, Qiang Fang, Wenbiao Ding, Hang Li, Zhongqin Wu, Zitao
Liu
- Abstract summary: We develop an end-to-end neural model to generate diverse MWPs in real-world scenarios from commonsense knowledge graph and equations.
The proposed model learns both representations from edge-enhanced Levi graphs of symbolic equations and commonsense knowledge.
Experiments on an educational gold-standard set and a large-scale generated MWP set show that our approach is superior on the MWP generation task.
- Score: 27.063577644162358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is an increasing interest in the use of mathematical word problem (MWP)
generation in educational assessment. Different from standard natural question
generation, MWP generation needs to maintain the underlying mathematical
operations between quantities and variables, while at the same time ensuring
the relevance between the output and the given topic. To address the above problem,
we develop an end-to-end neural model to generate diverse MWPs in real-world
scenarios from commonsense knowledge graph and equations. The proposed model
(1) learns both representations from edge-enhanced Levi graphs of symbolic
equations and commonsense knowledge; (2) automatically fuses equation and
commonsense knowledge information via a self-planning module when generating
the MWPs. Experiments on an educational gold-standard set and a large-scale
generated MWP set show that our approach is superior on the MWP generation
task, and it outperforms the SOTA models in terms of both automatic evaluation
metrics, i.e., BLEU-4, ROUGE-L, Self-BLEU, and human evaluation metrics, i.e.,
equation relevance, topic relevance, and language coherence. To encourage
reproducible results, we make our code and MWP dataset publicly available at
https://github.com/tal-ai/MaKE_EMNLP2021.
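The abstract's central encoding step, the edge-enhanced Levi graph, has a simple mechanical core: every labelled edge of the input graph is promoted to a node of its own, and reverse and self-loop edges are added so a graph encoder can pass messages in both directions. The sketch below illustrates that transformation on a toy equation plus one commonsense fact; the function and node names are illustrative assumptions, not taken from the paper's released code.

```python
# A minimal sketch of the Levi-graph transformation, assuming the common
# graph-to-sequence recipe (default, reverse, and self-loop edge types).
def to_levi_graph(triples):
    """Turn (head, label, tail) triples into an edge-enhanced Levi graph.

    Returns (nodes, edges), where each edge is (src, edge_type, dst) with
    edge_type in {"default", "reverse", "self"}.
    """
    nodes, edges = set(), []
    for i, (head, label, tail) in enumerate(triples):
        rel = f"rel{i}::{label}"  # each edge instance becomes its own node
        nodes.update([head, rel, tail])
        edges += [(head, "default", rel), (rel, "default", tail)]
        # Reverse edges let the encoder propagate information both ways.
        edges += [(rel, "reverse", head), (tail, "reverse", rel)]
    # Self-loops preserve each node's own state at every propagation step.
    edges += [(n, "self", n) for n in nodes]
    return sorted(nodes), edges

# Symbolic form of "x + y = 10" plus one commonsense fact about x.
triples = [
    ("x", "operand_of", "+"),
    ("y", "operand_of", "+"),
    ("+", "equals", "10"),
    ("x", "is_a", "apple_count"),
]
nodes, edges = to_levi_graph(triples)
print(len(nodes), "nodes,", len(edges), "edges")  # 9 nodes, 25 edges
```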
Related papers
- VHELM: A Holistic Evaluation of Vision Language Models [75.88987277686914]
We present the Holistic Evaluation of Vision Language Models (VHELM).
VHELM aggregates various datasets to cover one or more of the 9 aspects: visual perception, knowledge, reasoning, bias, fairness, multilinguality, robustness, toxicity, and safety.
Our framework is designed to be lightweight and automatic so that evaluation runs are cheap and fast.
arXiv Detail & Related papers (2024-10-09T17:46:34Z)
- Solving Math Word Problems with Reexamination [27.80592576792461]
We propose a pseudo-dual (PseDual) learning scheme to model such process, which is model-agnostic.
The pseudo-dual task is specifically defined as filling the numbers in the expression back into the original word problem with numbers masked.
Our pseudo-dual learning scheme has been tested and proven effective when being equipped in several representative MWP solvers through empirical studies.
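To make the pseudo-dual task above concrete, the snippet below prepares its inputs: the numbers in a word problem are replaced with indexed placeholders, yielding the masked problem and the target numbers the model must fill back in from the expression. This is only an illustrative data-preparation sketch, not the paper's training code.

```python
import re

def mask_numbers(problem):
    """Replace each number with an indexed [NUM_i] placeholder."""
    numbers = []
    def repl(match):
        numbers.append(match.group())
        return f"[NUM_{len(numbers) - 1}]"
    return re.sub(r"\d+(?:\.\d+)?", repl, problem), numbers

problem = "Tom has 3 apples and buys 5 more. How many apples does he have?"
masked, targets = mask_numbers(problem)
print(masked)   # Tom has [NUM_0] apples and buys [NUM_1] more. ...
print(targets)  # ['3', '5'] -- what the dual task must recover from "3 + 5"
```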
arXiv Detail & Related papers (2023-10-14T14:23:44Z)
- MWPRanker: An Expression Similarity Based Math Word Problem Retriever [12.638925774492403]
Math Word Problems (MWPs) in online assessments help test the ability of the learner to make critical inferences.
In this work, we propose a tool for MWP retrieval based on expression similarity.
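As a rough illustration of expression-similarity retrieval, the sketch below ranks problems as similar when their solution expressions share the same operator skeleton. Reducing an expression to its operators is an assumption made here for brevity; the tool's actual similarity function may be richer (e.g., comparing full expression trees).

```python
import re

def operator_skeleton(expression):
    """Reduce an expression to its ordered sequence of operators."""
    return tuple(re.findall(r"[+\-*/]", expression))

corpus = {
    "Sam has 4 pens and loses 1.": "4 - 1",
    "A tank holds 8 L and 3 L leak out.": "8 - 3",
    "Each of 5 bags holds 6 marbles.": "5 * 6",
}

query = "2 - 7"
hits = [text for text, expr in corpus.items()
        if operator_skeleton(expr) == operator_skeleton(query)]
print(hits)  # the two subtraction problems, regardless of wording
```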
arXiv Detail & Related papers (2023-07-03T15:44:18Z)
- Learning by Analogy: Diverse Questions Generation in Math Word Problem [21.211970350827183]
Solving math word problems (MWPs) with AI techniques has recently made great progress thanks to the success of deep neural networks (DNNs).
We argue that the ability to learn by analogy is essential for an MWP solver to better understand the same problem when it is formulated in diverse ways.
In this paper, we make a first attempt to solve MWPs by generating diverse yet consistent questions/equations.
arXiv Detail & Related papers (2023-06-15T11:47:07Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How a model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- Unbiased Math Word Problems Benchmark for Mitigating Solving Bias [72.8677805114825]
Current solvers suffer from solving bias, which consists of data bias and learning bias caused by biased datasets and improper training strategies.
Our experiments verify that MWP solvers are easily biased by training datasets that do not cover diverse questions for each problem narrative.
An MWP can be naturally solved by multiple equivalent equations while current datasets take only one of the equivalent equations as ground truth.
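The single-ground-truth problem noted above is easy to see in code: exact-match scoring rejects a prediction that is mathematically identical to the gold equation. The check below uses SymPy purely for illustration; the benchmark's own evaluation protocol may differ.

```python
from sympy import simplify, sympify

def equivalent(expr_a, expr_b):
    """True when two arithmetic expressions are symbolically equal."""
    return simplify(sympify(expr_a) - sympify(expr_b)) == 0

gold, prediction = "3 + 5", "5 + 3"
print(gold == prediction)            # False: exact-match scoring fails
print(equivalent(gold, prediction))  # True: the equations are equivalent
```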
arXiv Detail & Related papers (2022-05-17T06:07:04Z)
- DISK: Domain-constrained Instance Sketch for Math Word Problem Generation [16.045655800225436]
A math word problem (MWP) is a coherent narrative which reflects the underlying logic of math equations.
Previous methods mainly generate MWP text based on inflexible pre-defined templates.
We propose a neural model for generating MWP text from math equations.
arXiv Detail & Related papers (2022-04-10T13:54:23Z)
- Math Word Problem Generation with Mathematical Consistency and Problem Context Constraints [37.493809561634386]
We study the problem of generating arithmetic math word problems (MWPs) given a math equation.
Existing approaches are prone to generating MWPs that are mathematically invalid or have unsatisfactory language quality.
arXiv Detail & Related papers (2021-09-09T20:24:25Z)
- Generate & Rank: A Multi-task Framework for Math Word Problems [48.99880318686938]
Math word problem (MWP) solving is a challenging and critical task in natural language processing.
We propose Generate & Rank, a framework based on a generative pre-trained language model.
By joint training with generation and ranking, the model learns from its own mistakes and is able to distinguish between correct and incorrect expressions.
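A schematic of the generate-then-rank pipeline is sketched below: a generator proposes candidate expressions and a ranker scores them, with the top-scoring candidate returned as the answer. Both components here are stand-in stubs (the paper trains a single pre-trained language model jointly on both objectives), so the heuristics are assumptions for illustration only.

```python
import re

def generate_candidates(problem, beam_size=3):
    # Stub standing in for beam search over a trained generator.
    return ["3 + 4", "3 + 5", "5 - 5"][:beam_size]

def rank_score(problem, expression):
    # Stub standing in for the learned ranker; here we simply prefer
    # candidates that use every number mentioned in the problem.
    problem_nums = set(re.findall(r"\d+", problem))
    return len(problem_nums & set(re.findall(r"\d+", expression)))

def solve(problem):
    candidates = generate_candidates(problem)
    return max(candidates, key=lambda expr: rank_score(problem, expr))

print(solve("Tom has 3 apples and buys 5 more. How many now?"))  # 3 + 5
```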
arXiv Detail & Related papers (2021-09-07T12:21:49Z)
- MWP-BERT: A Strong Baseline for Math Word Problems [47.51572465676904]
Math word problem (MWP) solving is the task of transforming a sequence of natural language problem descriptions to executable math equations.
Although recent sequence-modeling MWP solvers have made progress on math-text contextual understanding, pre-trained language models (PLMs) have not been explored for solving MWPs.
We introduce MWP-BERT to obtain pre-trained token representations that capture the alignment between text description and mathematical logic.
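A minimal sketch of the encoding step is shown below, using the generic bert-base-uncased checkpoint from Hugging Face as a stand-in; MWP-BERT's own number-aware pre-trained weights would be substituted where available. A downstream equation decoder would consume the per-token vectors.

```python
from transformers import AutoModel, AutoTokenizer

# bert-base-uncased is a stand-in checkpoint, not the MWP-BERT weights.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

problem = "Tom has 3 apples and buys 5 more. How many apples now?"
inputs = tokenizer(problem, return_tensors="pt")
outputs = model(**inputs)
# One contextual vector per token; an equation decoder consumes these.
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```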
arXiv Detail & Related papers (2021-07-28T15:28:41Z)
- Semantically-Aligned Universal Tree-Structured Solver for Math Word Problems [129.90766822085132]
A practical automatic solver for textual math word problems (MWPs) should be able to solve various types of textual MWPs.
We propose a simple but efficient method called Universal Expression Tree (UET) to make the first attempt to represent the equations of various MWPs uniformly.
Then a semantically-aligned universal tree-structured solver (SAU-Solver) based on an encoder-decoder framework is proposed to solve multiple types of MWPs in a unified model.
arXiv Detail & Related papers (2020-10-14T06:27:07Z)
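To illustrate the tree view of equations that underlies this line of work, the sketch below builds an expression tree with operators as internal nodes and quantities as leaves, then serializes it in the prefix order a tree-structured decoder would generate. How UET merges multiple equations under one root follows the paper, not this simplified sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                      # operator or quantity
    children: list = field(default_factory=list)

def prefix(node):
    """Serialize the tree top-down, in the order a tree decoder emits it."""
    out = [node.label]
    for child in node.children:
        out += prefix(child)
    return out

# The equation x = (8 - 3) * 2 as a single tree rooted at "=".
tree = Node("=", [
    Node("x"),
    Node("*", [Node("-", [Node("8"), Node("3")]), Node("2")]),
])
print(prefix(tree))  # ['=', 'x', '*', '-', '8', '3', '2']
```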
This list is automatically generated from the titles and abstracts of the papers in this site.