Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems
- URL: http://arxiv.org/abs/2110.08464v1
- Date: Sat, 16 Oct 2021 04:03:47 GMT
- Title: Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems
- Authors: Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou, Chao Li, Hongzhi Liu, Yunbo Cao
- Abstract summary: We investigate how a neural network understands patterns only from semantics.
We propose a contrastive learning approach, where the neural network perceives the divergence of patterns.
Our method greatly improves performance in monolingual and multilingual settings.
- Score: 14.144577791030853
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Math Word Problem (MWP) solving requires discovering the quantitative relationships expressed in natural language narratives. Recent work shows that existing models memorize procedures from context and rely on shallow heuristics to solve MWPs. In this paper, we examine this issue and argue that its cause is a lack of overall understanding of MWP patterns. We first investigate how a neural network understands patterns from semantics alone, and observe that problems sharing the same prototype equation mostly obtain close representations, while representations that drift away from their own prototype, or toward other prototypes, tend to produce wrong solutions. Inspired by this observation, we propose a contrastive learning approach in which the neural network perceives the divergence between patterns. We collect contrastive examples by converting each prototype equation into a tree and searching for similar tree structures. The solving model is trained with an auxiliary objective on the collected examples, so that the representations of problems with similar prototypes are pulled closer together. We conduct experiments on the Chinese dataset Math23k and the English dataset MathQA. Our method substantially improves performance in both monolingual and multilingual settings.
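As a rough illustration of the approach, the sketch below shows one way to group problems by the structure of their prototype equations and apply an auxiliary contrastive objective. Everything here is an assumption for exposition: the paper does not publish this code, the tree signature is computed with Python's ast module rather than the authors' equation-tree conversion, and the loss is a generic supervised-contrastive (InfoNCE-style) form.

```python
# Minimal sketch, not the authors' implementation: group problems by the
# shape of their prototype equations, then pull together the encodings of
# problems that share a shape.
import ast

import torch
import torch.nn.functional as F


def tree_signature(equation: str) -> str:
    """Normalize a prototype equation to its operator-tree shape.

    Operands are replaced by placeholders, so "a + b * c" and
    "x + y * z" map to the same signature.
    """
    tree = ast.parse(equation, mode="eval")
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            node.id = "n"
        elif isinstance(node, ast.Constant):
            node.value = 0
    return ast.dump(tree.body)


# Problems whose prototype equations share a tree are positives for each
# other; all other problems in the batch act as negatives.
problems = [("p1", "a + b * c"), ("p2", "x + y * z"), ("p3", "(a - b) / c")]
groups: dict[str, list[str]] = {}
for pid, eq in problems:
    groups.setdefault(tree_signature(eq), []).append(pid)


def auxiliary_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE-style objective over problem encodings z (batch x dim)."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.T / tau                     # pairwise cosine similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # log-softmax over each row, excluding the self-similarity
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, -1e9), dim=1, keepdim=True)
    pos_f = pos.float()
    has_pos = pos.any(dim=1)
    per_anchor = -(log_prob * pos_f).sum(dim=1)[has_pos] / pos_f.sum(dim=1)[has_pos]
    return per_anchor.mean()


# Toy usage: p1 and p2 share a prototype tree, p3 does not.
encodings = torch.randn(3, 8)  # stand-ins for the solver's encoder output
labels = torch.tensor([0, 0, 1])
print(auxiliary_loss(encodings, labels))
```

In the paper the auxiliary objective is trained jointly with the solving model; the sketch shows the loss in isolation on random encodings.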
Related papers
- Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners? [140.9751389452011]
We study the biases of large language models (LLMs) in relation to those known in children when solving arithmetic word problems.
We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features.
arXiv Detail & Related papers (2024-01-31T18:48:20Z)
- Solving Math Word Problems with Reexamination [27.80592576792461]
We propose a model-agnostic pseudo-dual (PseDual) learning scheme to model the reexamination process.
The pseudo-dual task is defined as filling the numbers from the expression back into the original word problem, with its numbers masked.
Empirical studies show that the pseudo-dual learning scheme is effective when plugged into several representative MWP solvers.
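A toy illustration of that pseudo-dual task (this snippet is an assumption for exposition, not the paper's code): the numbers in the predicted expression are filled back into the number-masked problem.

```python
import re

problem = "Tom has 3 apples and buys 5 more. How many apples does he have?"
predicted_expression = "3 + 5"

# Forward task: problem -> expression.
# Pseudo-dual task: restore the numbers into the masked problem text.
masked = re.sub(r"\d+", "[NUM]", problem)
for number in re.findall(r"\d+", predicted_expression):
    masked = masked.replace("[NUM]", number, 1)

assert masked == problem  # a consistent prediction reconstructs the problem
```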
arXiv Detail & Related papers (2023-10-14T14:23:44Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
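An illustrative template of this idea (paraphrased for exposition, not the paper's exact prompt) asks the model to self-generate exemplars before solving:

```python
# Hypothetical analogical-prompting template; the wording is an assumption.
problem = "What is the area of a square with side length 5?"

prompt = (
    f"Problem: {problem}\n\n"
    "Instructions:\n"
    "1. Recall three relevant and distinct example problems and solve each.\n"
    "2. Then solve the initial problem, showing your reasoning step by step.\n"
)
```

The self-generated examples play the role of few-shot demonstrations without requiring hand-written exemplars.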
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- MWPRanker: An Expression Similarity Based Math Word Problem Retriever [12.638925774492403]
Math Word Problems (MWPs) in online assessments help test the ability of the learner to make critical inferences.
In this work, we propose a tool that retrieves MWPs with similar expressions.
arXiv Detail & Related papers (2023-07-03T15:44:18Z)
- Math Word Problem Solving by Generating Linguistic Variants of Problem Statements [1.742186232261139]
We propose a framework for MWP solvers based on the generation of linguistic variants of the problem text.
The approach involves solving each of the variant problems and selecting the predicted expression that receives the majority of votes.
We show that training on linguistic variants of problem statements and voting on candidate predictions improve the mathematical reasoning and robustness of the model.
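A minimal sketch of the voting step (illustrative, not the authors' code): each linguistic variant yields a candidate expression, and the most frequent one is elected.

```python
from collections import Counter

# Hypothetical candidate expressions, one per linguistic variant.
candidates = ["3 + 5", "3 + 5", "5 - 3", "3 + 5"]

expression, votes = Counter(candidates).most_common(1)[0]
print(expression, votes)  # "3 + 5" wins with 3 of 4 votes
```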
arXiv Detail & Related papers (2023-06-24T08:27:39Z)
- Textual Enhanced Contrastive Learning for Solving Math Word Problems [23.196339273292246]
We propose a Textual Enhanced Contrastive Learning framework, which forces the model to distinguish between semantically similar examples.
We adopt a self-supervised strategy to enrich examples with subtle textual variance.
Experimental results show that our method achieves state-of-the-art results on both widely used benchmark datasets and carefully designed challenge datasets in English and Chinese.
arXiv Detail & Related papers (2022-11-29T08:44:09Z)
- A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models [81.15974174627785]
We study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space.
Our analysis shows that robustness does not appear to continuously improve as a function of size, but the GPT-3 Davinci models (175B) achieve a dramatic improvement in both robustness and sensitivity compared to all other GPT variants.
arXiv Detail & Related papers (2022-10-21T15:12:37Z)
- Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango [11.344587937052697]
This work takes preliminary steps toward a deeper understanding of reasoning mechanisms in large language models.
Our work centers around querying the model while controlling for all but one of the components in a prompt: symbols, patterns, and text.
We posit that text imbues patterns with commonsense knowledge and meaning.
arXiv Detail & Related papers (2022-09-16T02:54:00Z)
- Unnatural Language Inference [48.45003475966808]
We find that state-of-the-art NLI models, such as RoBERTa and BART, are invariant to, and sometimes even perform better on, examples with randomly reordered words.
Our findings call into question the idea that our natural language understanding models, and the tasks used for measuring their progress, genuinely require a human-like understanding of syntax.
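A toy version of the word-reordering probe (an assumption for illustration, not the paper's exact protocol):

```python
import random

premise = "The chef cooked the meal before the guests arrived."

words = premise.split()
random.shuffle(words)  # destroy the syntax while keeping the words
print(" ".join(words))
# A model that is invariant to word order gives this permuted premise
# the same prediction as the original.
```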
arXiv Detail & Related papers (2020-12-30T20:40:48Z)
- Machine Number Sense: A Dataset of Visual Arithmetic Problems for Abstract and Relational Reasoning [95.18337034090648]
We propose a dataset, Machine Number Sense (MNS), consisting of visual arithmetic problems automatically generated using a grammar model, the And-Or Graph (AOG).
These visual arithmetic problems are in the form of geometric figures.
We benchmark the MNS dataset using four predominant neural network models as baselines in this visual reasoning task.
arXiv Detail & Related papers (2020-04-25T17:14:58Z)
- A Simple Joint Model for Improved Contextual Neural Lemmatization [60.802451210656805]
We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages.
Our paper describes the model along with its training and decoding procedures.
arXiv Detail & Related papers (2019-04-04T02:03:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.