MARIO: MAth Reasoning with code Interpreter Output -- A Reproducible
Pipeline
- URL: http://arxiv.org/abs/2401.08190v3
- Date: Wed, 21 Feb 2024 20:28:13 GMT
- Title: MARIO: MAth Reasoning with code Interpreter Output -- A Reproducible
Pipeline
- Authors: Minpeng Liao, Wei Luo, Chengxi Li, Jing Wu, Kai Fan
- Abstract summary: We postulate that the inherent nature of large language models (LLMs) presents challenges in modeling mathematical reasoning.
This paper introduces a novel math dataset, enhanced with a capability to utilize a Python code interpreter.
We propose a tentative, easily replicable protocol for the fine-tuning of math-specific LLMs.
- Score: 12.186691561822256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have seen considerable advancements in natural
language understanding tasks, yet there remains a gap to bridge before
attaining true artificial general intelligence, especially concerning
shortcomings in mathematical reasoning capabilities. We postulate that the
inherent nature of LLM training, which focuses on predicting the probability of the
next token, presents challenges in effectively modeling mathematical reasoning
that demands exact calculations, both from data-driven and theoretical
standpoints. In this paper, we address this challenge by enriching the data
landscape and introducing a novel math dataset, enhanced with a capability to
utilize a Python code interpreter. This dataset is derived from GSM8K and MATH
and has been further refined through a combination of GPT-4 annotations, human
review, and self-training processes, where the errors in the original GSM8K
training set have been fixed. Additionally, we propose a tentative, easily
replicable protocol for the fine-tuning of math-specific LLMs, which has led to
a significant improvement in the performance of a 7B-parameter LLM on the GSM8K
and MATH datasets. We are committed to advancing the field of mathematical
reasoning in LLMs and, to that end, we have made source code for data
generation / training / inference, and the model checkpoints publicly available
at https://github.com/MARIO-Math-Reasoning/MARIO. We hope this will
facilitate further research and development within the community.
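Since the dataset's defining feature is solutions that interleave natural-language reasoning with Python code and the interpreter's output, a minimal sketch of what one such training record might look like, and how its final answer can be re-checked by re-executing the code, is given below. The schema and field names are illustrative assumptions, not the released format.

```python
# A minimal sketch (not the official schema) of a GSM8K-style record with
# interleaved natural-language reasoning and Python interpreter calls.
import io
import contextlib

example = {
    "question": "Natalia sold clips to 48 friends in April, and half as many in May. "
                "How many clips did she sell altogether?",
    "steps": [
        {"type": "text", "content": "April sales are 48; May sales are half of that."},
        {"type": "code", "content": "april = 48\nmay = april // 2\nprint(april + may)"},
        {"type": "output", "content": "72"},  # captured interpreter output
    ],
    "answer": "72",
}

def verify(record):
    """Re-execute the code steps and check the captured output against the answer."""
    env = {}
    last_output = None
    for step in record["steps"]:
        if step["type"] == "code":
            buf = io.StringIO()
            with contextlib.redirect_stdout(buf):
                exec(step["content"], env)  # sandboxing omitted in this sketch
            last_output = buf.getvalue().strip()
    return last_output == record["answer"]

print(verify(example))  # True
```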
Related papers
- SIaM: Self-Improving Code-Assisted Mathematical Reasoning of Large Language Models [54.78329741186446]
We propose a novel paradigm that uses a code-based critic model to guide steps including question-code data construction, quality control, and complementary evaluation.
Experiments across both in-domain and out-of-domain benchmarks in English and Chinese demonstrate the effectiveness of the proposed paradigm.
arXiv Detail & Related papers (2024-08-28T06:33:03Z)
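To make the quality-control step in the SIaM entry above concrete, the sketch below filters question-code pairs by executing the generated code and comparing its result with a reference answer. This executable check is a simplified, rule-based stand-in for the paper's learned code-based critic, and the function names are invented for illustration.

```python
# Illustrative quality-control filter: keep a model-generated Python solution
# only if executing it reproduces the reference answer. This is a simplified
# stand-in for SIaM's learned code-based critic, not the paper's method.
def run_candidate(code: str):
    """Execute candidate solution code and return the value bound to `answer`, if any."""
    env: dict = {}
    try:
        exec(code, env)  # a real pipeline would sandbox and time-limit this
    except Exception:
        return None
    return str(env.get("answer"))

def keep(code: str, reference: str) -> bool:
    """Retain a question-code pair only when execution matches the reference."""
    return run_candidate(code) == reference

candidate = "answer = 48 + 48 // 2"
print(keep(candidate, "72"))  # True: this pair would be kept for training
```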
- Reliable Reasoning Beyond Natural Language [0.047888359248129786]
Large language models (LLMs) often exhibit limitations in their ability to reason reliably and flexibly.
We propose a neurosymbolic approach that prompts LLMs to extract and encode all relevant information from a problem statement as logical code statements.
We then use a logic programming language (Prolog) to conduct the iterative computations of explicit deductive reasoning.
arXiv Detail & Related papers (2024-07-16T04:34:18Z)
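The neurosymbolic recipe in the entry above separates extraction (handled by the LLM) from deduction (handled by a logic engine). The toy sketch below mimics that split in Python rather than Prolog, so it illustrates only the extract-then-compute pattern on assumed facts, not the paper's actual pipeline.

```python
# Hand-written stand-in for the "extracted" logical statements, followed by one
# deterministic forward-chaining step. The facts and rule are toy assumptions.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive_grandparents(fact_set):
    """Apply one rule: parent(X, Y) and parent(Y, Z) implies grandparent(X, Z)."""
    derived = set()
    for rel1, x, y1 in fact_set:
        for rel2, y2, z in fact_set:
            if rel1 == rel2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

print(derive_grandparents(facts))  # {('grandparent', 'alice', 'carol')}
```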
- MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time [51.5039731721706]
MindStar is a purely inference-based search method for large language models.
It formulates reasoning tasks as search problems and proposes two search strategies to identify the optimal reasoning paths.
It significantly enhances the reasoning abilities of open-source models, such as Llama-2-13B and Mistral-7B, and achieves comparable performance to GPT-3.5 and Grok-1.
arXiv Detail & Related papers (2024-05-25T15:07:33Z)
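The MindStar entry above casts multi-step reasoning as a search over partial solution paths at inference time. The best-first loop below is a generic sketch of that framing: the expand and score callables, standing in for an LLM step proposer and a reward model, are placeholders rather than MindStar's actual components or its two proposed search strategies.

```python
# Generic best-first search over partial reasoning paths, scored by a
# placeholder reward function. A sketch of the framing, not MindStar itself.
import heapq

def best_first_reasoning(question, expand, score, is_final, max_expansions=50):
    """Greedily grow the highest-scoring partial reasoning path."""
    frontier = [(-score([]), [])]  # max-heap via negated scores
    for _ in range(max_expansions):
        if not frontier:
            break
        neg_score, path = heapq.heappop(frontier)
        if is_final(path):
            return path
        for step in expand(question, path):
            new_path = path + [step]
            heapq.heappush(frontier, (-score(new_path), new_path))
    return None

# Toy usage with stub functions standing in for the LLM and the reward model.
steps_pool = ["compute 48", "halve it to get 24", "add to get 72"]
expand = lambda q, path: [steps_pool[len(path)]] if len(path) < len(steps_pool) else []
score = lambda path: len(path)          # pretend longer valid paths score higher
is_final = lambda path: len(path) == len(steps_pool)
print(best_first_reasoning("GSM8K item", expand, score, is_final))
```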
- JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models [110.45794710162241]
Existing work either collects large-scale math-related texts for pre-training, or relies on stronger LLMs to synthesize massive math problems.
We propose training a small LLM for math problem synthesis to efficiently generate sufficient high-quality pre-training data.
We leverage it to synthesize 6 million math problems for pre-training our JiuZhang3.0 model, which only needs to invoke GPT-4 API 9.3k times and pre-train on 4.6B data.
arXiv Detail & Related papers (2024-05-23T09:43:19Z)
- ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline [42.61538071832468]
Large language models (LLMs) have shown excellent mastering of human language, but still struggle in real-world applications that require mathematical problem-solving.
We tailor a Self-Critique pipeline that addresses this challenge during the feedback learning stage of LLM alignment.
arXiv Detail & Related papers (2024-04-03T17:51:18Z)
- An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning [13.11991777772918]
Large language models (LLMs) are displaying emergent abilities for math reasoning tasks.
In this paper, we aim to explore a general data strategy for supervised data to help optimize and expand math reasoning ability.
arXiv Detail & Related papers (2024-02-23T17:38:43Z)
- MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning [63.80739044622555]
We introduce MuSR, a dataset for evaluating language models on soft reasoning tasks specified in a natural language narrative.
This dataset has two crucial features. First, it is created through a novel neurosymbolic synthetic-to-natural generation algorithm.
Second, our dataset instances are free text narratives corresponding to real-world domains of reasoning.
arXiv Detail & Related papers (2023-10-24T17:59:20Z)
- Simultaneous Machine Translation with Large Language Models [51.470478122113356]
We investigate the possibility of applying Large Language Models to SimulMT tasks.
We conducted experiments using the Llama2-7b-chat model on nine different languages from the MuST-C dataset.
The results show that LLM outperforms dedicated MT models in terms of BLEU and LAAL metrics.
arXiv Detail & Related papers (2023-09-13T04:06:47Z)
- CodeGen2: Lessons for Training LLMs on Programming and Natural Languages [116.74407069443895]
We unify encoder- and decoder-based models into a single prefix-LM.
For learning methods, we explore the claim of a "free lunch" hypothesis.
For data distributions, the effect of a mixture distribution and multi-epoch training of programming and natural languages on model performance is explored.
arXiv Detail & Related papers (2023-05-03T17:55:25Z)
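As a rough illustration of the prefix-LM idea mentioned in the CodeGen2 entry above, the snippet below builds an attention mask in which prefix tokens attend bidirectionally while suffix tokens attend causally. It is a conceptual sketch in NumPy, not CodeGen2's implementation.

```python
# Prefix-LM attention mask: bidirectional over the prefix, causal over the rest.
import numpy as np

def prefix_lm_mask(seq_len: int, prefix_len: int) -> np.ndarray:
    """Return a boolean mask: True where query position i may attend to key j."""
    causal = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # decoder-style mask
    causal[:, :prefix_len] = True                              # prefix fully visible
    return causal

print(prefix_lm_mask(seq_len=5, prefix_len=2).astype(int))
```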