Modeling Complex Mathematical Reasoning via Large Language Model based
MathAgent
- URL: http://arxiv.org/abs/2312.08926v2
- Date: Sun, 17 Dec 2023 03:34:36 GMT
- Title: Modeling Complex Mathematical Reasoning via Large Language Model based
MathAgent
- Authors: Haoran Liao, Qinyi Du, Shaohua Hu, Hao He, Yanyan Xu, Jidong Tian,
Yaohui Jin
- Abstract summary: Large language models (LLMs) face challenges in solving complex mathematical problems.
We propose a formal description of the mathematical solving process and extend LLMs with an agent-based zero-shot framework.
Experiments on miniF2F and MATH demonstrate the effectiveness of PRER and the proposed MathAgents.
- Score: 15.81048994298046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) face challenges in solving complex mathematical
problems that require comprehensive capacities to parse the statements,
associate domain knowledge, perform compound logical reasoning, and integrate
the intermediate rationales. Tackling all these problems at once could be arduous
for LLMs, thus leading to confusion in generation. In this work, we explore the
potential of enhancing LLMs with agents through meticulous decomposition and
modeling of the mathematical reasoning process. Specifically, we propose a formal
description of the mathematical solving process and extend LLMs with an agent-based
zero-shot framework named
$\bf{P}$lanner-$\bf{R}$easoner-$\bf{E}$xecutor-$\bf{R}$eflector (PRER). We
further provide and implement two MathAgents that define the logical forms and
inherent relations via a pool of actions at different granularities and orientations:
MathAgent-M adapts its actions to LLMs, while MathAgent-H aligns with human
reasoning. Experiments on miniF2F and MATH demonstrate the effectiveness
of PRER and the proposed MathAgents, achieving increases of
$12.3\%$ ($53.9\% \xrightarrow{} 66.2\%$) on miniF2F, $9.2\%$
($49.8\% \xrightarrow{} 59.0\%$) on MATH, and
$13.2\%$ ($23.2\% \xrightarrow{} 35.4\%$) on level-5 problems of MATH, all against
GPT-4. Further analytical results provide more insightful perspectives on
exploiting the behaviors of LLMs as agents.
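To make the division of labor concrete, here is a minimal sketch of a PRER-style Planner-Reasoner-Executor-Reflector loop (an assumed structure: `call_llm`, the prompts, and the stopping rule are hypothetical stand-ins, not the paper's actual agents):

```python
# Minimal sketch of a PRER-style (Planner-Reasoner-Executor-Reflector) loop.
# `call_llm` is a hypothetical stand-in for any chat-completion client; the
# prompts and control flow are illustrative, not the paper's exact agents.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up an LLM client here")

def solve(problem: str, max_rounds: int = 3) -> str:
    steps: list[str] = []
    for _ in range(max_rounds):
        # Planner: decompose the problem into the next sub-goal.
        plan = call_llm(f"Problem: {problem}\nSteps so far: {steps}\n"
                        "Propose the next sub-goal.")
        # Reasoner: derive a rationale for the chosen sub-goal.
        rationale = call_llm(f"Sub-goal: {plan}\nReason step by step.")
        # Executor: carry out the concrete action (e.g. a calculation).
        result = call_llm(f"Execute: {rationale}\nReturn only the result.")
        steps.append(result)
        # Reflector: inspect progress and decide whether to stop.
        verdict = call_llm(f"Problem: {problem}\nSteps: {steps}\n"
                           "Is the problem fully solved? Answer yes or no.")
        if verdict.strip().lower().startswith("yes"):
            break
    return "\n".join(steps)
```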
Related papers
- FLARE: Faithful Logic-Aided Reasoning and Exploration [50.9814063216852]
We introduce a novel approach for traversing the problem space using task decompositions.
We use Large Language Models to plan a solution and to soft-formalise the query into facts and predicates using logic programming code.
Our method allows us to compute the faithfulness of the reasoning process w.r.t. the generated code and analyse the steps of the multi-hop search without relying on external solvers.
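As a toy illustration of the faithfulness check, one can count how many atoms cited in a reasoning trace are grounded in the generated logic program (the regex-based atom extractor and the overlap score are illustrative assumptions, not the paper's exact metric):

```python
# Toy faithfulness check: what fraction of the logical atoms cited in a
# reasoning trace actually appear in the soft-formalised logic program?
import re

def extract_atoms(text: str) -> set[str]:
    # Treat substrings like "parent(alice, bob)" as logical atoms.
    return set(re.findall(r"\w+\([^)]*\)", text))

def faithfulness(reasoning_trace: str, logic_program: str) -> float:
    used = extract_atoms(reasoning_trace)
    declared = extract_atoms(logic_program)
    if not used:
        return 0.0
    # Fraction of atoms cited in the trace that are grounded in the program.
    return len(used & declared) / len(used)

program = (
    "parent(alice, bob).\n"
    "parent(bob, carol).\n"
    "grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."
)
trace = ("Since parent(alice, bob) and parent(bob, carol), "
         "we conclude grandparent(alice, carol).")
print(faithfulness(trace, program))  # 2 of 3 cited atoms are declared -> ~0.67
```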
arXiv Detail & Related papers (2024-10-14T19:39:11Z)
- HARDMath: A Benchmark Dataset for Challenging Problems in Applied Mathematics [1.5716764919736026]
We introduce HARDMath, a dataset featuring challenging applied mathematics problems that require analytical approximation techniques.
Our framework auto-generates a large number of problems with solutions validated against numerical ground truths.
We evaluate both open- and closed-source LLMs on HARDMath-mini, a sub-sampled test set of 366 problems, as well as on 40 word problems formulated in applied science contexts.
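The validation step can be pictured with a toy perturbation problem: check a candidate analytical approximation for the real root of $\epsilon x^5 + x - 1 = 0$ against a numerical solve (the equation, the first-order candidate $x \approx 1 - \epsilon$, and the printed errors are illustrative choices, not items from the dataset):

```python
# Sketch of validating an analytical approximation against a numerical
# ground truth, HARDMath-style, on a classic perturbation problem.
import numpy as np

def numerical_root(eps: float) -> float:
    # Ground truth: the real root of eps*x**5 + x - 1 near x = 1.
    roots = np.roots([eps, 0, 0, 0, 1, -1])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[np.argmin(np.abs(real - 1.0))]

def candidate(eps: float) -> float:
    # First-order perturbation expansion x ~ 1 - eps (a hypothetical
    # model answer for the small-eps regime).
    return 1.0 - eps

for eps in [1e-3, 1e-2, 1e-1]:
    truth = numerical_root(eps)
    err = abs(candidate(eps) - truth)
    print(f"eps={eps:g}  numeric={truth:.6f}  "
          f"candidate={candidate(eps):.6f}  err={err:.2e}")
```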
arXiv Detail & Related papers (2024-10-13T20:09:41Z)
- AI-Assisted Generation of Difficult Math Questions [78.7547836422727]
Current LLM training positions mathematical reasoning as a core capability.
There is unmet demand for diverse and challenging math questions.
We present a design framework that combines the strengths of LLMs with a human-in-the-loop approach.
arXiv Detail & Related papers (2024-07-30T17:55:36Z)
- MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time [51.5039731721706]
MindStar is a purely inference-based searching method for large language models.
It formulates reasoning tasks as search problems and proposes two search ideas to identify optimal reasoning paths.
It significantly enhances the reasoning abilities of open-source models, such as Llama-2-13B and Mistral-7B, and achieves comparable performance to GPT-3.5 and Grok-1.
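In that spirit, a minimal best-first search over partial reasoning paths could look like the sketch below (`propose_steps`, `score_path`, and the `Answer:` terminal convention are hypothetical stand-ins for the paper's step generator, reward model, and stopping rule):

```python
# Sketch of inference-time search over reasoning paths: expand candidate
# next steps with an LLM and keep the best partial paths under a reward.
import heapq

def propose_steps(path: list[str], k: int = 3) -> list[str]:
    raise NotImplementedError("sample k candidate next steps from the LLM")

def score_path(path: list[str]) -> float:
    raise NotImplementedError("reward-model score for a partial path")

def best_first_search(question: str, max_expansions: int = 50) -> list[str]:
    # Max-heap via negated scores; each entry is (-score, path).
    frontier = [(-score_path([question]), [question])]
    for _ in range(max_expansions):
        _, path = heapq.heappop(frontier)
        if path[-1].startswith("Answer:"):  # terminal step reached
            return path
        for step in propose_steps(path):
            new_path = path + [step]
            heapq.heappush(frontier, (-score_path(new_path), new_path))
    # Budget exhausted: return the highest-scoring partial path.
    return max((p for _, p in frontier), key=score_path)
```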
arXiv Detail & Related papers (2024-05-25T15:07:33Z)
- MACM: Utilizing a Multi-Agent System for Condition Mining in Solving Complex Mathematical Problems [10.517708404982624]
This paper introduces the Multi-Agent System for Condition Mining (MACM) prompting method.
It resolves intricate mathematical problems and demonstrates strong generalization capabilities across various mathematical contexts.
With the assistance of MACM, the accuracy of GPT-4 Turbo on the most challenging level-five mathematical problems in the MATH dataset increases from $54.68\%$ to $76.73\%$.
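A rough sketch of what an iterative condition-mining loop of this kind could look like (all three agent functions are hypothetical LLM-call stubs; the paper's actual agent roles and prompts may differ):

```python
# Sketch of multi-agent condition mining: one agent proposes new conditions,
# a second verifies them, and the loop stops when the objective is reachable.

def mine(problem: str, known: list[str]) -> list[str]:
    raise NotImplementedError("LLM: propose conditions implied by `known`")

def verify(problem: str, condition: str) -> bool:
    raise NotImplementedError("LLM: check the condition actually follows")

def reachable(problem: str, known: list[str]) -> bool:
    raise NotImplementedError("LLM: can the objective be derived from `known`?")

def condition_mining(problem: str, max_rounds: int = 5) -> list[str]:
    known: list[str] = []
    for _ in range(max_rounds):
        for cond in mine(problem, known):
            if cond not in known and verify(problem, cond):
                known.append(cond)
        if reachable(problem, known):
            break
    return known
```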
arXiv Detail & Related papers (2024-04-06T21:39:01Z)
- Can LLMs Master Math? Investigating Large Language Models on Math Stack Exchange [25.419977967846144]
Large Language Models (LLMs) have demonstrated exceptional capabilities in various natural language tasks.
This paper explores the current limitations of LLMs in navigating complex mathematical problem-solving.
arXiv Detail & Related papers (2024-03-30T12:48:31Z)
- Can Large Language Models Play Games? A Case Study of A Self-Play Approach [61.15761840203145]
Large Language Models (LLMs) harness extensive data from the Internet, storing a broad spectrum of prior knowledge.
Monte-Carlo Tree Search (MCTS) is a search algorithm that provides reliable decision-making solutions.
This work introduces an innovative approach that bolsters LLMs with MCTS self-play to efficiently resolve turn-based zero-sum games.
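A compact sketch of how an LLM can slot into MCTS as the position evaluator (the game-interface functions are hypothetical stubs; the UCT selection rule itself is standard):

```python
# Sketch of MCTS with an LLM as value estimator for a turn-based
# zero-sum game; `legal_moves`, `apply_move`, `llm_value` are stubs.
import math
import random

def legal_moves(state) -> list:
    raise NotImplementedError("game-specific move generator")

def apply_move(state, move):
    raise NotImplementedError("game-specific transition function")

def llm_value(state) -> float:
    raise NotImplementedError("LLM scores the position in [-1, 1]")

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # move -> Node
        self.visits, self.value = 0, 0.0

def uct(node, c: float = 1.4) -> float:
    # Standard UCT: exploitation term plus exploration bonus.
    if node.visits == 0:
        return float("inf")
    return (node.value / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def search(root_state, iters: int = 200):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = max(node.children.values(), key=uct)
        # Expansion: try one untried move, if any remain.
        untried = [m for m in legal_moves(node.state) if m not in node.children]
        if untried:
            move = random.choice(untried)
            child = Node(apply_move(node.state, move), parent=node)
            node.children[move] = child
            node = child
        # Evaluation: the LLM replaces the random rollout as value estimator.
        reward = llm_value(node.state)
        # Backpropagation, flipping the sign at each ply (zero-sum game).
        while node is not None:
            node.visits += 1
            node.value += reward
            reward, node = -reward, node.parent
    return max(root.children, key=lambda m: root.children[m].visits)
```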
arXiv Detail & Related papers (2024-03-08T19:16:29Z)
- InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning [98.53491178426492]
We open-source our math reasoning LLMs InternLM-Math, which are continually pre-trained from InternLM2.
We unify chain-of-thought reasoning, reward modeling, formal reasoning, data augmentation, and code interpreter in a unified seq2seq format.
Our pre-trained model achieves 30.3 on the MiniF2F test set without fine-tuning.
arXiv Detail & Related papers (2024-02-09T11:22:08Z)
- Evaluating Language Models for Mathematics through Interactions [116.67206980096513]
We introduce CheckMate, a prototype platform for humans to interact with and evaluate large language models (LLMs).
We conduct a study with CheckMate to evaluate three language models (InstructGPT, ChatGPT, and GPT-4) as assistants in proving undergraduate-level mathematics.
We derive a taxonomy of human behaviours and uncover that despite a generally positive correlation, there are notable instances of divergence between correctness and perceived helpfulness.
arXiv Detail & Related papers (2023-06-02T17:12:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.