FineMath: A Fine-Grained Mathematical Evaluation Benchmark for Chinese
Large Language Models
- URL: http://arxiv.org/abs/2403.07747v1
- Date: Tue, 12 Mar 2024 15:32:39 GMT
- Title: FineMath: A Fine-Grained Mathematical Evaluation Benchmark for Chinese
Large Language Models
- Authors: Yan Liu, Renren Jin, Lin Shi, Zheng Yao, Deyi Xiong
- Abstract summary: FineMath is a fine-grained mathematical evaluation benchmark dataset for assessing Chinese Large Language Models (LLMs).
FineMath is created to cover the major mathematical concepts taught in elementary school math, which are divided into 17 categories of math word problems.
All 17 categories of math word problems are manually annotated with difficulty levels according to the number of reasoning steps required to solve them.
- Score: 47.560637703675816
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To thoroughly assess the mathematical reasoning abilities of Large Language
Models (LLMs), we need to carefully curate evaluation datasets covering diverse
mathematical concepts and mathematical problems at different difficulty levels.
In pursuit of this objective, we propose FineMath in this paper, a fine-grained
mathematical evaluation benchmark dataset for assessing Chinese LLMs. FineMath
is created to cover the major mathematical concepts taught in elementary
school math, which are further divided into 17 categories of math word
problems, enabling in-depth analysis of the mathematical reasoning abilities of
LLMs. All 17 categories of math word problems are manually annotated with
difficulty levels according to the number of reasoning steps required to solve
them. We conduct extensive experiments on a wide range of LLMs on FineMath and
find that there is still considerable room for improvement in the mathematical
reasoning capabilities of Chinese LLMs. We also carry out an in-depth analysis
of the evaluation processes and methods, which have previously been overlooked.
These two factors significantly influence the evaluation results and our
understanding of models' mathematical reasoning capabilities. The
dataset will be publicly available soon.
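The fine-grained structure described above lends itself to category- and difficulty-wise scoring. Below is a minimal Python sketch of how such an evaluation might be aggregated; the field names (`category`, `difficulty` as a reasoning-step count) and the `model_answers` mapping are illustrative assumptions, not the released FineMath schema.

```python
from collections import defaultdict

# Hypothetical FineMath-style items: each math word problem carries a
# category label (one of 17) and a difficulty level derived from the
# number of reasoning steps needed to solve it. Field names are assumed.
items = [
    {"id": 1, "category": "fractions", "difficulty": 1, "answer": "3/4"},
    {"id": 2, "category": "fractions", "difficulty": 3, "answer": "5/6"},
    {"id": 3, "category": "unit_conversion", "difficulty": 2, "answer": "120"},
]

# Model predictions keyed by item id (illustrative placeholder).
model_answers = {1: "3/4", 2: "2/3", 3: "120"}

def accuracy_by(items, predictions, key):
    """Aggregate exact-match accuracy over a chosen annotation field."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        group = item[key]
        total[group] += 1
        if predictions.get(item["id"]) == item["answer"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

print(accuracy_by(items, model_answers, "category"))    # per-concept scores
print(accuracy_by(items, model_answers, "difficulty"))  # per-reasoning-step scores
```

In practice, the answer-extraction step (how a final answer is parsed from a free-form model response) can influence the scores as much as the grouping itself, which is the kind of overlooked evaluation detail the abstract points to; exact match is used here only to keep the sketch short.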
Related papers
- Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist [46.670206614087334]
We argue that if a model really understands a problem, it should be able to apply that understanding robustly across a diverse array of tasks.
MathCheck is a well-designed checklist for testing task generalization and reasoning.
MathCheck better reflects true mathematical abilities and represents mathematical intelligence more linearly.
arXiv Detail & Related papers (2024-07-11T17:58:58Z)
- MathBench: Evaluating the Theory and Application Proficiency of LLMs with a Hierarchical Mathematics Benchmark [82.64129627675123]
MathBench is a new benchmark that rigorously assesses the mathematical capabilities of large language models.
MathBench spans a wide range of mathematical disciplines, offering a detailed evaluation of both theoretical understanding and practical problem-solving skills.
arXiv Detail & Related papers (2024-05-20T17:52:29Z)
- Mathify: Evaluating Large Language Models on Mathematical Problem Solving Tasks [34.09857430966818]
We introduce an extensive mathematics dataset called "MathQuest" sourced from the 11th and 12th standard Mathematics NCERT textbooks.
We conduct fine-tuning experiments with three prominent large language models: LLaMA-2, WizardMath, and MAmmoTH.
Our experiments reveal that among the three models, MAmmoTH-13B emerges as the most proficient, achieving the highest level of competence in solving the presented mathematical problems.
arXiv Detail & Related papers (2024-04-19T08:45:42Z)
- MathScale: Scaling Instruction Tuning for Mathematical Reasoning [70.89605383298331]
Large language models (LLMs) have demonstrated remarkable capabilities in problem-solving.
However, their proficiency in solving mathematical problems remains inadequate.
We propose MathScale, a simple and scalable method to create high-quality mathematical reasoning data.
arXiv Detail & Related papers (2024-03-05T11:42:59Z)
- GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers [68.77382332826167]
Large language models (LLMs) have achieved impressive performance across various mathematical reasoning benchmarks.
One essential and frequently observed piece of evidence is that when math questions are slightly changed, LLMs can behave incorrectly.
This motivates us to evaluate the robustness of LLMs' math reasoning capability by testing a wide range of question variations.
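As a rough illustration of this kind of robustness check (not the GSM-Plus pipeline itself), one can perturb the numbers in a seed question, recompute the gold answer programmatically, and measure how often a model stays correct across the variants. The question template and the `ask_model` stub below are assumptions made for the sketch.

```python
import random

def make_variant(a: int, b: int):
    """Build a numerically perturbed variant of a simple seed word problem
    together with its recomputed gold answer."""
    question = f"Tom has {a} apples and buys {b} more. How many apples does he have now?"
    return question, a + b

def ask_model(question: str) -> int:
    """Placeholder for a real LLM call; assumed to return an integer answer.
    Replace with an actual model query before running."""
    raise NotImplementedError

def robustness_score(n_variants: int = 8, seed: int = 0) -> float:
    """Fraction of perturbed variants answered correctly: a coarse proxy
    for robustness to slight changes in the question."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_variants):
        question, gold = make_variant(rng.randint(2, 50), rng.randint(2, 50))
        if ask_model(question) == gold:
            correct += 1
    return correct / n_variants
```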
arXiv Detail & Related papers (2024-02-29T15:26:14Z)
- ConceptMath: A Bilingual Concept-wise Benchmark for Measuring Mathematical Reasoning of Large Language Models [67.32868432113587]
This paper introduces ConceptMath, a fine-grained benchmark that evaluates concept-wise mathematical reasoning of Large Language Models (LLMs).
Unlike traditional benchmarks that evaluate general mathematical reasoning with an average accuracy, ConceptMath systematically organizes math problems under a hierarchy of math concepts.
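To make the concept-hierarchy idea concrete, here is a small sketch in the same illustrative spirit as the earlier snippet (not ConceptMath's actual data format) that rolls per-problem results up through a tree of math concepts, so that weaknesses can be localized to specific sub-concepts rather than hidden in a single average accuracy.

```python
# Illustrative concept hierarchy: child concept -> parent concept (assumed labels).
parents = {
    "adding_fractions": "fractions",
    "comparing_fractions": "fractions",
    "fractions": "arithmetic",
}

# Per-problem results tagged with their finest-grained concept (assumed format).
results = [
    ("adding_fractions", True),
    ("adding_fractions", False),
    ("comparing_fractions", True),
]

def rollup(results, parents):
    """Accumulate (correct, total) counts at every level of the concept tree
    and return per-concept accuracy."""
    counts = {}
    for concept, is_correct in results:
        node = concept
        while node is not None:
            correct, total = counts.get(node, (0, 0))
            counts[node] = (correct + int(is_correct), total + 1)
            node = parents.get(node)
    return {concept: correct / total for concept, (correct, total) in counts.items()}

print(rollup(results, parents))
# e.g. adding_fractions = 0.5, fractions ~ 0.67, arithmetic ~ 0.67, comparing_fractions = 1.0
```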
arXiv Detail & Related papers (2024-02-22T16:06:49Z)
- JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike texts in other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols, and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.