ConceptMath: A Bilingual Concept-wise Benchmark for Measuring
Mathematical Reasoning of Large Language Models
- URL: http://arxiv.org/abs/2402.14660v2
- Date: Fri, 23 Feb 2024 07:13:00 GMT
- Title: ConceptMath: A Bilingual Concept-wise Benchmark for Measuring
Mathematical Reasoning of Large Language Models
- Authors: Yanan Wu, Jie Liu, Xingyuan Bu, Jiaheng Liu, Zhanhui Zhou, Yuanxing
Zhang, Chenchen Zhang, Zhiqi Bai, Haibin Chen, Tiezheng Ge, Wanli Ouyang,
Wenbo Su, Bo Zheng
- Abstract summary: This paper introduces ConceptMath, a fine-grained benchmark that evaluates concept-wise mathematical reasoning of Large Language Models (LLMs).
Unlike traditional benchmarks that evaluate general mathematical reasoning with an average accuracy, ConceptMath systematically organizes math problems under a hierarchy of math concepts.
- Score: 67.32868432113587
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper introduces ConceptMath, a bilingual (English and Chinese),
fine-grained benchmark that evaluates concept-wise mathematical reasoning of
Large Language Models (LLMs). Unlike traditional benchmarks that evaluate
general mathematical reasoning with an average accuracy, ConceptMath
systematically organizes math problems under a hierarchy of math concepts, so
that mathematical reasoning can be evaluated at different granularity with
concept-wise accuracies. Based on ConceptMath, we evaluate a broad range of
LLMs and observe that existing LLMs, despite achieving high average accuracies
on traditional benchmarks, exhibit significant performance variations across
different math concepts and may even fail catastrophically on the most basic
ones. We also introduce an efficient fine-tuning strategy to address these
weaknesses. Finally, we hope ConceptMath can guide developers to understand the
fine-grained mathematical abilities of their models and facilitate the
development of foundation models.
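The concept-wise evaluation described in the abstract can be illustrated with a minimal sketch: grouping per-problem correctness by each problem's concept path in the hierarchy and reporting accuracy at every level. The record format and concept names below are hypothetical assumptions for illustration; ConceptMath's actual data schema and evaluation harness may differ.

```python
from collections import defaultdict

# Hypothetical records: (concept path within the hierarchy, model answered correctly?)
records = [
    (("Arithmetic", "Fractions"), True),
    (("Arithmetic", "Fractions"), False),
    (("Algebra", "Linear Equations"), True),
]

def concept_accuracies(records):
    """Aggregate accuracy at every level of the concept hierarchy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for path, correct in records:
        # Credit every prefix of the concept path, so accuracy is available
        # at both coarse (e.g. "Arithmetic") and fine granularity.
        for depth in range(1, len(path) + 1):
            key = path[:depth]
            totals[key] += 1
            hits[key] += int(correct)
    return {key: hits[key] / totals[key] for key in totals}

for concept, acc in sorted(concept_accuracies(records).items()):
    print(" / ".join(concept), f"{acc:.2f}")
```

Reporting accuracy per concept, rather than a single average, is what exposes the catastrophic failures on basic concepts that the paper observes.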
Related papers
- MathHay: An Automated Benchmark for Long-Context Mathematical Reasoning in LLMs [61.74749961334557]
MathHay is an automated benchmark designed to assess the long-context mathematical reasoning capabilities of LLMs.
We conduct extensive experiments on MathHay to assess the long-context mathematical reasoning abilities of eight top-performing models.
arXiv Detail & Related papers (2024-10-07T02:30:07Z)
- MathScape: Evaluating MLLMs in multimodal Math Scenarios through a Hierarchical Benchmark [29.9945601202065]
We propose MathScape, a new benchmark that emphasizes the understanding and application of combined visual and textual information.
MathScape is designed to evaluate photo-based math problem scenarios, assessing the theoretical understanding and application ability of MLLMs.
We conduct a multi-dimensional evaluation on 11 advanced MLLMs, revealing that our benchmark is challenging even for the most sophisticated models.
arXiv Detail & Related papers (2024-08-14T13:23:43Z)
- MathLearner: A Large Language Model Agent Framework for Learning to Solve Mathematical Problems [0.936726079405677]
We propose an agent framework for learning to solve mathematical problems based on inductive reasoning.
By emulating the human learning process of generalizing from learned information, the framework achieves strong performance on mathematical reasoning.
Our model can be used as a personalised learning aid, thus reducing the inequality of educational resources.
arXiv Detail & Related papers (2024-08-03T13:28:19Z)
- Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist [46.670206614087334]
We argue that if a model really understands a problem, it should be robustly applied across a diverse array of tasks.
MathCheck is a well-designed checklist for testing task generalization and reasoning.
MathCheck better reflects true mathematical abilities and represents mathematical intelligence more linearly.
arXiv Detail & Related papers (2024-07-11T17:58:58Z)
- MathBench: Evaluating the Theory and Application Proficiency of LLMs with a Hierarchical Mathematics Benchmark [82.64129627675123]
MathBench is a new benchmark that rigorously assesses the mathematical capabilities of large language models.
MathBench spans a wide range of mathematical disciplines, offering a detailed evaluation of both theoretical understanding and practical problem-solving skills.
arXiv Detail & Related papers (2024-05-20T17:52:29Z)
- FineMath: A Fine-Grained Mathematical Evaluation Benchmark for Chinese Large Language Models [44.63505885248145]
FineMath is a fine-grained mathematical evaluation benchmark for assessing Chinese Large Language Models (LLMs).
FineMath covers the key mathematical concepts taught in elementary-school math, divided into 17 categories of math word problems.
All 17 categories of math word problems are manually annotated with difficulty levels according to the number of reasoning steps required to solve them.
arXiv Detail & Related papers (2024-03-12T15:32:39Z)
- GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers [68.77382332826167]
Large language models (LLMs) have achieved impressive performance across various mathematical reasoning benchmarks.
One essential and frequently observed piece of evidence is that LLMs can behave incorrectly when math questions are slightly changed.
This motivates us to evaluate the robustness of LLMs' math reasoning capability by testing a wide range of question variations.
arXiv Detail & Related papers (2024-02-29T15:26:14Z)
- JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike texts in standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols, and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z)