EULER: Enhancing the Reasoning Ability of Large Language Models through Error-Induced Learning
- URL: http://arxiv.org/abs/2505.22131v1
- Date: Wed, 28 May 2025 08:57:03 GMT
- Title: EULER: Enhancing the Reasoning Ability of Large Language Models through Error-Induced Learning
- Authors: Zhuoyang Wu, Xinze Li, Zhenghao Liu, Yukun Yan, Zhiyuan Liu, Minghe Yu, Cheng Yang, Yu Gu, Ge Yu, Maosong Sun
- Abstract summary: Large Language Models (LLMs) have demonstrated strong reasoning capabilities. The Error-IndUced LEaRning (EULER) model aims to develop an error exposure model that generates high-quality solution errors.
- Score: 66.82956219777763
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large Language Models (LLMs) have demonstrated strong reasoning capabilities and achieved promising results in mathematical problem-solving tasks. Learning from errors offers the potential to further enhance the performance of LLMs during Supervised Fine-Tuning (SFT). However, the errors in synthesized solutions are typically gathered from sampling trials, making it challenging to generate solution errors for each mathematical problem. This paper introduces the Error-IndUced LEaRning (EULER) model, which aims to develop an error exposure model that generates high-quality solution errors to enhance the mathematical reasoning capabilities of LLMs. Specifically, EULER optimizes the error exposure model to increase the generation probability of self-made solution errors while utilizing solutions produced by a superior LLM to regularize the generation quality. Our experiments across various mathematical problem datasets demonstrate the effectiveness of the EULER model, achieving an improvement of over 4% compared to all baseline models. Further analysis reveals that EULER is capable of synthesizing more challenging and educational solution errors, which facilitate both the training and inference processes of LLMs. All code is available at https://github.com/NEUIR/EULER.
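The abstract describes a two-term training signal: raise the probability of the exposure model's own solution errors while regularizing generation quality with a superior LLM's solutions. Below is a minimal sketch of one such objective, assuming a causal LM that maps token ids directly to logits; `seq_logprob`, `lam`, and the exact two-term form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def seq_logprob(model, prompt_ids, completion_ids):
    """Summed log-probability of `completion_ids` given `prompt_ids`
    under a causal LM assumed to map input_ids -> logits [B, T, V]."""
    input_ids = torch.cat([prompt_ids, completion_ids], dim=1)
    logits = model(input_ids)
    # Logits at position t predict the token at position t + 1.
    comp_logits = logits[:, prompt_ids.size(1) - 1 : -1, :]
    logps = F.log_softmax(comp_logits, dim=-1)
    tok = logps.gather(-1, completion_ids.unsqueeze(-1)).squeeze(-1)
    return tok.sum(dim=-1)

def error_exposure_loss(model, problem, self_error, expert_solution, lam=0.5):
    """EULER-style objective (sketch): increase the likelihood of the model's
    self-made solution errors, regularized toward a superior LLM's solution."""
    err_term = -seq_logprob(model, problem, self_error)       # expose errors
    reg_term = -seq_logprob(model, problem, expert_solution)  # quality anchor
    return (err_term + lam * reg_term).mean()
```

The `lam` weight trades error exposure against generation quality; the paper's actual regularizer may take a different form.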
Related papers
- Error Detection and Correction for Interpretable Mathematics in Large Language Models [5.258949636570995]
EDCIM (Error Detection and Correction for Interpretable Mathematics) is a method for detecting and correcting errors in interpretable mathematics tasks. It integrates lightweight, open-source LLMs with more powerful proprietary models, balancing cost and accuracy. Experimental results show that EDCIM significantly reduces both computational and financial costs while maintaining, and even improving, prediction accuracy.
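The summary describes coupling a lightweight open-source LLM with a more powerful proprietary one to balance cost and accuracy. One plausible reading is a confidence-gated cascade, sketched below; the `Answer` type, confidence check, and threshold are illustrative assumptions, not EDCIM's actual routing rule.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # self-reported or verifier-derived score in [0, 1]

def solve_with_cascade(problem: str, cheap_llm, strong_llm,
                       threshold: float = 0.8) -> Answer:
    """Cost-aware cascade (sketch): try the lightweight open model first and
    escalate to the proprietary model only when confidence is low."""
    draft = cheap_llm(problem)  # assumed: callable returning an Answer
    if draft.confidence >= threshold:
        return draft            # cheap path, no proprietary API call
    return strong_llm(problem)  # expensive fallback for hard cases
```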
arXiv Detail & Related papers (2025-08-05T14:30:35Z)
- WarriorMath: Enhancing the Mathematical Ability of Large Language Models with a Defect-aware Framework [42.74246647841103]
WarriorMath is a defect-aware framework for mathematical problem solving. We employ multiple expert LLMs in a collaborative process to generate, critique, and refine problems. In the training stage, we introduce a progressive learning framework that iteratively fine-tunes the model using increasingly challenging data tailored to its weaknesses.
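One way to read the collaborative generation stage: one expert model drafts a problem, another critiques it, and a third refines it until the critic passes it. A sketch under those assumptions; the roles, prompt wording, and stopping rule are illustrative, not the paper's exact protocol.

```python
def generate_problem(generator, critic, refiner, topic: str,
                     max_rounds: int = 3) -> str:
    """Generate-critique-refine loop (sketch) over expert LLM callables,
    each assumed to map a prompt string to a response string."""
    problem = generator(f"Write a challenging math problem about {topic}.")
    for _ in range(max_rounds):
        critique = critic(f"List defects in this problem, or say OK:\n{problem}")
        if critique.strip() == "OK":
            break  # critic found no remaining defects
        problem = refiner(f"Problem:\n{problem}\nDefects:\n{critique}\nRevise it.")
    return problem
```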
arXiv Detail & Related papers (2025-08-02T07:45:12Z)
- LEMMA: Learning from Errors for MatheMatical Advancement in LLMs [33.571479131705075]
We introduce Learning from Errors for Mathematical Advancement (LEMMA) to enhance large language models' reasoning ability. LEMMA constructs data consisting of an incorrect solution with an erroneous step and a reflection connection to a correct solution for fine-tuning. Experimental results demonstrate that LEMMA achieves significant performance improvements over other strong baselines.
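The summary specifies a training record pairing an incorrect solution (with its erroneous step) and a reflection that connects it to a correct solution. A minimal sketch of such a record and one way to flatten it into a fine-tuning target; the field names and template are assumptions, not LEMMA's schema.

```python
from dataclasses import dataclass

@dataclass
class LemmaExample:
    problem: str
    incorrect_solution: str
    erroneous_step: int   # index of the first wrong step
    reflection: str       # text connecting the error to the fix
    correct_solution: str

def to_sft_target(ex: LemmaExample) -> str:
    """Flatten one example into a fine-tuning completion (sketch)."""
    return (f"{ex.incorrect_solution}\n"
            f"Wait, step {ex.erroneous_step} is wrong. {ex.reflection}\n"
            f"{ex.correct_solution}")
```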
arXiv Detail & Related papers (2025-03-21T17:59:10Z)
- Error Classification of Large Language Models on Math Word Problems: A Dynamically Adaptive Framework [64.83955753606443]
Math Word Problems serve as a crucial benchmark for evaluating Large Language Models' reasoning abilities. Current error classification methods rely on static and predefined categories. We introduce MWPES-300K, a comprehensive dataset containing 304,865 error samples.
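In contrast to static, predefined categories, a dynamically adaptive scheme can grow its taxonomy when a new error matches no existing category. A toy sketch of that idea using embedding similarity; the encoder, threshold, and nearest-centroid rule are assumptions, not the paper's framework.

```python
import numpy as np

def classify_adaptive(error_vec: np.ndarray, centroids: list,
                      threshold: float = 0.8) -> int:
    """Assign an embedded error sample to the most similar category centroid,
    opening a new category when nothing is similar enough (sketch)."""
    if centroids:
        sims = [float(error_vec @ c) /
                (np.linalg.norm(error_vec) * np.linalg.norm(c))
                for c in centroids]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            return best          # reuse an existing category
    centroids.append(error_vec)  # open a new category for this error type
    return len(centroids) - 1
```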
arXiv Detail & Related papers (2025-01-26T16:17:57Z)
- Multi-Objective Large Language Model Unlearning [3.372396620898397]
Gradient Ascent (GA) is a proactive way to decrease the prediction probability of the model on the target data. We propose the Multi-Objective Large Language Model Unlearning (MOLLM) algorithm to overcome gradient explosion and catastrophic forgetting. Our empirical results verify that MOLLM outperforms the SOTA GA-based LLM unlearning methods in terms of unlearning effect and model utility preservation.
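The summary contrasts plain gradient ascent, which risks gradient explosion and catastrophic forgetting, with a multi-objective formulation. Below is a hedged sketch of one common way to combine a forget objective with a utility-preserving retain objective; the fixed weighting and gradient clipping are generic stabilizers, not necessarily MOLLM's mechanism.

```python
import torch

def unlearning_step(model, optimizer, forget_loss, retain_loss,
                    alpha=1.0, beta=1.0, max_norm=1.0):
    """One update (sketch): gradient ascent on the forget set combined with
    a standard descent term on the retain set to preserve model utility."""
    loss = -alpha * forget_loss + beta * retain_loss  # ascent on forget data
    optimizer.zero_grad()
    loss.backward()
    # Clip to guard against the gradient explosion the summary mentions.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
```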
arXiv Detail & Related papers (2024-12-29T09:35:56Z)
- Subtle Errors in Reasoning: Preference Learning via Error-injected Self-editing [59.405145971637204]
We propose a novel preference learning framework called eRror-Injected Self-Editing (RISE). RISE injects predefined subtle errors into pivotal tokens in reasoning or computation steps to construct hard pairs for error mitigation. Experiments validate the effectiveness of RISE, with preference learning on Qwen2-7B-Instruct yielding notable improvements of 3.0% on GSM8K and 7.9% on MATH with only 4.5K training samples.
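The summary says RISE builds hard preference pairs by injecting predefined subtle errors at pivotal tokens of a correct solution. Below is a sketch of pair construction in the format a DPO-style trainer expects; the error catalog and the token-selection heuristic are illustrative assumptions.

```python
import random

SUBTLE_EDITS = {"+": "-", "*": "/"}  # assumed catalog of subtle operator flips

def make_preference_pair(problem: str, correct_steps: list) -> dict:
    """Build a (chosen, rejected) pair (sketch): the rejected response is the
    correct solution with one subtle error injected at a pivotal token."""
    candidates = [i for i, s in enumerate(correct_steps)
                  if any(op in s for op in SUBTLE_EDITS)]
    if not candidates:
        raise ValueError("no editable pivotal token found")
    corrupted = list(correct_steps)
    i = random.choice(candidates)  # pick one step as the "pivotal" site
    for src, dst in SUBTLE_EDITS.items():
        if src in corrupted[i]:
            corrupted[i] = corrupted[i].replace(src, dst, 1)
            break
    return {"prompt": problem,
            "chosen": "\n".join(correct_steps),
            "rejected": "\n".join(corrupted)}
```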
arXiv Detail & Related papers (2024-10-09T07:43:38Z)
- S^3cMath: Spontaneous Step-level Self-correction Makes Large Language Models Better Mathematical Reasoners [23.713779973116733]
Self-correction is a method that can stimulate the potential reasoning abilities of large language models (LLMs). We propose S$^3$c-Math, which is able to perform Spontaneous Step-level Self-correction for Mathematical reasoning.
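Step-level self-correction can be read as: generate a step, immediately verify it, and regenerate on failure before moving on. A sketch under that reading; `propose_step` and `check_step` are hypothetical callables, not the paper's API.

```python
def solve_with_step_correction(problem, propose_step, check_step,
                               max_steps=10, max_retries=2):
    """Spontaneous step-level self-correction (sketch): each step is checked
    and regenerated at most `max_retries` times before being accepted."""
    steps = []
    for _ in range(max_steps):
        step = propose_step(problem, steps)
        for _ in range(max_retries):
            if check_step(problem, steps, step):
                break
            step = propose_step(problem, steps)  # retry after a failed check
        steps.append(step)
        if step.strip().startswith("Answer:"):
            break  # the model signals it has reached the final answer
    return steps
```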
arXiv Detail & Related papers (2024-09-03T01:40:21Z)
- CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze MLLM instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a measurement to quantitatively evaluate the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
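The summary gives no formula for the auxiliary loss, so only the generic shape of such a combined objective can be sketched; the placeholder term and weight below are assumptions, not CoMMIT's actual formulation.

```python
def commit_style_loss(task_loss, aux_reg, beta: float = 0.1):
    """Combined objective (sketch): the instruction-tuning task loss plus a
    weighted auxiliary regularizer that encourages the generation
    distribution to keep updating during tuning."""
    return task_loss + beta * aux_reg
```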
arXiv Detail & Related papers (2024-07-29T23:18:55Z)
- LLMs-as-Instructors: Learning from Errors Toward Automating Model Improvement [93.38736019287224]
"LLMs-as-Instructors" framework autonomously enhances the training of smaller target models.
Inspired by the theory of "Learning from Errors", this framework employs an instructor LLM to meticulously analyze the specific errors within a target model.
Within this framework, we implement two strategies: "Learning from Error," which focuses solely on incorrect responses to tailor training data, and "Learning from Error by Contrast", which uses contrastive learning to analyze both correct and incorrect responses for a deeper understanding of errors.
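The two strategies differ in what the instructor LLM sees: incorrect responses alone ("Learning from Error") or correct and incorrect responses side by side ("Learning from Error by Contrast"). A sketch of how the instructor's analysis prompt might be assembled; the template wording is an assumption.

```python
def build_instructor_prompt(question: str, wrong: str,
                            right: str = None) -> str:
    """Assemble the analysis prompt for the instructor LLM (sketch).
    Passing `right` switches from 'Learning from Error' to
    'Learning from Error by Contrast'."""
    prompt = (f"Question: {question}\n"
              f"Target model's incorrect answer: {wrong}\n")
    if right is not None:
        prompt += f"A correct answer, for contrast: {right}\n"
    prompt += ("Analyze the specific error and propose a training example "
               "that would fix it.")
    return prompt
```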
arXiv Detail & Related papers (2024-06-29T17:16:04Z)
- Learning From Mistakes Makes LLM Better Reasoner [106.48571828587728]
Large language models (LLMs) have recently exhibited remarkable reasoning capabilities in solving math problems.
This work explores whether LLMs can LEarn from MistAkes (LEMA), akin to the human learning process.
arXiv Detail & Related papers (2023-10-31T17:52:22Z)
- Pareto Optimal Learning for Estimating Large Language Model Errors [12.21899680905672]
Large Language Models (LLMs) have shown impressive abilities in many applications.
We present a method that generates a risk score to estimate the probability of error in an LLM response by integrating multiple sources of information.
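Integrating multiple information sources into a single error-probability estimate can be done with a simple calibrated combiner; the logistic form and the example signal names below are assumptions, not the paper's Pareto-optimal method.

```python
import math

def risk_score(features: dict, weights: dict, bias: float = 0.0) -> float:
    """Map heterogeneous error signals (e.g., self-consistency disagreement,
    verifier score, retrieval mismatch) to an error probability (sketch)."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to [0, 1]
```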
arXiv Detail & Related papers (2023-06-28T21:11:15Z)