Turning Dust into Gold: Distilling Complex Reasoning Capabilities from
LLMs by Leveraging Negative Data
- URL: http://arxiv.org/abs/2312.12832v1
- Date: Wed, 20 Dec 2023 08:28:36 GMT
- Title: Turning Dust into Gold: Distilling Complex Reasoning Capabilities from
LLMs by Leveraging Negative Data
- Authors: Yiwei Li, Peiwen Yuan, Shaoxiong Feng, Boyuan Pan, Bin Sun, Xinglin
Wang, Heda Wang, Kan Li
- Abstract summary: Large Language Models (LLMs) have performed well on various reasoning tasks, but their inaccessibility and enormous parameter counts hinder wide application in practice.
We propose a model specialization framework that distills LLMs with negative samples in addition to positive ones.
We conduct extensive experiments across arithmetic reasoning tasks to demonstrate the role of negative data in distillation from LLMs.
- Score: 15.088675135566646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have performed well on various reasoning tasks,
but their inaccessibility and enormous parameter counts hinder wide application in
practice. One promising approach is to distill the reasoning ability of LLMs into
small models via generated chain-of-thought reasoning paths. In some cases,
however, LLMs may produce incorrect reasoning chains, especially when facing
complex mathematical problems. Previous studies transfer knowledge only from
positive samples and discard the synthesized data with wrong answers. In this
work, we illustrate the merit of negative data and propose a model
specialization framework that distills LLMs with negative samples in addition to
positive ones. The framework consists of three progressive steps, spanning the
training and inference stages, to absorb knowledge from negative data. We conduct
extensive experiments across arithmetic reasoning tasks to demonstrate the role
of negative data in distillation from LLMs.
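As an illustration of how negative chains might enter training, here is a minimal sketch (not the paper's exact three-step framework) that combines standard likelihood on correct chains with an unlikelihood penalty on incorrect ones; `model` is assumed to be a Hugging Face-style causal LM, and the weight `beta` is a hypothetical hyperparameter.

```python
# A minimal sketch of distilling with both positive and negative
# reasoning chains; padding/masking is omitted for brevity.
import torch
import torch.nn.functional as F

def distill_step(model, pos_batch, neg_batch, beta=0.5):
    """One training step mixing positive and negative rationales."""
    # Standard next-token likelihood on LLM-generated correct chains.
    pos_logits = model(pos_batch["input_ids"]).logits
    pos_loss = F.cross_entropy(
        pos_logits[:, :-1].reshape(-1, pos_logits.size(-1)),
        pos_batch["input_ids"][:, 1:].reshape(-1),
    )
    # Unlikelihood on incorrect chains: push probability mass away
    # from tokens that continue a wrong reasoning path.
    neg_logits = model(neg_batch["input_ids"]).logits
    neg_logp = F.log_softmax(neg_logits[:, :-1], dim=-1)
    tok_logp = neg_logp.gather(
        -1, neg_batch["input_ids"][:, 1:].unsqueeze(-1)
    ).squeeze(-1)
    # -log(1 - p) for each token of the negative chain.
    neg_loss = -torch.log1p(-tok_logp.exp().clamp(max=1 - 1e-6)).mean()
    return pos_loss + beta * neg_loss
```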
Related papers
- Preference Leakage: A Contamination Problem in LLM-as-a-judge [69.96778498636071]
Large Language Models (LLMs) as judges and LLM-based data synthesis have emerged as two fundamental LLM-driven data annotation methods.
In this work, we expose preference leakage, a contamination problem in LLM-as-a-judge caused by the relatedness between the synthetic data generators and LLM-based evaluators.
arXiv Detail & Related papers (2025-02-03T17:13:03Z)
- Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search [2.1637240640145343]
Large language models (LLMs) have demonstrated their remarkable capacity across a variety of tasks.
To improve LLMs' reasoning ability, process supervision has proven more effective than outcome supervision.
In this work, we study using Monte Carlo Tree Search (MCTS) to have LLMs generate their own process-supervision data for training.
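One common way to realize MCTS-style process supervision is Monte Carlo step scoring; the sketch below is a simplified stand-in for the paper's method, with `sample_completions` and `check_answer` as hypothetical helpers.

```python
# A simplified sketch of Monte Carlo step scoring: a step's value is
# the fraction of rollouts from that prefix that reach the correct
# final answer, giving a soft process-supervision label per step.
def score_steps(question, steps, sample_completions, check_answer, n=8):
    labels = []
    prefix = question
    for step in steps:
        prefix = prefix + "\n" + step
        rollouts = sample_completions(prefix, n)  # complete the chain n times
        value = sum(check_answer(r) for r in rollouts) / n
        labels.append((step, value))
    return labels
```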
arXiv Detail & Related papers (2025-01-02T12:09:17Z)
- SyNeg: LLM-Driven Synthetic Hard-Negatives for Dense Retrieval [45.971786380884126]
The performance of dense retrieval (DR) is significantly influenced by the quality of negative sampling.
Recent advancements in large language models (LLMs) offer an innovative solution by generating contextually rich and diverse negative samples.
In this work, we present a framework that harnesses LLMs to synthesize high-quality hard negative samples.
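A hedged sketch of the general recipe (not necessarily SyNeg's exact prompts): ask an LLM to rewrite a positive passage into an on-topic passage that no longer answers the query, then pair the result into contrastive training triples; `llm` is a hypothetical completion callable.

```python
# LLM-synthesized hard negatives for dense retrieval: the rewritten
# passage stays topically close to the query, making it a hard negative.
def make_hard_negative(llm, query, positive_passage):
    prompt = (
        "Rewrite the passage so it stays on-topic for the query but no "
        "longer answers it.\n"
        f"Query: {query}\nPassage: {positive_passage}\nRewritten passage:"
    )
    return llm(prompt)

def build_triples(llm, pairs):
    # (query, positive, synthetic hard negative) triples for a
    # standard contrastive (e.g., InfoNCE) retriever objective.
    return [(q, p, make_hard_negative(llm, q, p)) for q, p in pairs]
```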
arXiv Detail & Related papers (2024-12-23T03:49:00Z)
- What Makes In-context Learning Effective for Mathematical Reasoning: A Theoretical Analysis [81.15503859645149]
In this paper, we aim to theoretically analyze the impact of in-context demonstrations on large language models' reasoning performance.
We propose a straightforward, generalizable, and low-complexity demonstration selection method named LMS3.
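The abstract does not spell out LMS3's selection criterion; as a stand-in, the sketch below shows a generic low-complexity demonstration selector based on embedding cosine similarity, a common baseline for the same task, not LMS3 itself.

```python
# Generic nearest-neighbor demonstration selection by cosine similarity.
import numpy as np

def select_demos(query_vec, demo_vecs, demos, k=4):
    # Normalize, then rank candidate demonstrations by similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = demo_vecs / np.linalg.norm(demo_vecs, axis=1, keepdims=True)
    sims = d @ q
    top = np.argsort(-sims)[:k]
    return [demos[i] for i in top]
```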
arXiv Detail & Related papers (2024-12-11T11:38:11Z)
- Improving Mathematical Reasoning Capabilities of Small Language Models via Feedback-Driven Distillation [15.542737858152053]
Large Language Models (LLMs) demonstrate exceptional reasoning capabilities, often achieving state-of-the-art performance in various tasks.
A promising solution is knowledge distillation, where LLMs transfer reasoning capabilities to Small Language Models (SLMs), enabling wider deployment on low-resource devices.
We propose a Feedback-Driven Distillation (FDD) framework to enhance SLMs' mathematical reasoning capabilities.
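A minimal sketch of what one feedback-driven distillation round could look like, with `teacher_solve`, `student_solve`, and `is_correct` as hypothetical helpers: problems the student fails are routed back to the teacher, and the corrected rationales feed the next fine-tuning round.

```python
# One round of a feedback-driven distillation loop (illustrative only).
def fdd_round(problems, teacher_solve, student_solve, is_correct):
    train_set = []
    for prob in problems:
        attempt = student_solve(prob)
        if is_correct(prob, attempt):
            train_set.append((prob, attempt))   # keep the student's own chain
        else:
            # Show the teacher the failed attempt and collect a fix.
            fixed = teacher_solve(prob, feedback=attempt)
            train_set.append((prob, fixed))
    return train_set  # fine-tune the SLM on this set, then repeat
```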
arXiv Detail & Related papers (2024-11-22T03:12:39Z)
- Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning [53.6472920229013]
Large Language Models (LLMs) have demonstrated impressive capability in many natural language tasks.
However, LLMs are prone to producing errors, hallucinations, and inconsistent statements when performing multi-step reasoning.
We introduce Q*, a framework for guiding LLMs' decoding process with deliberative planning.
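In the spirit of Q*'s deliberative planning, the sketch below runs a best-first (A*-style) search over partial reasoning chains, ranking frontier states by accumulated cost plus a value estimate; all four helper callables are assumptions, not the paper's API.

```python
# Best-first search over partial reasoning chains, ranked by f = g + h,
# where h is supplied by a learned value model estimating cost-to-go.
import heapq, itertools

def q_star_decode(question, propose_steps, step_cost, value,
                  is_terminal, budget=100):
    tie = itertools.count()  # tie-breaker so heapq never compares states
    frontier = [(0.0, next(tie), question, 0.0)]  # (f, tie, state, g)
    while frontier and budget > 0:
        f, _, state, g = heapq.heappop(frontier)
        budget -= 1
        if is_terminal(state):
            return state
        for step in propose_steps(state):
            new_state = state + "\n" + step
            new_g = g + step_cost(state, step)
            heapq.heappush(
                frontier, (new_g + value(new_state), next(tie), new_state, new_g)
            )
    return None  # search budget exhausted
```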
arXiv Detail & Related papers (2024-06-20T13:08:09Z)
- C-ICL: Contrastive In-context Learning for Information Extraction [54.39470114243744]
c-ICL is a novel few-shot technique that leverages both correct and incorrect samples to construct in-context learning demonstrations.
Our experiments on various datasets indicate that c-ICL outperforms previous few-shot in-context learning methods.
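A minimal sketch of how such a contrastive prompt might be assembled (the template is illustrative, not c-ICL's exact format): each demonstration pairs a correct extraction with an incorrect one so the model sees both what to do and what to avoid.

```python
# Build an in-context prompt with paired correct/incorrect examples.
def build_cicl_prompt(demos, query):
    parts = []
    for text, good, bad in demos:
        parts.append(
            f"Text: {text}\nCorrect extraction: {good}\n"
            f"Incorrect extraction: {bad}\n"
        )
    parts.append(f"Text: {query}\nCorrect extraction:")
    return "\n".join(parts)
```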
arXiv Detail & Related papers (2024-02-17T11:28:08Z)
- Task Contamination: Language Models May Not Be Few-Shot Anymore [9.696290050028237]
Large language models (LLMs) offer impressive performance in various zero-shot and few-shot tasks.
However, their success in zero-shot and few-shot settings may be affected by task contamination.
This paper investigates how the zero-shot and few-shot performance of LLMs has changed over time.
arXiv Detail & Related papers (2023-12-26T21:17:46Z)
- Zero-Shot Question Answering over Financial Documents using Large Language Models [0.18749305679160366]
We introduce a large language model (LLM) based approach to answer complex questions requiring multi-hop numerical reasoning over financial reports.
We use novel zero-shot prompts that guide the LLM to encode the required reasoning into a Python program or a domain-specific language.
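A hedged sketch of the program-of-thought idea described here: the prompt asks the LLM to emit a small Python program whose `answer` variable holds the result; `llm` is a hypothetical callable, and executing untrusted model output would require sandboxing in practice.

```python
# Zero-shot "write a program" prompting for numerical reasoning.
def answer_financial_question(llm, question, report_text):
    prompt = (
        "Write a Python program that computes the answer and stores it "
        "in a variable named `answer`.\n"
        f"Report:\n{report_text}\nQuestion: {question}\nProgram:\n"
    )
    program = llm(prompt)
    scope = {}
    exec(program, scope)  # illustration only: sandbox in real use
    return scope.get("answer")
```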
arXiv Detail & Related papers (2023-11-19T16:23:34Z)
- ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks [91.55895047448249]
This paper presents ReEval, an LLM-based framework using prompt chaining to perturb the original evidence for generating new test cases.
We implement ReEval using ChatGPT and evaluate the resulting variants of two popular open-domain QA datasets.
Our generated data is human-readable and useful to trigger hallucination in large language models.
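A sketch of ReEval-style prompt chaining under stated assumptions (`llm` is a hypothetical callable): one prompt minimally edits the evidence so it supports a different answer, and a second derives the new gold answer, yielding an adversarial test case.

```python
# Two-step prompt chain: perturb the evidence, then derive the new answer.
def make_adversarial_case(llm, question, evidence):
    perturbed = llm(
        "Minimally edit the passage so it supports a different answer "
        f"to the question.\nQuestion: {question}\nPassage: {evidence}\n"
        "Edited passage:"
    )
    new_answer = llm(
        f"Passage: {perturbed}\nQuestion: {question}\nAnswer:"
    )
    return {"question": question, "evidence": perturbed, "answer": new_answer}
```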
arXiv Detail & Related papers (2023-10-19T06:37:32Z)
- Scaling Relationship on Learning Mathematical Reasoning with Large Language Models [75.29595679428105]
We investigate how the pre-training loss, supervised data amount, and augmented data amount influence the reasoning performances of a supervised LLM.
We find that rejection sampling from multiple models pushes LLaMA-7B to an accuracy of 49.3% on GSM8K, significantly outperforming the supervised fine-tuning (SFT) accuracy of 35.9%.
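A minimal sketch of the rejection-sampling recipe this finding refers to: sample many chains per problem from several models (`model.solve` is a hypothetical API), keep only chains that reach the gold answer, deduplicate, and fine-tune on the result.

```python
# Rejection sampling to build augmented fine-tuning data.
def rejection_sample(problems, models, k=8):
    dataset = []
    for prob, gold in problems:
        kept = set()
        for model in models:
            for _ in range(k):
                chain, answer = model.solve(prob)  # hypothetical API
                if answer == gold:
                    kept.add(chain)  # correct chains only, deduplicated
        dataset.extend((prob, chain) for chain in kept)
    return dataset  # fine-tune the student model on this set
```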
arXiv Detail & Related papers (2023-08-03T15:34:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.