LaFFi: Leveraging Hybrid Natural Language Feedback for Fine-tuning
Language Models
- URL: http://arxiv.org/abs/2401.00907v1
- Date: Sun, 31 Dec 2023 21:18:16 GMT
- Title: LaFFi: Leveraging Hybrid Natural Language Feedback for Fine-tuning
Language Models
- Authors: Qianxi Li, Yingyue Cao, Jikun Kang, Tianpei Yang, Xi Chen, Jun Jin and
Matthew E. Taylor
- Abstract summary: Fine-tuning Large Language Models (LLMs) adapts a trained model to specific downstream tasks.
Supervised Fine-Tuning (SFT) is a common approach, where an LLM is trained to produce desired answers.
This paper introduces an alternative to SFT called Natural Language Feedback for Finetuning LLMs (LaFFi).
- Score: 14.087415157225715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fine-tuning Large Language Models (LLMs) adapts a trained model to specific
downstream tasks, significantly improving task-specific performance. Supervised
Fine-Tuning (SFT) is a common approach, where an LLM is trained to produce
desired answers. However, LLMs trained with SFT sometimes make simple mistakes
and produce hallucinations on reasoning tasks such as question answering.
Without external feedback, it is difficult for SFT to learn a good mapping
between the question and the desired answer, especially with a small dataset.
This paper introduces an alternative to SFT called Natural Language Feedback
for Finetuning LLMs (LaFFi). LaFFi has LLMs directly predict the feedback they
will receive from an annotator. We find that requiring such reflection can
significantly improve accuracy on in-domain question-answering tasks,
providing a promising direction for the application of natural language
feedback in the realm of SFT LLMs. Additional ablation studies show that the
proportion of human-annotated data in the annotated dataset affects
fine-tuning performance.
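
Based only on the abstract, a minimal sketch of a LaFFi-style objective might look as follows: a causal LLM is fine-tuned to predict the natural-language feedback an annotator would give for a (question, answer) pair. The base model, prompt template, example data, and hyperparameters below are illustrative assumptions, not the paper's actual setup.

```python
# Sketch of a LaFFi-style feedback-prediction objective (assumptions noted inline).
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model; the paper's model may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical annotated examples: each pairs a QA attempt with annotator feedback.
examples = [
    {
        "question": "What is the capital of Australia?",
        "answer": "Sydney",
        "feedback": "Incorrect: the capital of Australia is Canberra.",
    },
]

def make_batch(batch):
    # Prompt template is an assumption; the feedback text is the training target.
    texts = [
        f"Question: {ex['question']}\nAnswer: {ex['answer']}\nFeedback: {ex['feedback']}"
        for ex in batch
    ]
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    # For brevity, the whole sequence is supervised; a fuller implementation
    # would set prompt and padding positions in the labels to -100 so that
    # only the feedback span contributes to the loss.
    enc["labels"] = enc["input_ids"].clone()
    return enc

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loader = DataLoader(examples, batch_size=1, collate_fn=make_batch)

model.train()
for batch in loader:
    loss = model(**batch).loss  # standard next-token loss over the feedback text
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The abstract does not specify how the predicted feedback is combined with the usual answer-prediction loss, so this sketch trains only the feedback-prediction head of the idea.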