Tuna: Instruction Tuning using Feedback from Large Language Models
- URL: http://arxiv.org/abs/2310.13385v1
- Date: Fri, 20 Oct 2023 09:55:06 GMT
- Title: Tuna: Instruction Tuning using Feedback from Large Language Models
- Authors: Haoran Li, Yiran Liu, Xingxing Zhang, Wei Lu, Furu Wei
- Abstract summary: We propose finetuning an instruction-tuned large language model using our novel probabilistic ranking and contextual ranking approaches.
Probabilistic ranking enables the instruction-tuned model to inherit the relative rankings of high-quality and low-quality responses from the teacher LLM.
On the other hand, learning with contextual ranking allows the model to refine its own response distribution using the contextual understanding ability of stronger LLMs.
- Score: 74.04950416204551
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instruction tuning of open-source large language models (LLMs) like LLaMA,
using direct outputs from more powerful LLMs such as Instruct-GPT and GPT-4,
has proven to be a cost-effective way to align model behaviors with human
preferences. However, the instruction-tuned model has only seen one response
per instruction, lacking the knowledge of potentially better responses. In this
paper, we propose finetuning an instruction-tuned LLM using our novel
\textit{probabilistic ranking} and \textit{contextual ranking} approaches to
increase the likelihood of generating better responses. Probabilistic ranking
enables the instruction-tuned model to inherit the relative rankings of
high-quality and low-quality responses from the teacher LLM. On the other hand,
learning with contextual ranking allows the model to refine its own response
distribution using the contextual understanding ability of stronger LLMs.
Furthermore, we apply probabilistic ranking and contextual ranking sequentially
to the instruction-tuned LLM. The resulting model, which we call \textbf{Tuna},
consistently improves the performance on Super Natural Instructions (119 test
tasks), LMentry (25 test tasks), Vicuna QA, and can even obtain better results
than several strong reinforcement learning baselines. Our code and data are
available at \url{https://github.com/microsoft/LMOps}.
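As a rough illustration of the probabilistic ranking idea described in the abstract, the sketch below fine-tunes a student model with a pairwise margin loss so that its length-normalized response log-likelihoods respect a ranking supplied by a teacher LLM. This is a minimal sketch under stated assumptions, not the paper's exact objective: the model name, the margin value, and the length normalization are illustrative choices, and contextual ranking would instead have a stronger LLM (e.g., GPT-4) rank the student's own sampled responses before applying the same kind of loss.

```python
# Minimal sketch (not the paper's exact objective): push a student LM's
# length-normalized log-likelihoods of candidate responses to agree with a
# ranking supplied by a teacher LLM. Model name and margin are illustrative.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # assumed student model
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

def sequence_logprob(instruction: str, response: str) -> torch.Tensor:
    """Length-normalized log-probability of `response` given `instruction`."""
    prompt_ids = tokenizer(instruction, return_tensors="pt").input_ids
    full_ids = tokenizer(instruction + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits[:, :-1, :]            # next-token predictions
    targets = full_ids[:, 1:]
    token_logps = torch.gather(
        F.log_softmax(logits, dim=-1), 2, targets.unsqueeze(-1)
    ).squeeze(-1)
    resp_len = full_ids.shape[1] - prompt_ids.shape[1]    # approximate response span
    return token_logps[:, -resp_len:].mean()

def ranking_loss(instruction: str, ranked_responses: list[str],
                 margin: float = 0.1) -> torch.Tensor:
    """Pairwise margin loss: each higher-ranked response (index 0 = best,
    per the teacher's ranking) should score above every lower-ranked one."""
    scores = [sequence_logprob(instruction, r) for r in ranked_responses]
    loss = torch.tensor(0.0)
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            loss = loss + torch.clamp(margin - (scores[i] - scores[j]), min=0.0)
    return loss
```

In practice such a term would be combined with the usual cross-entropy loss on the top-ranked response and computed in batches; the sketch omits both for brevity.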
Related papers
- Show, Don't Tell: Aligning Language Models with Demonstrated Feedback [54.10302745921713]
Demonstration ITerated Task Optimization (DITTO) directly aligns language model outputs to a user's demonstrated behaviors.
We evaluate DITTO's ability to learn fine-grained style and task alignment across domains such as news articles, emails, and blog posts.
arXiv Detail & Related papers (2024-06-02T23:13:56Z)
- Can Small Language Models be Good Reasoners for Sequential Recommendation? [34.098264212413305]
We propose SLIM, a Step-by-step knowLedge dIstillation fraMework for recommendation.
We introduce CoT prompting based on user behavior sequences for the larger teacher model.
The rationales generated by the teacher model are then utilized as labels to distill the downstream smaller student model.
arXiv Detail & Related papers (2024-03-07T06:49:37Z)
- LiPO: Listwise Preference Optimization through Learning-to-Rank [62.02782819559389]
A policy can learn more effectively from a ranked list of plausible responses given the prompt.
We show that LiPO-$\lambda$ can outperform DPO variants and SLiC by a clear margin on several preference alignment tasks (a generic listwise-loss sketch appears after this list).
arXiv Detail & Related papers (2024-02-02T20:08:10Z)
- LLM-augmented Preference Learning from Natural Language [19.700169351688768]
Large Language Models (LLMs) are equipped to deal with larger context lengths.
LLMs can consistently outperform the SotA when the target text is long.
Few-shot learning yields better performance than zero-shot learning.
arXiv Detail & Related papers (2023-10-12T17:17:27Z)
- Instruction Position Matters in Sequence Generation with Large Language Models [67.87516654892343]
Large language models (LLMs) are capable of performing conditional sequence generation tasks, such as translation or summarization.
We propose enhancing the instruction-following capability of LLMs by placing task instructions after the input sentences.
arXiv Detail & Related papers (2023-08-23T12:36:57Z)
- ReLLa: Retrieval-enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation [43.270424225285105]
We focus on adapting and empowering a pure large language model for zero-shot and few-shot recommendation tasks.
We propose Retrieval-enhanced Large Language models (ReLLa) for recommendation tasks in both zero-shot and few-shot settings.
arXiv Detail & Related papers (2023-08-22T02:25:04Z)
- Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation [4.310519298899164]
In this work, we evaluate 10 open-source instructed LLMs on four representative code comprehension and generation tasks.
For the zero-shot setting, instructed LLMs are very competitive on code comprehension and generation tasks.
For the few-shot setting, we find that adding demonstration examples substantially helps instructed LLMs perform better.
arXiv Detail & Related papers (2023-08-02T15:54:22Z)
- TIM: Teaching Large Language Models to Translate with Comparison [78.66926087162672]
We propose a novel framework using examples in comparison to teach LLMs to learn translation.
Our approach involves presenting the model with examples of correct and incorrect translations and using a preference loss to guide the model's learning.
Our findings offer a new perspective on fine-tuning LLMs for translation tasks and provide a promising solution for generating high-quality translations.
arXiv Detail & Related papers (2023-07-10T08:15:40Z)
- Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following [44.701091969256055]
We present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference.
We observe that both base LLMs (i.e. not fine-tuned to follow instructions) and instruction-tuned models benefit from TAPP, resulting in 34.58% and 12.26% improvements on average, respectively.
arXiv Detail & Related papers (2023-02-28T16:06:35Z)
- Guiding Large Language Models via Directional Stimulus Prompting [114.84930073977672]
We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.
Instead of directly adjusting LLMs, our method employs a small tunable policy model to generate an auxiliary directional stimulus prompt for each input instance.
arXiv Detail & Related papers (2023-02-22T17:44:15Z)
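As referenced in the LiPO entry above, the following is a generic listwise (ListNet-style) preference loss, sketched only to illustrate the learning-to-rank framing behind listwise preference optimization; it is not LiPO-$\lambda$'s exact objective, and the scores below are illustrative placeholders.

```python
# Generic listwise (ListNet-style) preference loss; an illustration of the
# learning-to-rank framing, not LiPO-lambda's exact objective.
import torch
import torch.nn.functional as F

def listwise_preference_loss(policy_scores: torch.Tensor,
                             label_scores: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the label-induced and policy-induced top-1
    distributions over a list of candidate responses for one prompt."""
    target = F.softmax(label_scores, dim=-1)       # soft "which response is best" target
    log_pred = F.log_softmax(policy_scores, dim=-1)
    return -(target * log_pred).sum(dim=-1).mean()

# Toy example: four candidate responses for a single prompt (placeholder scores).
policy_scores = torch.tensor([[-1.2, -0.7, -2.3, -1.9]], requires_grad=True)
label_scores = torch.tensor([[0.9, 0.4, 0.1, 0.2]])
loss = listwise_preference_loss(policy_scores, label_scores)
loss.backward()
```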