Multi-Task Instruction Tuning of LLaMa for Specific Scenarios: A
Preliminary Study on Writing Assistance
- URL: http://arxiv.org/abs/2305.13225v2
- Date: Mon, 9 Oct 2023 07:55:26 GMT
- Title: Multi-Task Instruction Tuning of LLaMa for Specific Scenarios: A
Preliminary Study on Writing Assistance
- Authors: Yue Zhang and Leyang Cui and Deng Cai and Xinting Huang and Tao Fang
and Wei Bi
- Abstract summary: Small foundational models can display remarkable proficiency in tackling diverse tasks when fine-tuned using instruction-driven data.
In this work, we investigate a practical problem setting where the primary focus is on one or a few particular tasks rather than general-purpose instruction following.
Experimental results show that fine-tuning LLaMA on writing instruction data significantly improves its performance on writing tasks.
- Score: 60.40541387785977
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Proprietary Large Language Models (LLMs), such as ChatGPT, have garnered
significant attention due to their exceptional capabilities in handling a
diverse range of tasks. Recent studies demonstrate that open-sourced smaller
foundational models, such as 7B-size LLaMA, can also display remarkable
proficiency in tackling diverse tasks when fine-tuned using instruction-driven
data. In this work, we investigate a practical problem setting where the
primary focus is on one or a few particular tasks rather than general-purpose
instruction following, and explore whether LLMs can be beneficial and further
improved for such targeted scenarios. We choose the writing-assistant scenario
as the testbed, which includes seven writing tasks. We collect training data
for these tasks, reframe them in an instruction-following format, and
subsequently refine the LLM, specifically LLaMA, via instruction tuning.
Experimental results show that fine-tuning LLaMA on writing instruction data
significantly improves its performance on writing tasks. We also conduct more
experiments and analyses to offer insights for future work on effectively
fine-tuning LLaMA for specific scenarios. Finally, we initiate a discussion
regarding the necessity of employing LLMs for only one targeted task, taking
into account the efforts required for tuning and the resources consumed during
deployment.
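To make the data-reframing step concrete, below is a minimal sketch of wrapping a supervised (source, target) pair in an instruction-following format. The Alpaca-style template and the grammatical-error-correction example are illustrative assumptions, not the paper's exact template or task mix.

```python
# Minimal sketch of reframing a writing-task example into an
# instruction-following format (Alpaca-style template assumed here;
# the paper's exact template may differ).

def to_instruction_example(instruction: str, source: str, target: str) -> dict:
    """Wrap one supervised (source, target) pair as an instruction sample."""
    prompt = (
        "Below is an instruction that describes a task, paired with an input. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{source}\n\n"
        "### Response:\n"
    )
    return {"prompt": prompt, "completion": target}

# Example: a grammatical error correction sample, one plausible kind of
# writing task (assumed for illustration).
sample = to_instruction_example(
    instruction="Correct the grammatical errors in the following sentence.",
    source="She go to school every days.",
    target="She goes to school every day.",
)
print(sample["prompt"] + sample["completion"])
```

Pairs produced this way can then be fed to any standard causal-LM fine-tuning loop.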
Related papers
- Layer by Layer: Uncovering Where Multi-Task Learning Happens in Instruction-Tuned Large Language Models [22.676688441884465]
Fine-tuning pre-trained large language models (LLMs) on a diverse array of tasks has become a common approach for building general-purpose models.
This study investigates the task-specific information encoded in pre-trained LLMs and the effects of instruction tuning on their representations.
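One common way to examine where such task information lives is a layer-wise linear probe. The toy sketch below trains a classifier on each layer's hidden states; the model choice, pooling, and tasks are all illustrative assumptions, not the paper's setup.

```python
# Hypothetical layer-wise probe: train a linear classifier on hidden states
# from each layer to see at which depth task identity becomes decodable.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

texts = ["Translate to French: good morning", "Fix the grammar: he go home"]
labels = [0, 1]  # toy task ids: 0 = translation, 1 = grammar correction

with torch.no_grad():
    enc = tok(texts, return_tensors="pt", padding=True)
    hidden = model(**enc).hidden_states  # (n_layers + 1) tensors [batch, seq, dim]

for i, layer in enumerate(hidden):
    feats = layer.mean(dim=1).numpy()  # mean-pool over tokens
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(f"layer {i:2d}: train accuracy = {probe.score(feats, labels):.2f}")
```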
arXiv Detail & Related papers (2024-10-25T23:38:28Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which task-specific input-output pairs are synthesized from the student LLM, which is then fine-tuned on its own generated data.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
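A heavily simplified sketch of the self-synthesis idea follows; `generate` stands in for any LLM completion call, and the quality-filtering and fine-tuning stages of the actual pipeline are omitted.

```python
# Sketch of self-synthetic data generation in the spirit of SELF-GUIDE:
# the student model produces its own task-specific (input, output) pairs,
# which are then used to fine-tune the same student. `generate` is a
# placeholder for any LLM completion call; filtering is omitted here.
from typing import Callable, List, Tuple

def synthesize_pairs(
    generate: Callable[[str], str],
    task_description: str,
    seed_examples: List[Tuple[str, str]],
    n_new: int = 100,
) -> List[Tuple[str, str]]:
    demos = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in seed_examples)
    pairs = []
    for _ in range(n_new):
        # First prompt the model for a fresh task input, then for its output.
        new_input = generate(f"Task: {task_description}\n{demos}\nInput:").strip()
        new_output = generate(
            f"Task: {task_description}\nInput: {new_input}\nOutput:"
        ).strip()
        pairs.append((new_input, new_output))
    return pairs
```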
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Distilling Instruction-following Abilities of Large Language Models with Task-aware Curriculum Planning [12.651588927599441]
Instruction tuning aims to align large language models with open-domain instructions and human-preferred responses.
We introduce Task-Aware Curriculum Planning for Instruction Refinement (TAPIR) to select instructions that are difficult for a student LLM to follow.
To balance the student's capabilities, the task distribution in the training set is adjusted, and responses are automatically refined according to their corresponding tasks.
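As a rough illustration of difficulty-aware selection, the sketch below uses a student model's per-example loss as a stand-in for TAPIR's difficulty criterion; the paper's actual measure and task re-balancing are more involved.

```python
# Illustrative curriculum-style selection: keep the instructions a student
# model finds hardest, scoring difficulty by causal-LM loss (a simplifying
# assumption, not TAPIR's actual criterion).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
student = AutoModelForCausalLM.from_pretrained("gpt2")

def example_loss(prompt: str, response: str) -> float:
    ids = tok(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        out = student(ids, labels=ids)  # loss over the whole sequence
    return out.loss.item()

data = [
    ("Rewrite formally: hey there", "Hello,"),
    ("Fix the grammar: he go home", "He goes home."),
]
scored = sorted(data, key=lambda ex: example_loss(*ex), reverse=True)
hard_subset = scored[: len(scored) // 2]  # hardest half for the next round
```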
arXiv Detail & Related papers (2024-05-22T08:38:26Z)
- Benchmarking LLMs on the Semantic Overlap Summarization Task [9.656095701778975]
This paper comprehensively evaluates Large Language Models (LLMs) on the Semantic Overlap Summarization (SOS) task.
We report well-established metrics such as ROUGE, BERTScore, and SEM-F1 on two different datasets of alternative narratives.
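For reference, ROUGE and BERTScore can be computed with the widely used `rouge-score` and `bert-score` packages; SEM-F1 is omitted below because it lacks a single standard implementation. The example texts are invented.

```python
# Minimal sketch of scoring a candidate summary against a reference with
# ROUGE and BERTScore. Requires: pip install rouge-score bert-score
from rouge_score import rouge_scorer
from bert_score import score as bertscore

reference = "Both stories describe a flood that destroyed the old bridge."
candidate = "The two narratives overlap on the flood wrecking the bridge."

rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(rouge.score(reference, candidate))  # precision/recall/F per variant

P, R, F1 = bertscore([candidate], [reference], lang="en")
print(f"BERTScore F1: {F1.item():.3f}")
```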
arXiv Detail & Related papers (2024-02-26T20:33:50Z)
- Small LLMs Are Weak Tool Learners: A Multi-LLM Agent [73.54562551341454]
Large Language Model (LLM) agents significantly extend the capabilities of standalone LLMs.
We propose a novel approach that decomposes the required capabilities into three modules: a planner, a caller, and a summarizer.
This modular framework facilitates individual updates and the potential use of smaller LLMs for building each capability.
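A toy version of the planner/caller/summarizer split is sketched below, with each role behind the same text-in/text-out interface so it could be served by a different (possibly small) LLM; the prompts and routing protocol are illustrative assumptions.

```python
# Toy decomposition in the spirit of the planner/caller/summarizer split.
# Each role is any completion function (prompt -> text), so roles can be
# backed by different models. Prompts and the tool protocol are invented.
from typing import Callable, Dict

LLM = Callable[[str], str]

def run_agent(task: str, planner: LLM, caller: LLM, summarizer: LLM,
              tools: Dict[str, Callable[[str], str]]) -> str:
    plan = planner(f"Break this task into one tool call: {task}")
    tool_request = caller(f"Plan: {plan}\nEmit as 'tool_name: argument'.")
    name, _, arg = tool_request.partition(":")
    result = tools.get(name.strip(), lambda a: f"unknown tool {name}")(arg.strip())
    return summarizer(f"Task: {task}\nTool result: {result}\nAnswer the user.")
```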
arXiv Detail & Related papers (2024-01-14T16:17:07Z)
- When does In-context Learning Fall Short and Why? A Study on Specification-Heavy Tasks [54.71034943526973]
In-context learning (ICL) has become the default method for using large language models (LLMs).
We find that ICL falls short of handling specification-heavy tasks, which are tasks with complicated and extensive task specifications.
We identify three primary reasons: inability to specifically understand context, misalignment in task schema comprehension with humans, and inadequate long-text understanding ability.
arXiv Detail & Related papers (2023-11-15T14:26:30Z)
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models [52.734140807634624]
Aligned large language models (LLMs) demonstrate exceptional capabilities in task-solving, following instructions, and ensuring safety.
Existing continual learning benchmarks lack sufficient challenge for leading aligned LLMs.
We introduce TRACE, a novel benchmark designed to evaluate continual learning in LLMs.
arXiv Detail & Related papers (2023-10-10T16:38:49Z)
- Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond [48.70557995528463]
This guide aims to provide researchers and practitioners with valuable insights and best practices for working with Large Language Models.
We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios.
arXiv Detail & Related papers (2023-04-26T17:52:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.