Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for
Language Models
- URL: http://arxiv.org/abs/2402.13064v1
- Date: Tue, 20 Feb 2024 15:00:35 GMT
- Title: Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for
Language Models
- Authors: Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang,
Haoyang Huang, Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang,
Yuxian Gu, Xin Cheng, Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui,
Benyou Wang, Wai Lam, Furu Wei
- Abstract summary: We introduce Generalized Instruction Tuning (called GLAN), a general and scalable method for instruction tuning of Large Language Models (LLMs).
GLAN exclusively utilizes a pre-curated taxonomy of human knowledge and capabilities as input and generates large-scale synthetic instruction data across all disciplines.
With the fine-grained key concepts detailed in every class session of the syllabus, we are able to generate diverse instructions with broad coverage across the entire spectrum of human knowledge and skills.
- Score: 153.14575887549088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Generalized Instruction Tuning (called GLAN), a general and
scalable method for instruction tuning of Large Language Models (LLMs). Unlike
prior work that relies on seed examples or existing datasets to construct
instruction tuning data, GLAN exclusively utilizes a pre-curated taxonomy of
human knowledge and capabilities as input and generates large-scale synthetic
instruction data across all disciplines. Specifically, inspired by the
systematic structure of the human education system, we build the taxonomy by
decomposing human knowledge and capabilities into various fields, sub-fields
and, ultimately, distinct disciplines semi-automatically, facilitated by LLMs.
Subsequently, we generate a comprehensive list of subjects for every discipline
and proceed to design a syllabus tailored to each subject, again utilizing
LLMs. With the fine-grained key concepts detailed in every class session of the
syllabus, we are able to generate diverse instructions with broad coverage
across the entire spectrum of human knowledge and skills. Extensive experiments
on large language models (e.g., Mistral) demonstrate that GLAN excels across
multiple dimensions, from mathematical reasoning, coding, academic exams, and
logical reasoning to general instruction following, without using task-specific
training data for these tasks. In addition, GLAN allows for easy customization:
new fields or skills can be added by simply incorporating a new node into our
taxonomy.
Related papers
- Layer by Layer: Uncovering Where Multi-Task Learning Happens in Instruction-Tuned Large Language Models [22.676688441884465]
Fine-tuning pre-trained large language models (LLMs) on a diverse array of tasks has become a common approach for building models.
This study investigates the task-specific information encoded in pre-trained LLMs and the effects of instruction tuning on their representations.
arXiv Detail & Related papers (2024-10-25T23:38:28Z)
- Empowering Persian LLMs for Instruction Following: A Novel Dataset and Training Approach [0.0]
We introduce FarsInstruct, a dataset designed to enhance the instruction following ability of large language models.
FarsInstruct comprises 197 templates across 21 distinct datasets, and we intend to update it consistently, thus augmenting its applicability.
arXiv Detail & Related papers (2024-07-15T19:17:31Z)
- Controllable Navigation Instruction Generation with Chain of Thought Prompting [74.34604350917273]
We propose C-Instructor, which utilizes chain-of-thought-style prompting for style-controllable and content-controllable instruction generation.
C-Instructor renders generated instructions more accessible to follow and offers greater controllability over the manipulation of landmark objects.
arXiv Detail & Related papers (2024-07-10T07:37:20Z)
- KnowCoder: Coding Structured Knowledge into LLMs for Universal Information Extraction [59.039355258637315]
We propose KnowCoder, a Large Language Model (LLM) to conduct Universal Information Extraction (UIE) via code generation.
KnowCoder introduces a code-style schema representation method to uniformly transform different schemas into Python classes.
KnowCoder contains a two-phase learning framework that enhances its schema understanding ability via code pretraining and its schema following ability via instruction tuning.
arXiv Detail & Related papers (2024-03-12T14:56:34Z)
- Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
arXiv Detail & Related papers (2023-11-16T07:09:38Z)
- Specialist or Generalist? Instruction Tuning for Specific NLP Tasks [58.422495509760154]
We investigate whether incorporating broad-coverage generalist instruction tuning can contribute to building a specialist model.
Our experiments assess four target tasks with distinct coverage levels.
The benefit of generalist instruction tuning is particularly pronounced when the amount of task-specific training data is limited.
arXiv Detail & Related papers (2023-10-23T19:46:48Z)
- From Supervised to Generative: A Novel Paradigm for Tabular Deep Learning with Large Language Models [18.219485459836285]
Generative Tabular Learning (GTL) is a novel framework that integrates the advanced functionalities of large language models (LLMs) into tabular deep learning.
Our empirical study spans 384 public datasets, rigorously analyzing GTL's scaling behaviors.
The GTL-LLaMA-2 model demonstrates superior zero-shot and in-context learning capabilities across numerous classification and regression tasks.
arXiv Detail & Related papers (2023-10-11T09:37:38Z)
- Towards Building the Federated GPT: Federated Instruction Tuning [66.7900343035733]
This paper introduces Federated Instruction Tuning (FedIT) as the learning framework for the instruction tuning of large language models (LLMs).
We demonstrate that by exploiting the heterogeneous and diverse sets of instructions on the clients' end with FedIT, we improve the performance of LLMs compared to centralized training with only limited local instructions.
arXiv Detail & Related papers (2023-05-09T17:42:34Z)
- Unified Text Structuralization with Instruction-tuned Language Models [28.869098023025753]
We propose a simple and efficient approach to instruct large language models (LLMs) to extract a variety of structures from texts.
Experiments show that this approach enables language models to perform comparably with other state-of-the-art methods on datasets spanning a variety of languages and knowledge types.
arXiv Detail & Related papers (2023-03-27T07:39:05Z)