Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher LLMs
- URL: http://arxiv.org/abs/2309.09582v2
- Date: Fri, 2 Feb 2024 22:53:30 GMT
- Title: Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher LLMs
- Authors: Jonas Golde, Patrick Haller, Felix Hamborg, Julian Risch, Alan Akbik
- Abstract summary: We show how to generate labeled data that can be used to train a downstream NLP model.
We introduce Fabricator, an open-source Python toolkit for dataset generation.
- Score: 6.847114270274019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most NLP tasks are modeled as supervised learning and thus require labeled
training data to train effective models. However, manually producing such data
at sufficient quality and quantity is known to be costly and time-intensive.
Current research addresses this bottleneck by exploring a novel paradigm called
zero-shot learning via dataset generation. Here, a powerful LLM is prompted
with a task description to generate labeled data that can be used to train a
downstream NLP model. For instance, an LLM might be prompted to "generate 500
movie reviews with positive overall sentiment, and another 500 with negative
sentiment." The generated data could then be used to train a binary sentiment
classifier, effectively leveraging an LLM as a teacher to a smaller student
model. With this demo, we introduce Fabricator, an open-source Python toolkit
for dataset generation. Fabricator implements common dataset generation
workflows, supports a wide range of downstream NLP tasks (such as text
classification, question answering, and entity recognition), and is integrated
with well-known libraries to facilitate quick experimentation. With Fabricator,
we aim to support researchers in conducting reproducible dataset generation
experiments using LLMs and help practitioners apply this approach to train
models for downstream tasks.
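As a minimal illustration of this teacher-student workflow (a sketch only, not the Fabricator API; the teacher model name, prompt wording, and output parsing are assumptions), the following snippet prompts a chat LLM through the OpenAI Python client to generate labeled movie reviews and then trains a small student classifier on the result:

```python
# Sketch of zero-shot dataset generation with a teacher LLM, followed by training
# a small student classifier. Illustrative only; not the Fabricator API.
from openai import OpenAI  # assumes the OpenAI Python client (>= 1.0) and OPENAI_API_KEY
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

client = OpenAI()

def generate_reviews(sentiment: str, n: int) -> list[str]:
    """Prompt the teacher LLM for n short movie reviews with the given sentiment."""
    prompt = (f"Generate {n} short movie reviews with {sentiment} overall sentiment. "
              "Return one review per line, without numbering.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder teacher model
        messages=[{"role": "user", "content": prompt}],
    )
    return [line.strip() for line in response.choices[0].message.content.splitlines()
            if line.strip()]

# Teacher step: build a labeled dataset from prompts alone (no human annotation).
texts, labels = [], []
for sentiment in ("positive", "negative"):
    reviews = generate_reviews(sentiment, n=20)
    texts += reviews
    labels += [sentiment] * len(reviews)

# Student step: train a small binary sentiment classifier on the generated data.
vectorizer = TfidfVectorizer()
student = LogisticRegression(max_iter=1000)
student.fit(vectorizer.fit_transform(texts), labels)
print(student.predict(vectorizer.transform(["A wonderful, heartfelt film."])))
```

In practice, a toolkit such as Fabricator additionally handles prompt templates, dataset formats, and integration with downstream training libraries, which this sketch omits.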
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Zero-shot LLM-guided Counterfactual Generation for Text [15.254775341371364]
We propose a structured way to utilize large language models (LLMs) as general purpose counterfactual example generators.
We demonstrate the efficacy of LLMs as zero-shot counterfactual generators in evaluating and explaining black-box NLP models.
arXiv Detail & Related papers (2024-05-08T03:57:45Z)
- DataDreamer: A Tool for Synthetic Data Generation and Reproducible LLM Workflows [72.40917624485822]
We introduce DataDreamer, an open-source Python library that allows researchers to implement powerful large language model workflows.
DataDreamer also helps researchers adhere to best practices that we propose to encourage open science.
arXiv Detail & Related papers (2024-02-16T00:10:26Z)
- LLMaAA: Making Large Language Models as Active Annotators [32.57011151031332]
We propose LLMaAA, which takes large language models as annotators and puts them into an active learning loop to determine what to annotate efficiently.
We conduct experiments and analysis on two classic NLP tasks, named entity recognition and relation extraction.
With LLMaAA, task-specific models trained from LLM-generated labels can outperform the teacher within only hundreds of annotated examples.
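To make the active-learning idea concrete, here is a generic pool-based loop with uncertainty sampling in which an LLM stands in as the annotator. This is an illustration under assumptions, not the LLMaAA algorithm or its code; llm_annotate is a hypothetical stub that a real teacher-LLM call would replace:

```python
# Generic pool-based active learning with an LLM as the annotator (illustrative only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def llm_annotate(texts):
    # Hypothetical stand-in for a teacher-LLM labeling call (e.g. a prompted chat model).
    return ["positive" if "good" in t.lower() else "negative" for t in texts]

pool = ["good movie", "bad plot", "good acting, weak script", "terrible pacing",
        "good soundtrack", "boring and bad", "surprisingly good", "not good at all"]
labeled_texts = list(pool[:2])        # small seed set, labeled by the teacher LLM
labels = llm_annotate(labeled_texts)
unlabeled = pool[2:]

for _ in range(3):                    # a few active-learning rounds
    vec = TfidfVectorizer()
    student = LogisticRegression().fit(vec.fit_transform(labeled_texts), labels)
    if not unlabeled:
        break
    probs = student.predict_proba(vec.transform(unlabeled))
    # Uncertainty sampling: query the example the student is least sure about.
    idx = int(np.argmin(np.abs(probs[:, 0] - 0.5)))
    query = unlabeled.pop(idx)
    labeled_texts.append(query)
    labels += llm_annotate([query])
```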
arXiv Detail & Related papers (2023-10-30T14:54:15Z)
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM [62.30753425449056]
We propose a novel closed-loop system that bridges data generation, model training, and evaluation.
Within each loop, the MLLM-DataEngine first analyzes the weaknesses of the model based on the evaluation results.
For targeting, we propose an Adaptive Bad-case Sampling module, which adjusts the ratio of different types of data.
For quality, we resort to GPT-4 to generate high-quality data with each given data type.
arXiv Detail & Related papers (2023-08-25T01:41:04Z)
- From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning [52.257422715393574]
We introduce a self-guided methodology for Large Language Models (LLMs) to autonomously discern and select cherry samples from open-source datasets.
Our key innovation, the Instruction-Following Difficulty (IFD) metric, identifies discrepancies between a model's expected responses and its intrinsic generation capability; a minimal sketch of an IFD-style score follows below.
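As a rough sketch of how such a difficulty score can be computed (an assumption-laden illustration, not the paper's released code): the IFD score is, roughly, the ratio between the model's loss on the answer given the instruction and its loss on the answer alone, which can be estimated with Hugging Face transformers as follows:

```python
# Sketch of an IFD-style score: loss(answer | instruction) / loss(answer).
# Illustrative only; model choice and tokenization details are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; in the paper, the model being tuned scores its own data
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def answer_loss(prompt: str, answer: str) -> float:
    """Average cross-entropy of the answer tokens, given an (optionally empty) prompt."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids if prompt else None
    answer_ids = tok(answer, return_tensors="pt").input_ids
    if prompt_ids is not None:
        input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
        labels = input_ids.clone()
        labels[:, : prompt_ids.shape[1]] = -100  # ignore prompt tokens in the loss
    else:
        input_ids = answer_ids
        labels = input_ids.clone()
    with torch.no_grad():
        out = model(input_ids, labels=labels)
    return out.loss.item()

def ifd_score(instruction: str, answer: str) -> float:
    # Higher IFD roughly means the instruction helps the model less, i.e. a "harder" sample.
    return answer_loss(instruction, answer) / answer_loss("", answer)

print(ifd_score("Summarize: The movie was long but rewarding.", "A long yet rewarding film."))
```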
arXiv Detail & Related papers (2023-08-23T09:45:29Z)
- Language models are weak learners [71.33837923104808]
We show that prompt-based large language models can operate effectively as weak learners.
We incorporate these models into a boosting approach, which can leverage the knowledge within the model to outperform traditional tree-based boosting.
Results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.
arXiv Detail & Related papers (2023-06-25T02:39:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.