Can Large Language Models Design Accurate Label Functions?
- URL: http://arxiv.org/abs/2311.00739v1
- Date: Wed, 1 Nov 2023 15:14:46 GMT
- Title: Can Large Language Models Design Accurate Label Functions?
- Authors: Naiqing Guan, Kaiwen Chen, Nick Koudas
- Abstract summary: Programmatic weak supervision methodologies facilitate the expedited labeling of extensive datasets through the use of label functions (LFs).
Recent advances in pre-trained language models (PLMs) have exhibited substantial potential across diverse tasks.
This research introduces DataSculpt, an interactive framework that harnesses PLMs for the automated generation of LFs.
- Score: 14.32722091664306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Programmatic weak supervision methodologies facilitate the expedited labeling
of extensive datasets through the use of label functions (LFs) that encapsulate
heuristic data sources. Nonetheless, the creation of precise LFs necessitates
domain expertise and substantial endeavors. Recent advances in pre-trained
language models (PLMs) have exhibited substantial potential across diverse
tasks. However, the capacity of PLMs to autonomously formulate accurate LFs
remains an underexplored domain. In this research, we address this gap by
introducing DataSculpt, an interactive framework that harnesses PLMs for the
automated generation of LFs. Within DataSculpt, we incorporate an array of
prompting techniques, instance selection strategies, and LF filtration methods
to explore the expansive design landscape. Ultimately, we conduct a thorough
assessment of DataSculpt's performance on 12 real-world datasets, encompassing
a range of tasks. This evaluation unveils both the strengths and limitations of
contemporary PLMs in LF design.
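To make the setting concrete, the sketch below illustrates the programmatic-weak-supervision workflow the paper builds on: label functions are small heuristics that vote for a label or abstain, and candidate LFs (whether hand-written or proposed by a PLM) are vetted against a small labeled development set before being kept. The function names, the canned PLM output, and the coverage/accuracy thresholds are illustrative assumptions, not DataSculpt's actual API, prompts, or filtration criteria, which are detailed in the paper.

```python
# Minimal sketch of LF generation and filtration, assuming a Snorkel-style
# label-function convention (vote for a class or ABSTAIN). Names and
# thresholds are placeholders, not DataSculpt's implementation.

ABSTAIN, NEG, POS = -1, 0, 1

def lf_contains_refund(text: str) -> int:
    """Example hand-written LF for a billing-complaint detection task."""
    return POS if "refund" in text.lower() else ABSTAIN

def propose_lf_with_plm(task_description: str, examples: list[str]) -> str:
    """Placeholder for prompting a PLM to write an LF as Python source.
    DataSculpt drives this step with prompting and instance-selection
    strategies; here we return a canned heuristic for illustration."""
    return (
        "def lf_generated(text):\n"
        "    return POS if 'charged twice' in text.lower() else ABSTAIN\n"
    )

def filter_lf(lf, dev_texts, dev_labels, min_coverage=0.05, min_accuracy=0.7):
    """Keep an LF only if it fires often enough and is accurate when it fires."""
    votes = [lf(t) for t in dev_texts]
    fired = [(v, y) for v, y in zip(votes, dev_labels) if v != ABSTAIN]
    coverage = len(fired) / max(len(dev_texts), 1)
    accuracy = (sum(v == y for v, y in fired) / len(fired)) if fired else 0.0
    return coverage >= min_coverage and accuracy >= min_accuracy

# Tiny development set used to vet candidate LFs.
dev_texts = ["I want a refund now", "Great service", "I was charged twice", "Thanks"]
dev_labels = [POS, NEG, POS, NEG]

# Compile the PLM-proposed LF source and vet it alongside the hand-written one.
# (Real generated code would need validation/sandboxing before exec.)
namespace = {"POS": POS, "ABSTAIN": ABSTAIN}
exec(propose_lf_with_plm("flag billing complaints", dev_texts), namespace)
candidates = {"lf_contains_refund": lf_contains_refund,
              "lf_generated": namespace["lf_generated"]}
kept = {name: lf for name, lf in candidates.items()
        if filter_lf(lf, dev_texts, dev_labels)}
print("Kept LFs:", sorted(kept))
```

In the paper's framework, the retained LFs would then be combined by a label model to produce probabilistic training labels; the sketch stops at the generation-and-filtration step that DataSculpt automates.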
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- PISTOL: Dataset Compilation Pipeline for Structural Unlearning of LLMs [31.16117964915814]
Machine unlearning, which seeks to erase specific data stored in pre-trained or fine-tuned models, has emerged as a crucial protective measure for LLMs.
To facilitate the development of structural unlearning methods, we propose PISTOL, a pipeline for compiling multi-scenario datasets.
We conduct benchmarks with four distinct unlearning methods on both Llama2-7B and Mistral-7B models.
arXiv Detail & Related papers (2024-06-24T17:22:36Z)
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
- NIFTY Financial News Headlines Dataset [14.622656548420073]
The NIFTY Financial News Headlines dataset is designed to facilitate and advance research in financial market forecasting using large language models (LLMs).
This dataset comprises two distinct versions tailored for different modeling approaches: (i) NIFTY-LM, which targets supervised fine-tuning (SFT) of LLMs with an auto-regressive, causal language-modeling objective, and (ii) NIFTY-RL, formatted specifically for alignment methods (like reinforcement learning from human feedback) to align LLMs via rejection sampling and reward modeling.
arXiv Detail & Related papers (2024-05-16T01:09:33Z)
- Large Language Models as Financial Data Annotators: A Study on Effectiveness and Efficiency [13.561104321425045]
Large Language Models (LLMs) have demonstrated remarkable performance in data annotation tasks on general domain datasets.
We investigate the potential of LLMs as efficient data annotators for extracting relations in financial documents.
We demonstrate that current state-of-the-art LLMs can be sufficient alternatives to non-expert crowdworkers.
arXiv Detail & Related papers (2024-03-26T23:32:52Z)
- FOFO: A Benchmark to Evaluate LLMs' Format-Following Capability [70.84333325049123]
This paper presents FoFo, a pioneering benchmark for evaluating large language models' (LLMs) ability to follow complex, domain-specific formats.
arXiv Detail & Related papers (2024-02-28T19:23:27Z)
- Large Language Models for Data Annotation: A Survey [49.8318827245266]
The emergence of advanced Large Language Models (LLMs) presents an unprecedented opportunity to automate the complicated process of data annotation.
This survey includes an in-depth taxonomy of data types that LLMs can annotate, a review of learning strategies for models utilizing LLM-generated annotations, and a detailed discussion of the primary challenges and limitations associated with using LLMs for data annotation.
arXiv Detail & Related papers (2024-02-21T00:44:04Z)
- Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models [52.98743860365194]
We propose a new fine-tuning method called Self-Play fIne-tuNing (SPIN).
At the heart of SPIN lies a self-play mechanism, where the LLM refines its capability by playing against instances of itself.
This sheds light on the promise of self-play, enabling the achievement of human-level performance in LLMs without the need for expert opponents.
arXiv Detail & Related papers (2024-01-02T18:53:13Z)
- Large Language Models as Data Preprocessors [9.99065004972981]
Large Language Models (LLMs) have marked a significant advancement in artificial intelligence.
This study explores their potential in data preprocessing, a critical stage in data mining and analytics applications.
We propose an LLM-based framework for data preprocessing, which integrates cutting-edge prompt engineering techniques.
arXiv Detail & Related papers (2023-08-30T23:28:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.