Explaining Patterns in Data with Language Models via Interpretable
Autoprompting
- URL: http://arxiv.org/abs/2210.01848v1
- Date: Tue, 4 Oct 2022 18:32:14 GMT
- Title: Explaining Patterns in Data with Language Models via Interpretable
Autoprompting
- Authors: Chandan Singh, John X. Morris, Jyoti Aneja, Alexander M. Rush,
Jianfeng Gao
- Abstract summary: We introduce interpretable autoprompting (iPrompt), an algorithm that generates a natural-language string explaining the data.
iPrompt can yield meaningful insights by accurately finding ground-truth dataset descriptions.
Experiments with an fMRI dataset show the potential for iPrompt to aid in scientific discovery.
- Score: 143.4162028260874
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have displayed an impressive ability to harness
natural language to perform complex tasks. In this work, we explore whether we
can leverage this learned ability to find and explain patterns in data.
Specifically, given a pre-trained LLM and data examples, we introduce
interpretable autoprompting (iPrompt), an algorithm that generates a
natural-language string explaining the data. iPrompt iteratively alternates
between generating explanations with an LLM and reranking them based on their
performance when used as a prompt. Experiments on a wide range of datasets,
from synthetic mathematics to natural-language understanding, show that iPrompt
can yield meaningful insights by accurately finding ground-truth dataset
descriptions. Moreover, the prompts produced by iPrompt are simultaneously
human-interpretable and highly effective for generalization: on real-world
sentiment classification datasets, iPrompt produces prompts that match or even
improve upon human-written prompts for GPT-3. Finally, experiments with an fMRI
dataset show the potential for iPrompt to aid in scientific discovery. All code
for using the methods and data here is made available on GitHub.
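The generate-then-rerank alternation the abstract describes is concrete enough to sketch. Below is a minimal, hypothetical Python sketch of that loop; `llm_generate` and `llm_score` are assumed stand-ins for LLM calls (candidate proposal and a log-likelihood-style score of an output given prompt and input), not the authors' released API, and the toy demo uses dummy functions in place of a real model.

```python
# Minimal sketch of an iPrompt-style loop, under the assumptions named above.
from typing import Callable, List, Tuple

def iprompt(
    examples: List[Tuple[str, str]],               # (input, output) pairs
    llm_generate: Callable[[str], List[str]],      # proposes explanations
    llm_score: Callable[[str, str, str], float],   # score of y given (prompt, x)
    n_iters: int = 5,
    n_keep: int = 4,
) -> str:
    candidates: List[str] = []
    for _ in range(n_iters):
        # 1) Generation: ask the LLM to explain a few (input, output) pairs.
        context = "\n".join(f"Input: {x}  Output: {y}" for x, y in examples[:8])
        candidates += llm_generate(
            context + "\nExplain the pattern mapping inputs to outputs:"
        )
        # 2) Reranking: keep the explanations that best predict the data
        #    when prepended to each input as a prompt.
        def fitness(prompt: str) -> float:
            return sum(llm_score(prompt, x, y) for x, y in examples)
        candidates = sorted(set(candidates), key=fitness, reverse=True)[:n_keep]
    return candidates[0]  # highest-scoring natural-language explanation

# Toy usage with dummy stand-ins (no real LLM involved):
if __name__ == "__main__":
    data = [("2 3", "5"), ("1 4", "5"), ("7 1", "8")]
    gen = lambda ctx: ["Add the two numbers.", "Return the larger number."]
    score = lambda p, x, y: 1.0 if "Add" in p else 0.0  # dummy scorer
    print(iprompt(data, gen, score))  # -> "Add the two numbers."
```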
Related papers
- Evaluating LLM Prompts for Data Augmentation in Multi-label Classification of Ecological Texts [1.565361244756411]
Large language models (LLMs) play a crucial role in natural language processing (NLP) tasks.
This study applied prompt-based data augmentation to detect mentions of green practices in Russian social media.
arXiv Detail & Related papers (2024-11-22T12:37:41Z)
- Ontology Population using LLMs [0.9894420655516563]
Knowledge graphs (KGs) are increasingly utilized for data integration, representation, and visualization.
LLMs offer promising capabilities for such tasks, excelling in natural language understanding and content generation.
This study investigates LLM effectiveness for KG population, focusing on the Enslaved.org Hub Ontology.
arXiv Detail & Related papers (2024-11-03T15:39:20Z)
- INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning [59.07490387145391]
Large language models (LLMs) have demonstrated impressive capabilities in various natural language processing tasks.
Their application to information retrieval (IR) tasks is still challenging due to the infrequent occurrence of many IR-specific concepts in natural language.
We introduce a novel instruction tuning dataset, INTERS, encompassing 20 tasks across three fundamental IR categories.
arXiv Detail & Related papers (2024-01-12T12:10:28Z)
- Helping Language Models Learn More: Multi-dimensional Task Prompt for Few-shot Tuning [36.14688633670085]
We propose MTPrompt, a multi-dimensional task prompt learning method based on task-related object, summary, and task description information.
By automatically building and searching for appropriate prompts, MTPrompt achieves the best results in the few-shot setting across five different datasets.
arXiv Detail & Related papers (2023-12-13T10:00:44Z)
- The language of prompting: What linguistic properties make a prompt successful? [13.034603322224548]
LLMs can be prompted to achieve impressive zero-shot or few-shot performance in many NLP tasks.
Yet, we still lack a systematic understanding of how linguistic properties of prompts correlate with task performance.
We investigate both grammatical properties such as mood, tense, aspect and modality, as well as lexico-semantic variation through the use of synonyms.
arXiv Detail & Related papers (2023-11-03T15:03:36Z)
- Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias [92.41919689753051]
Large language models (LLMs) have been recently leveraged as training data generators for various natural language processing (NLP) tasks.
We investigate training data generation with diversely attributed prompts, which have the potential to yield diverse and attributed generated data.
We show that attributed prompts outperform simple class-conditional prompts in terms of the resulting model's performance.
arXiv Detail & Related papers (2023-06-28T03:31:31Z)
- Visualizing Linguistic Diversity of Text Datasets Synthesized by Large Language Models [9.808214545408541]
LinguisticLens is a novel interactive visualization tool for making sense of and analyzing the syntactic diversity of text datasets.
It supports hierarchical visualization of a text dataset, allowing users to quickly scan for an overview and inspect individual examples.
arXiv Detail & Related papers (2023-05-19T00:53:45Z)
- Using Large Language Models to Generate Engaging Captions for Data Visualizations [51.98253121636079]
Large language models (LLMs) use sophisticated deep learning technology to produce human-like prose.
A key challenge lies in designing the most effective prompt for the LLM, a task called prompt engineering.
We report on first experiments using the popular LLM GPT-3 and deliver some promising results.
arXiv Detail & Related papers (2022-12-27T23:56:57Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning [84.75064077323098]
This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL).
RLPrompt is flexibly applicable to different types of LMs, such as masked models (e.g., BERT) and left-to-right models (e.g., GPTs); a toy sketch in this style appears after this list.
Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods.
arXiv Detail & Related papers (2022-05-25T07:50:31Z)
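For the RLPrompt entry above, here is a minimal REINFORCE-style sketch of discrete prompt search under toy assumptions: a tiny hand-picked vocabulary, a tabular per-position policy, and a dummy reward standing in for downstream task performance. The actual paper trains a small policy network on top of a frozen LM; none of the names below come from its code.

```python
import math
import random

# Toy REINFORCE-style discrete prompt search (in the spirit of RLPrompt).
# VOCAB, PROMPT_LEN, and the reward are illustrative assumptions only.
random.seed(0)
VOCAB = ["Classify", "sentiment", "Review", "positive", "negative", ":", "the"]
PROMPT_LEN = 3
LEARNING_RATE = 0.5

# Tabular policy: one logit per (position, token). RLPrompt instead learns a
# small network over a frozen LM; a table keeps the sketch self-contained.
logits = [[0.0] * len(VOCAB) for _ in range(PROMPT_LEN)]

def sample_prompt():
    """Sample one token per position from the softmax over that position."""
    tokens, chosen = [], []
    for pos in range(PROMPT_LEN):
        exps = [math.exp(l) for l in logits[pos]]
        total = sum(exps)
        r, acc = random.random() * total, 0.0
        for i, e in enumerate(exps):
            acc += e
            if r <= acc:
                tokens.append(VOCAB[i])
                chosen.append(i)
                break
    return tokens, chosen

def reward(tokens):
    """Dummy reward; in the paper this is downstream task performance
    (e.g., few-shot classification accuracy) of the sampled prompt."""
    return float("Classify" in tokens) + float("sentiment" in tokens)

for _ in range(200):
    tokens, chosen = sample_prompt()
    r = reward(tokens)
    # REINFORCE: move each position's softmax toward the sampled token,
    # scaled by the reward (gradient of log-softmax is indicator - prob).
    for pos, i in enumerate(chosen):
        exps = [math.exp(l) for l in logits[pos]]
        total = sum(exps)
        for j in range(len(VOCAB)):
            p = exps[j] / total
            logits[pos][j] += LEARNING_RATE * r * ((1.0 if j == i else 0.0) - p)

best = [VOCAB[max(range(len(VOCAB)), key=lambda j: logits[pos][j])]
        for pos in range(PROMPT_LEN)]
print("Learned prompt:", " ".join(best))
```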