On Meta-Prompting
- URL: http://arxiv.org/abs/2312.06562v1
- Date: Mon, 11 Dec 2023 17:46:44 GMT
- Title: On Meta-Prompting
- Authors: Adrian de Wynter, Xun Wang, Qilong Gu, Si-Qing Chen
- Abstract summary: We call these approaches meta-prompting, or prompting to obtain prompts.
We propose a theoretical framework based on category theory to generalize and describe them.
- Score: 18.949285430843695
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Certain statistical models are capable of interpreting input strings as
instructions, or prompts, and of carrying out tasks based on them. Many approaches to
prompting and pre-training these models involve the automated generation of
these prompts. We call these approaches meta-prompting, or prompting to obtain
prompts. We propose a theoretical framework based on category theory to
generalize and describe them. This framework is flexible enough to account for
LLM stochasticity, and it allows us to obtain formal results around task
agnosticity and the equivalence of various meta-prompting approaches. We experiment
with meta-prompting in two active areas of model research: creativity and
ideation. We find that user preference favors (p < 0.01) the prompts generated
under meta-prompting, as well as their corresponding outputs, over a series of
hardcoded baseline prompts that include the original task prompt. Using our
framework, we argue that meta-prompting is more effective than basic prompting
at generating desirable outputs.
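To make the mechanism concrete, below is a minimal sketch of meta-prompting in Python, assuming only a generic text-completion callable `llm` (a placeholder for whatever model API is in use; the function names and instruction wording are illustrative, not the paper's): the model is first prompted to write a prompt for the task, and the generated prompt is then used to produce the final output.

```python
from typing import Callable

# Placeholder for any text-completion API; swap in a real client call.
LLM = Callable[[str], str]

def meta_prompt(llm: LLM, task: str) -> str:
    """Prompting to obtain a prompt: ask the model to write the prompt itself."""
    instruction = (
        "Write a clear, detailed prompt that would make a language model "
        f"perform the following task well.\n\nTask: {task}\n\nPrompt:"
    )
    return llm(instruction)

def run_with_meta_prompt(llm: LLM, task: str, user_input: str) -> str:
    generated = meta_prompt(llm, task)          # first call: obtain a prompt
    return llm(f"{generated}\n\n{user_input}")  # second call: use it
```

The hardcoded baseline condition described in the abstract corresponds to skipping `meta_prompt` and calling `llm` on the original task prompt directly.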
Related papers
- Generative Prompt Internalization [48.91617280112579]
We propose Generative Prompt Internalization (GenPI), a lightweight method that employs a joint training approach.
GenPI not only replicates the behavior of models with prompt inputs but also generates the content of the prompt.
We demonstrate that our approach effectively internalizes complex prompts across various agent-based application scenarios.
arXiv Detail & Related papers (2024-11-24T17:32:20Z)
- Exploring Prompt Engineering Practices in the Enterprise [3.7882262667445734]
A prompt is a natural language instruction designed to elicit certain behaviour or output from a model.
For complex tasks and tasks with specific requirements, prompt design is not trivial.
We analyze sessions of prompt editing behavior, categorizing the parts of prompts users iterated on and the types of changes they made.
arXiv Detail & Related papers (2024-03-13T20:32:32Z)
- Effective Structured Prompting by Meta-Learning and Representative Verbalizer [27.64413828719264]
We propose MetaPrompter for effective structured prompting.
We also propose a novel soft verbalizer (RepVerb), which constructs label embeddings directly from feature embeddings.
Experimental results demonstrate that MetaPrompter outperforms recent state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T12:44:33Z)
- Demystifying Prompts in Language Models via Perplexity Estimation [109.59105230163041]
The performance of a prompt is coupled with the extent to which the model is familiar with the language it contains.
We show that the lower a prompt's perplexity, the better it performs the task (a minimal sketch of this measurement follows this list).
arXiv Detail & Related papers (2022-12-08T02:21:47Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves a 5.33x average improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification [5.6205035780719275]
We propose STPrompt, a Semantic-guided and Task-driven Prompt model.
The proposed model achieves state-of-the-art performance on five different few-shot text classification datasets.
arXiv Detail & Related papers (2022-10-29T04:42:30Z)
- MetaPrompting: Learning to Learn Better Prompts [52.914694884515534]
We propose a new soft prompting method called MetaPrompting, which adopts the well-recognized model-agnostic meta-learning algorithm.
Extensive experiments show that MetaPrompting brings significant improvements on four different datasets.
arXiv Detail & Related papers (2022-09-23T09:01:05Z)
- Rationale-Augmented Ensembles in Language Models [53.45015291520658]
We reconsider rationale-augmented prompting for few-shot in-context learning.
We identify rationale sampling in the output space as the key component for robustly improving performance (a minimal sketch of this sampling-and-voting scheme follows this list).
We demonstrate that rationale-augmented ensembles achieve more accurate and interpretable results than existing prompting approaches.
arXiv Detail & Related papers (2022-07-02T06:20:57Z)
- OpenPrompt: An Open-source Framework for Prompt-learning [59.17869696803559]
We present OpenPrompt, a unified, easy-to-use toolkit for prompt-learning over PLMs.
OpenPrompt is a research-friendly framework designed for efficiency, modularity, and extensibility.
arXiv Detail & Related papers (2021-11-03T03:31:14Z)
- Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm [0.0]
We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language.
We introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks.
arXiv Detail & Related papers (2021-02-15T05:27:55Z)
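As noted in the Demystifying Prompts entry above, prompt quality correlates with prompt perplexity. Here is a minimal sketch of that measurement using the Hugging Face transformers library, with GPT-2 as an illustrative scoring model (the model choice and candidate prompts are assumptions, not the paper's setup):

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in for the scoring model; the paper's choice may differ.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def prompt_perplexity(prompt: str) -> float:
    """Perplexity of a prompt under the scoring model (lower = more familiar)."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

# Rank hypothetical candidate prompts by perplexity, lowest (best) first.
candidates = [
    "Translate the following sentence into French:",
    "French translation of the text below:",
]
print(sorted(candidates, key=prompt_perplexity))
```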
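Similarly, the Rationale-Augmented Ensembles entry hinges on sampling rationales and aggregating in the output space. A minimal sketch of that sampling-and-voting scheme, reusing the placeholder `llm` callable from the meta-prompting sketch and assuming it samples stochastically (the answer-parsing convention is an illustrative assumption, not the paper's protocol):

```python
from collections import Counter
from typing import Callable

LLM = Callable[[str], str]  # placeholder completion API, assumed stochastic

def rationale_ensemble(llm: LLM, question: str, n_samples: int = 8) -> str:
    """Sample n rationales, then majority-vote over the final answers only."""
    prompt = (
        f"Q: {question}\n"
        "Think step by step, then give the final answer on a new line "
        "starting with 'Answer:'."
    )
    answers = []
    for _ in range(n_samples):
        completion = llm(prompt)  # each call may sample a different rationale
        for line in completion.splitlines():
            if line.startswith("Answer:"):
                answers.append(line.removeprefix("Answer:").strip())
                break
    # Aggregate in the output (answer) space; the rationales are discarded.
    return Counter(answers).most_common(1)[0][0] if answers else ""
```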
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.