On Meta-Prompting
- URL: http://arxiv.org/abs/2312.06562v1
- Date: Mon, 11 Dec 2023 17:46:44 GMT
- Title: On Meta-Prompting
- Authors: Adrian de Wynter, Xun Wang, Qilong Gu, Si-Qing Chen
- Abstract summary: We call these approaches meta-prompting, or prompting to obtain prompts.
We propose a theoretical framework based on category theory to generalize and describe them.
- Score: 18.949285430843695
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Certain statistical models are capable of interpreting input strings as
instructions, or prompts, and carrying out tasks based on them. Many approaches to
prompting and pre-training these models involve the automated generation of
these prompts. We call these approaches meta-prompting, or prompting to obtain
prompts. We propose a theoretical framework based on category theory to
generalize and describe them. This framework is flexible enough to account for
LLM stochasticity, and allows us to obtain formal results around task
agnosticity and equivalence of various meta-prompting approaches. We experiment
with meta-prompting in two active areas of model research: creativity and
ideation. We find that user preference favors (p < 0.01) the prompts generated
under meta-prompting, as well as their corresponding outputs, over a series of
hardcoded baseline prompts that include the original task prompt. Using our
framework, we argue that meta-prompting is more effective than basic prompting
at generating desirable outputs.
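As an illustration of the core idea (prompting a model to produce a prompt, then using that prompt on the task), here is a minimal sketch; `complete` is a hypothetical stand-in for any LLM completion API and is not code from the paper.

```python
# Minimal meta-prompting sketch: prompting to obtain prompts.
# `complete` is a hypothetical stand-in for any LLM completion API.
def complete(text: str) -> str:
    raise NotImplementedError("wire this to an LLM API of your choice")

def meta_prompt(task_description: str) -> str:
    """Step 1: ask the model to write a prompt for the task."""
    return complete(
        "Write a clear, detailed prompt that instructs a language model to "
        f"perform the following task.\nTask: {task_description}\nPrompt:"
    )

def run_task(task_description: str, task_input: str) -> str:
    """Step 2: use the generated prompt on an actual input."""
    generated_prompt = meta_prompt(task_description)
    return complete(f"{generated_prompt}\n\nInput: {task_input}\nOutput:")

# Example: run_task("suggest creative product slogans", "Product: a reusable notebook")
```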
Related papers
- Prompt Exploration with Prompt Regression [38.847668543140315]
We propose a framework, Prompt Exploration with Prompt Regression (PEPR), to predict the effect of prompt combinations given results for individual prompt elements.
We evaluate our approach with open-source LLMs of different sizes on several different tasks.
arXiv Detail & Related papers (2024-05-17T20:30:49Z)
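A toy sketch of the general prompt-regression idea (not PEPR's actual model): regress observed task scores on indicator features for individual prompt elements, then predict the score of unseen combinations. The element names and scores below are invented for illustration.

```python
import numpy as np

# Toy prompt-regression sketch: treat a prompt's score as (roughly) additive
# in the prompt elements it contains. All names and scores here are invented.
elements = ["persona", "step_by_step", "output_format", "few_shot_examples"]

# Observed (element subset -> task score) pairs, e.g. from a small eval set.
observed = [
    ({"persona"}, 0.61),
    ({"step_by_step"}, 0.70),
    ({"persona", "step_by_step"}, 0.74),
    ({"output_format", "few_shot_examples"}, 0.78),
]

X = np.array([[1.0] + [1.0 if e in s else 0.0 for e in elements] for s, _ in observed])
y = np.array([score for _, score in observed])
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # intercept + per-element effects

def predict(subset):
    x = np.array([1.0] + [1.0 if e in subset else 0.0 for e in elements])
    return float(x @ w)

# Predict the score of a combination never evaluated directly.
print(predict({"persona", "step_by_step", "few_shot_examples"}))
```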
- Efficient Prompting Methods for Large Language Models: A Survey [50.171011917404485]
Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks.
This approach brings the additional computational burden of model inference and human effort to guide and control the behavior of LLMs.
We present the basic concepts of prompting, review the advances for efficient prompting, and highlight future research directions.
arXiv Detail & Related papers (2024-04-01T12:19:08Z)
- Exploring Prompt Engineering Practices in the Enterprise [3.7882262667445734]
A prompt is a natural language instruction designed to elicit certain behaviour or output from a model.
For complex tasks and tasks with specific requirements, prompt design is not trivial.
We analyze sessions of prompt editing behavior, categorizing the parts of prompts users iterated on and the types of changes they made.
arXiv Detail & Related papers (2024-03-13T20:32:32Z)
- Meta Prompting for AI Systems [12.304069891580658]
We present a comprehensive study of Meta Prompting (MP), an innovative technique reshaping the utilization of language models (LMs) and AI systems in problem-solving and data interaction.
MP emphasizes the structure and syntax of information over traditional content-centric methods.
We show how it effectively deconstructs intricate problems into simpler sub-problems, enhancing token efficiency, and enabling more equitable problem-solving comparisons.
arXiv Detail & Related papers (2023-11-20T01:51:13Z)
- Effective Structured Prompting by Meta-Learning and Representative Verbalizer [27.64413828719264]
We propose MetaPrompter for effective structured prompting.
We propose a novel soft verbalizer (RepVerb), which constructs label embeddings directly from feature embeddings.
Experimental results demonstrate that MetaPrompter outperforms recent state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T12:44:33Z)
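A hedged sketch of the general construction the summary describes (building label embeddings directly from feature embeddings, here by averaging encoded support examples per class); it illustrates the idea, not RepVerb's exact formulation.

```python
import torch
import torch.nn.functional as F

# Sketch: build a label embedding per class by averaging the feature
# embeddings of that class's support examples, then classify queries by
# cosine similarity to these label embeddings.
def label_embeddings(support_feats, support_labels, num_classes):
    dim = support_feats.size(-1)
    protos = torch.zeros(num_classes, dim)
    for c in range(num_classes):
        protos[c] = support_feats[support_labels == c].mean(dim=0)
    return protos

def classify(query_feats, protos):
    sims = F.cosine_similarity(query_feats.unsqueeze(1), protos.unsqueeze(0), dim=-1)
    return sims.argmax(dim=-1)

# Example with random features: 2 classes, 4 support examples, 3 queries.
feats, labels = torch.randn(4, 16), torch.tensor([0, 0, 1, 1])
print(classify(torch.randn(3, 16), label_embeddings(feats, labels, num_classes=2)))
```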
- Guiding Large Language Models via Directional Stimulus Prompting [114.84930073977672]
We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.
Instead of directly adjusting LLMs, our method employs a small tunable policy model to generate an auxiliary directional stimulus prompt for each input instance.
arXiv Detail & Related papers (2023-02-22T17:44:15Z)
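A hedged sketch of the setup described above: a small tunable model produces a per-instance hint (the directional stimulus) that is appended to the prompt of a frozen black-box LLM. `small_policy_model` and `black_box_llm` are hypothetical stand-ins, not APIs from the paper.

```python
# Sketch of directional-stimulus-style prompting for summarization.
def small_policy_model(article: str) -> str:
    # In the real setting this is a small LM tuned (e.g. with RL) to emit
    # hints that steer the black-box model; here it is a fixed stub.
    return "keywords: budget, vote, deadline"

def black_box_llm(prompt: str) -> str:
    raise NotImplementedError("call a black-box LLM API here")

def summarize_with_stimulus(article: str) -> str:
    stimulus = small_policy_model(article)      # auxiliary, per-instance hint
    prompt = (
        f"Summarize the article below.\n{article}\n"
        f"Hint - focus on these points: {stimulus}\nSummary:"
    )
    return black_box_llm(prompt)
```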
- Demystifying Prompts in Language Models via Perplexity Estimation [109.59105230163041]
Performance of a prompt is coupled with the extent to which the model is familiar with the language it contains.
We show that the lower the perplexity of the prompt is, the better the prompt is able to perform the task.
arXiv Detail & Related papers (2022-12-08T02:21:47Z)
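A minimal sketch of this selection criterion, scoring candidate prompts by their perplexity under an off-the-shelf causal LM (GPT-2 here, chosen for illustration) and keeping the lowest-perplexity candidates:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Rank candidate prompts by perplexity under a causal LM; per the paper's
# observation, lower-perplexity prompts tend to perform the task better.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def prompt_perplexity(prompt: str) -> float:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean negative log-likelihood per token
    return float(torch.exp(loss))

candidates = [
    "Classify the sentiment of the following movie review as positive or negative.",
    "Sentiment? Review follows, answer pos/neg only now:",
]
for ppl, prompt in sorted((prompt_perplexity(p), p) for p in candidates):
    print(f"{ppl:8.2f}  {prompt}")
```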
- STPrompt: Semantic-guided and Task-driven prompts for Effective Few-shot Classification [5.6205035780719275]
We propose STPrompt, a Semantic-guided and Task-driven Prompt model.
The proposed model achieves state-of-the-art performance on five different few-shot text classification datasets.
arXiv Detail & Related papers (2022-10-29T04:42:30Z)
- MetaPrompting: Learning to Learn Better Prompts [52.914694884515534]
We propose a new soft prompting method called MetaPrompting, which adopts the well-recognized model-agnostic meta-learning algorithm.
Extensive experiments show MetaPrompting brings significant improvement on four different datasets.
arXiv Detail & Related papers (2022-09-23T09:01:05Z)
- RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning [84.75064077323098]
This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL).
RLPrompt is flexibly applicable to different types of LMs, such as masked (e.g., BERT) and left-to-right models (e.g., GPTs).
Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods.
arXiv Detail & Related papers (2022-05-25T07:50:31Z)
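A toy sketch of the underlying idea (searching over discrete prompt tokens with a policy-gradient update); it is not the paper's implementation, and `reward_fn` is a placeholder for a downstream metric such as few-shot accuracy with the frozen LM.

```python
import torch

# Toy REINFORCE loop over a tiny candidate vocabulary of prompt tokens.
vocab = ["classify", "sentiment", "review", "answer", "movie", "rate"]
prompt_len, steps = 3, 200
logits = torch.zeros(prompt_len, len(vocab), requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

def reward_fn(tokens):
    # Placeholder reward; in practice, score the frozen LM's downstream
    # performance (e.g. few-shot accuracy) when using this prompt.
    return float(len(set(tokens)))

for _ in range(steps):
    dist = torch.distributions.Categorical(logits=logits)
    idx = dist.sample()                              # one token id per position
    reward = reward_fn([vocab[i] for i in idx.tolist()])
    loss = -(dist.log_prob(idx).sum() * reward)      # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()

print("searched prompt:", " ".join(vocab[i] for i in logits.argmax(dim=-1).tolist()))
```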
- OpenPrompt: An Open-source Framework for Prompt-learning [59.17869696803559]
We present OpenPrompt, a unified easy-to-use toolkit to conduct prompt-learning over PLMs.
OpenPrompt is a research-friendly framework that is equipped with efficiency, modularity, and extendibility.
arXiv Detail & Related papers (2021-11-03T03:31:14Z)
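A hedged usage sketch in the spirit of OpenPrompt's documented quick-start (manual template plus manual verbalizer for sentiment classification); exact class names and signatures may differ across versions and should be checked against the installed release.

```python
import torch
from openprompt import PromptDataLoader, PromptForClassification
from openprompt.data_utils import InputExample
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer

# Zero-shot sentiment classification with a manual template and verbalizer.
classes = ["negative", "positive"]
dataset = [InputExample(guid=0, text_a="The movie was a complete waste of time.")]

plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")
template = ManualTemplate(text='{"placeholder":"text_a"} It was {"mask"}.', tokenizer=tokenizer)
verbalizer = ManualVerbalizer(
    classes=classes,
    label_words={"negative": ["bad"], "positive": ["good", "wonderful"]},
    tokenizer=tokenizer,
)
prompt_model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)

loader = PromptDataLoader(dataset=dataset, template=template, tokenizer=tokenizer,
                          tokenizer_wrapper_class=WrapperClass)

prompt_model.eval()
with torch.no_grad():
    for batch in loader:
        logits = prompt_model(batch)
        print(classes[int(torch.argmax(logits, dim=-1))])
```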
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.