The Prompt Report: A Systematic Survey of Prompting Techniques
- URL: http://arxiv.org/abs/2406.06608v5
- Date: Mon, 30 Dec 2024 19:33:09 GMT
- Title: The Prompt Report: A Systematic Survey of Prompting Techniques
- Authors: Sander Schulhoff, Michael Ilie, Nishant Balepur, Konstantine Kahadze, Amanda Liu, Chenglei Si, Yinheng Li, Aayush Gupta, HyoJung Han, Sevien Schulhoff, Pranav Sandeep Dulepet, Saurav Vidyadhara, Dayeon Ki, Sweta Agrawal, Chau Pham, Gerson Kroiz, Feileen Li, Hudson Tao, Ashay Srivastava, Hevander Da Costa, Saloni Gupta, Megan L. Rogers, Inna Goncearenco, Giuseppe Sarli, Igor Galynker, Denis Peskoff, Marine Carpuat, Jules White, Shyamal Anadkat, Alexander Hoyle, Philip Resnik
- Abstract summary: Generative Artificial Intelligence systems are increasingly being deployed across diverse industries and research domains.
Prompt engineering suffers from conflicting terminology and a fragmented ontological understanding.
We establish a structured understanding of prompt engineering by assembling a taxonomy of prompting techniques and analyzing their applications.
- Abstract: Generative Artificial Intelligence (GenAI) systems are increasingly being deployed across diverse industries and research domains. Developers and end-users interact with these systems through the use of prompting and prompt engineering. Although prompt engineering is a widely adopted and extensively researched area, it suffers from conflicting terminology and a fragmented ontological understanding of what constitutes an effective prompt due to its relatively recent emergence. We establish a structured understanding of prompt engineering by assembling a taxonomy of prompting techniques and analyzing their applications. We present a detailed vocabulary of 33 terms, a taxonomy of 58 LLM prompting techniques, and 40 techniques for other modalities. Additionally, we provide best practices and guidelines for prompt engineering, including advice for prompting state-of-the-art (SOTA) LLMs such as ChatGPT. We further present a meta-analysis of the entire literature on natural language prefix-prompting. As a culmination of these efforts, this paper presents the most comprehensive survey on prompt engineering to date.
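One of the foundational in-context learning techniques catalogued in the survey's taxonomy is few-shot prompting. As a minimal illustrative sketch (the task, exemplars, and template below are hypothetical, not taken from the paper), a few-shot prompt can be assembled by concatenating an instruction, labeled exemplars, and the new query:

```python
# Illustrative sketch of few-shot prompt construction. The classification
# task and exemplars are hypothetical stand-ins.

def build_few_shot_prompt(instruction, examples, query):
    """Concatenate an instruction, labeled exemplars, and the new query."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("I loved this movie.", "positive"),
     ("The service was terrible.", "negative")],
    "What a wonderful day!",
)
print(prompt)
```

The resulting string would be sent to an LLM, which is expected to continue the pattern by emitting a label for the final input.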
Related papers
- The Prompt Canvas: A Literature-Based Practitioner Guide for Creating Effective Prompts in Large Language Models [0.0]
This paper argues for the creation of an overarching framework that synthesizes existing methodologies into a cohesive overview for practitioners.
We present the Prompt Canvas, a structured framework resulting from an extensive literature review on prompt engineering.
arXiv Detail & Related papers (2024-12-06T15:35:18Z)
- Engineering Conversational Search Systems: A Review of Applications, Architectures, and Functional Components [4.262342157729123]
This study investigates the links between theoretical studies and technical implementations of conversational search systems.
We present a layered architecture framework and explain the core functions of conversational search systems.
We reflect on our findings in light of the rapid progress in large language models, discussing their capabilities, limitations, and directions for future research.
arXiv Detail & Related papers (2024-07-01T06:24:11Z)
- Efficient Prompting Methods for Large Language Models: A Survey [50.82812214830023]
Efficient prompting methods have attracted wide attention.
We discuss Automatic Prompt Engineering for different prompt components and Prompt Compression in continuous and discrete spaces.
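A toy sketch of discrete-space prompt compression (illustrative only, not the surveyed methods): drop high-frequency function words to shorten a prompt while retaining its content words. Real compressors instead score tokens with a learned model; the stopword list below is a hypothetical stand-in.

```python
# Toy discrete prompt compression: remove function words to shorten a
# prompt. Surveyed methods use learned token-importance scores instead.

STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "are", "that", "in"}

def compress_prompt(prompt: str) -> str:
    """Keep only tokens that are not in the stopword list."""
    return " ".join(t for t in prompt.split() if t.lower() not in STOPWORDS)

compressed = compress_prompt(
    "Summarize the main findings of the report in a single sentence."
)
print(compressed)  # "Summarize main findings report single sentence."
```

The design trade-off compression methods navigate is exactly the one this toy exposes: shorter prompts cost fewer tokens, but naive deletion can discard tokens the model needs.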
arXiv Detail & Related papers (2024-04-01T12:19:08Z)
- An Empirical Categorization of Prompting Techniques for Large Language Models: A Practitioner's Guide [0.34530027457862006]
In this survey, we examine some of the most well-known prompting techniques from both academic and practical viewpoints.
We present an overview of each category, aiming to clarify their unique contributions and showcase their practical applications.
arXiv Detail & Related papers (2024-02-18T23:03:56Z) - A Systematic Survey of Prompt Engineering in Large Language Models:
Techniques and Applications [11.568575664316143]
This paper provides a structured overview of recent advancements in prompt engineering, categorized by application area.
We provide a summary detailing the prompting methodology, its applications, the models involved, and the datasets utilized.
This systematic analysis enables a better understanding of this rapidly developing field and facilitates future research by illuminating open challenges and opportunities for prompt engineering.
arXiv Detail & Related papers (2024-02-05T19:49:13Z) - Intent-based Prompt Calibration: Enhancing prompt optimization with
synthetic boundary cases [2.6159111710501506]
We introduce a new method for automatic prompt engineering, using a calibration process that iteratively refines the prompt to match the user's intent.
We demonstrate the effectiveness of our method with respect to strong proprietary models on real-world tasks such as moderation and generation.
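The iterative refinement loop can be sketched in the abstract as a greedy search over candidate prompts. This is a hedged illustration, not the paper's method: the evaluator and candidate generator below are hypothetical stand-ins, whereas the actual approach uses an LLM to synthesize boundary cases and propose refinements.

```python
# Hedged sketch of an iterative prompt-refinement loop. The evaluator and
# candidate-edit functions are toy stand-ins for LLM-driven components.

def refine_prompt(prompt, candidates, evaluate, rounds=3):
    """Each round, greedily keep the candidate refinement that scores best."""
    best, best_score = prompt, evaluate(prompt)
    for _ in range(rounds):
        for cand in candidates(best):
            score = evaluate(cand)
            if score > best_score:
                best, best_score = cand, score
    return best

# Toy stand-ins: score prompts by how often they mention edge cases.
def toy_evaluate(p):
    return p.count("edge case")

def toy_candidates(p):
    return [p + " Consider edge cases.",
            p + " Handle one edge case explicitly."]

tuned = refine_prompt("Moderate this comment.", toy_candidates, toy_evaluate)
print(tuned)
```

Swapping the toy evaluator for model-judged performance on synthetic boundary cases recovers the general shape of calibration-style prompt optimization.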
arXiv Detail & Related papers (2024-02-05T15:28:43Z) - Large Language Models for Generative Information Extraction: A Survey [89.71273968283616]
Large Language Models (LLMs) have demonstrated remarkable capabilities in text understanding and generation.
We present an extensive overview by categorizing these works in terms of various IE subtasks and techniques.
We empirically analyze the most advanced methods and discover the emerging trend of IE tasks with LLMs.
arXiv Detail & Related papers (2023-12-29T14:25:22Z)
- Prompt Engineering for Healthcare: Methodologies and Applications [93.63832575498844]
This review introduces the latest advances in prompt engineering for natural language processing in the medical field.
We trace the development of prompt engineering and emphasize its significant contributions to healthcare natural language processing applications.
arXiv Detail & Related papers (2023-04-28T08:03:42Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves a 5.33x average improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- A New Neural Search and Insights Platform for Navigating and Organizing AI Research [56.65232007953311]
We introduce a new platform, AI Research Navigator, that combines classical keyword search with neural retrieval to discover and organize relevant literature.
We give an overview of the overall architecture of the system and of the components for document analysis, question answering, search, analytics, expert search, and recommendations.
arXiv Detail & Related papers (2020-10-30T19:12:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.