An Empirical Categorization of Prompting Techniques for Large Language
Models: A Practitioner's Guide
- URL: http://arxiv.org/abs/2402.14837v1
- Date: Sun, 18 Feb 2024 23:03:56 GMT
- Authors: Oluwole Fagbohun, Rachel M. Harrison, Anton Dereventsov
- Abstract summary: In this survey, we examine some of the most well-known prompting techniques from both academic and practical viewpoints.
We present an overview of each category, aiming to clarify their unique contributions and showcase their practical applications.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to rapid advancements in the development of Large Language Models (LLMs),
programming these models with prompts has recently gained significant
attention. However, the sheer number of available prompt engineering techniques
creates an overwhelming landscape for practitioners looking to utilize these
tools. For the most efficient and effective use of LLMs, it is important to
compile a comprehensive list of prompting techniques and establish a
standardized, interdisciplinary categorization framework. In this survey, we
examine some of the most well-known prompting techniques from both academic and
practical viewpoints and classify them into seven distinct categories. We
present an overview of each category, aiming to clarify their unique
contributions and showcase their practical applications in real-world examples
in order to equip fellow practitioners with a structured framework for
understanding and categorizing prompting techniques tailored to their specific
domains. We believe that this approach will help simplify the complex landscape
of prompt engineering and enable more effective utilization of LLMs in various
applications. By providing practitioners with a systematic approach to prompt
categorization, we aim to assist in navigating the intricacies of effective
prompt design for conversational pre-trained LLMs and inspire new possibilities
in their respective fields.
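The techniques the survey categorizes are easiest to grasp as concrete prompt templates. Below is a minimal sketch of three widely used ones (zero-shot, few-shot, and chain-of-thought prompting); the helper names and example wording are illustrative and are not taken from the paper's own seven-category taxonomy:

```python
# Illustrative prompt templates for three common prompting techniques.
# Helper names and wording are our own, not the survey's taxonomy.

def zero_shot(task: str, query: str) -> str:
    """Zero-shot: state the task directly, with no worked examples."""
    return f"{task}\n\nInput: {query}\nAnswer:"

def few_shot(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Few-shot: prepend worked input/output pairs before the real query."""
    shots = "\n".join(f"Input: {x}\nAnswer: {y}" for x, y in examples)
    return f"{task}\n\n{shots}\n\nInput: {query}\nAnswer:"

def chain_of_thought(task: str, query: str) -> str:
    """Chain-of-thought: ask the model to reason step by step."""
    return f"{task}\n\nInput: {query}\nLet's think step by step."

prompt = few_shot(
    "Classify the sentiment of the review as positive or negative.",
    [("Great battery life.", "positive"), ("Broke after a week.", "negative")],
    "The screen is stunning.",
)
print(prompt)
```

In practice the same query can be wrapped by any of these templates before being sent to a conversational LLM; which wrapper works best is exactly the kind of question such a categorization framework is meant to help practitioners answer.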
Related papers
- The Prompt Canvas: A Literature-Based Practitioner Guide for Creating Effective Prompts in Large Language Models [0.0]
This paper argues for the creation of an overarching framework that synthesizes existing methodologies into a cohesive overview for practitioners.
We present the Prompt Canvas, a structured framework resulting from an extensive literature review on prompt engineering.
arXiv Detail & Related papers (2024-12-06T15:35:18Z)
- Demystifying Large Language Models for Medicine: A Primer [50.83806796466396]
Large language models (LLMs) represent a transformative class of AI tools capable of revolutionizing various aspects of healthcare.
This tutorial aims to equip healthcare professionals with the tools necessary to effectively integrate LLMs into clinical practice.
arXiv Detail & Related papers (2024-10-24T15:41:56Z)
- The Prompt Report: A Systematic Survey of Prompting Techniques [42.618971816813385]
Generative Artificial Intelligence systems are increasingly being deployed across diverse industries and research domains.
However, prompt engineering suffers from conflicting terminology and a fragmented ontological understanding.
We establish a structured understanding of prompt engineering by assembling a taxonomy of prompting techniques and analyzing their applications.
arXiv Detail & Related papers (2024-06-06T18:10:11Z)
- LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z)
- Towards Generalist Prompting for Large Language Models by Mental Models [105.03747314550591]
Large language models (LLMs) have demonstrated impressive performance on many tasks.
To achieve optimal performance, specially designed prompting methods are still needed.
We introduce the concept of generalist prompting, which operates on the design principle of achieving optimal or near-optimal performance.
arXiv Detail & Related papers (2024-02-28T11:29:09Z)
- A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications [11.568575664316143]
This paper provides a structured overview of recent advancements in prompt engineering, categorized by application area.
We provide a summary detailing the prompting methodology, its applications, the models involved, and the datasets utilized.
This systematic analysis enables a better understanding of this rapidly developing field and facilitates future research by illuminating open challenges and opportunities for prompt engineering.
arXiv Detail & Related papers (2024-02-05T19:49:13Z)
- Tapping the Potential of Large Language Models as Recommender Systems: A Comprehensive Framework and Empirical Analysis [91.5632751731927]
Large Language Models such as ChatGPT have showcased remarkable abilities in solving general tasks.
We propose a general framework for utilizing LLMs in recommendation tasks, focusing on the capabilities of LLMs as recommenders.
We analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results.
arXiv Detail & Related papers (2024-01-10T08:28:56Z)
- A Survey on Prompting Techniques in LLMs [0.0]
Autoregressive Large Language Models have transformed the landscape of Natural Language Processing.
We present a taxonomy of existing literature on prompting techniques and provide a concise survey based on this taxonomy.
We identify some open problems in the realm of prompting in autoregressive LLMs which could serve as a direction for future research.
arXiv Detail & Related papers (2023-11-28T17:56:34Z)
- A Practical Survey on Zero-shot Prompt Design for In-context Learning [0.0]
Large language models (LLMs) have brought about significant improvements in Natural Language Processing (NLP) tasks.
This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts.
We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods.
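The contrast between manual prompt design and optimization-based design can be sketched as a small search loop: score a handful of candidate instructions on a labelled development set and keep the best one. Everything below is illustrative; `fake_llm` is a stand-in for a real model call, not any API from the papers listed here:

```python
# Toy sketch of search-based zero-shot prompt selection: evaluate candidate
# instructions on a small labelled set and keep the highest-scoring one.

def fake_llm(prompt: str) -> str:
    # Hypothetical model: answers "positive" iff "love" appears in the prompt.
    # A real system would call an actual LLM here.
    return "positive" if "love" in prompt.lower() else "negative"

def score(instruction: str, dev_set: list[tuple[str, str]]) -> float:
    """Fraction of dev-set examples the model labels correctly under this instruction."""
    hits = 0
    for text, label in dev_set:
        if fake_llm(f"{instruction}\nReview: {text}\nSentiment:") == label:
            hits += 1
    return hits / len(dev_set)

candidates = [
    "Classify the review's sentiment.",
    "Is the following review positive or negative?",
]
dev_set = [("I love it", "positive"), ("Terrible product", "negative")]
best = max(candidates, key=lambda c: score(c, dev_set))
```

Evaluation methods of the kind the paper reviews slot in where `score` is defined; more elaborate optimization algorithms replace the exhaustive `max` over a fixed candidate list.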
arXiv Detail & Related papers (2023-09-22T23:00:34Z)
- LLM-Rec: Personalized Recommendation via Prompting Large Language Models [62.481065357472964]
Recent advances in large language models (LLMs) have showcased their remarkable ability to harness commonsense knowledge and reasoning.
This study introduces a novel approach, coined LLM-Rec, which incorporates four distinct prompting strategies of text enrichment for improving personalized text-based recommendations.
arXiv Detail & Related papers (2023-07-24T18:47:38Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)