A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications
- URL: http://arxiv.org/abs/2402.07927v1
- Date: Mon, 5 Feb 2024 19:49:13 GMT
- Title: A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications
- Authors: Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha
- Abstract summary: This paper provides a structured overview of recent advancements in prompt engineering, categorized by application area.
We provide a summary detailing the prompting methodology, its applications, the models involved, and the datasets utilized.
This systematic analysis enables a better understanding of this rapidly developing field and facilitates future research by illuminating open challenges and opportunities for prompt engineering.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prompt engineering has emerged as an indispensable technique for extending the capabilities of large language models (LLMs) and vision-language models (VLMs). This approach leverages task-specific instructions, known as prompts, to enhance model efficacy without modifying the core model parameters: rather than updating weights, prompts elicit the desired model behavior solely from the given input, allowing seamless integration of pre-trained models into downstream tasks. Prompts can be natural language instructions that provide context to guide the model, or learned vector representations that activate relevant knowledge. This burgeoning field has enabled success across applications ranging from question answering to commonsense reasoning. However, the diverse prompt engineering methods and techniques still lack systematic organization and understanding. This survey addresses that gap by providing a structured overview of recent advancements in prompt engineering, categorized by application area. For each prompting approach, we provide a summary detailing the methodology, its applications, the models involved, and the datasets utilized. We also delve into the strengths and limitations of each approach, and include a taxonomy diagram and a table summarizing the datasets, models, and critical points of each prompting technique. This systematic analysis enables a better understanding of this rapidly developing field and facilitates future research by illuminating open challenges and opportunities for prompt engineering.
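To make the two prompt types the abstract distinguishes concrete, here is a minimal sketch of zero-shot and few-shot natural-language prompting. It assumes a hypothetical `call_llm` helper standing in for any text-completion endpoint; no specific provider, model, or prompt wording from the survey is implied.

```python
# Minimal sketch of zero-shot vs. few-shot prompting.
# `call_llm` is a hypothetical stand-in for any text-completion
# endpoint (API call, local model, etc.); wire it to your own backend.

def call_llm(prompt: str) -> str:
    """Hypothetical: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("connect to a model of your choice")

def zero_shot(question: str) -> str:
    # A plain natural-language instruction supplies all the context.
    prompt = f"Answer the question concisely.\n\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    # In-context examples demonstrate the desired input/output mapping,
    # steering behavior without touching any model parameters.
    demos = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in examples)
    prompt = f"{demos}\n\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)
```

Learned soft prompts work analogously, except the prepended context is a trainable embedding rather than text; only those prompt vectors are tuned while the base model's weights stay frozen.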
Related papers
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities
Model merging is an efficient technique in the machine learning community for combining the capabilities of existing models without retraining from scratch.
There is a significant gap in the literature regarding a systematic and thorough review of these techniques.
arXiv Detail & Related papers (2024-08-14T16:58:48Z)
- Inference Optimizations for Large Language Models: Effects, Challenges, and Practical Considerations
Large language models are ubiquitous in natural language processing because they can adapt to new tasks without retraining.
This literature review focuses on various techniques for reducing resource requirements and compressing large language models.
arXiv Detail & Related papers (2024-08-06T12:07:32Z)
- Efficient Prompting Methods for Large Language Models: A Survey
Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks.
However, prompting brings the additional computational burden of model inference, as well as the human effort needed to guide and control the behavior of LLMs.
We present the basic concepts of prompting, review the advances for efficient prompting, and highlight future research directions.
arXiv Detail & Related papers (2024-04-01T12:19:08Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook
Vision systems that can see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models learned to bridge such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, holding interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models
Prompt engineering involves augmenting a large pre-trained model with task-specific hints, known as prompts, to adapt the model to new tasks.
This paper aims to provide a comprehensive survey of cutting-edge research in prompt engineering on three types of vision-language models.
arXiv Detail & Related papers (2023-07-24T17:58:06Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Iterative Zero-Shot LLM Prompting for Knowledge Graph Construction
This paper proposes an innovative knowledge graph generation approach that leverages the potential of the latest generative large language models.
The approach is realized as a pipeline comprising novel iterative zero-shot and external-knowledge-agnostic strategies.
We claim that our proposal is a suitable solution for scalable and versatile knowledge graph construction and may be applied to different and novel contexts (a minimal sketch of such an iterative loop appears after this list).
arXiv Detail & Related papers (2023-07-03T16:01:45Z)
- Foundation Models for Natural Language Processing -- Pre-trained Language Models Integrating Media
Foundation Models are pre-trained language models for Natural Language Processing.
They can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning.
This book provides a comprehensive overview of the state of the art in research and applications of Foundation Models.
arXiv Detail & Related papers (2023-02-16T20:42:04Z)
- Few-shot Prompting Towards Controllable Response Generation
We first explore the combination of prompting and reinforcement learning (RL) to steer models' generation without accessing any of the models' parameters.
We then apply multi-task learning so that the model learns to generalize better to new tasks.
Experiment results show that our proposed method can successfully control several state-of-the-art (SOTA) dialogue models without accessing their parameters.
arXiv Detail & Related papers (2022-06-08T14:48:06Z)
- Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language.
We introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks.
arXiv Detail & Related papers (2021-02-15T05:27:55Z)
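As referenced in the knowledge graph construction entry above, here is a minimal sketch of what an iterative zero-shot prompting loop can look like: each round asks the model, with no in-context examples, to extract subject-relation-object triples, and feeds the accumulated triples back into the next prompt. The prompt wording, the pipe-separated triple format, and the `call_llm` stub are illustrative assumptions, not the cited paper's actual pipeline.

```python
# Minimal sketch of an iterative zero-shot prompting loop for
# knowledge graph construction. The prompt text, triple format, and
# `call_llm` are illustrative assumptions, not the paper's pipeline.

def call_llm(prompt: str) -> str:
    """Hypothetical: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("connect to a model of your choice")

def extract_triples(text: str, max_rounds: int = 3) -> set[tuple[str, ...]]:
    triples: set[tuple[str, ...]] = set()
    for _ in range(max_rounds):
        known = "\n".join(" | ".join(t) for t in sorted(triples))
        # Zero-shot: the instruction alone defines the task; previously
        # extracted triples are fed back so each round refines the graph.
        prompt = (
            "Extract (subject | relation | object) triples from the text.\n"
            f"Already extracted:\n{known or '(none)'}\n\n"
            f"Text:\n{text}\n\nNew triples, one per line:"
        )
        new = {
            tuple(part.strip() for part in line.split("|"))
            for line in call_llm(prompt).splitlines()
            if line.count("|") == 2
        }
        if not new - triples:  # stop once a round adds nothing new
            break
        triples |= new
    return triples
```

The external-knowledge-agnostic aspect shows up here only in that the loop relies solely on the model and the input text; the cited paper's actual strategies for validation and scaling are beyond this sketch.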