Prompt Engineering Guidelines for Using Large Language Models in Requirements Engineering
- URL: http://arxiv.org/abs/2507.03405v1
- Date: Fri, 04 Jul 2025 09:13:50 GMT
- Title: Prompt Engineering Guidelines for Using Large Language Models in Requirements Engineering
- Authors: Krishna Ronanki, Simon Arvidsson, Johan Axell
- Abstract summary: Generative AI models like Large Language Models (LLMs) have demonstrated their utility across various activities, including within Requirements Engineering (RE). Ensuring the quality and accuracy of LLM-generated output is critical, with prompt engineering serving as a key technique to guide model responses. Existing literature provides limited guidance on how prompt engineering can be leveraged, specifically for RE activities.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid emergence of generative AI models like Large Language Models (LLMs) has demonstrated their utility across various activities, including within Requirements Engineering (RE). Ensuring the quality and accuracy of LLM-generated output is critical, with prompt engineering serving as a key technique to guide model responses. However, existing literature provides limited guidance on how prompt engineering can be leveraged, specifically for RE activities. The objective of this study is to explore the applicability of existing prompt engineering guidelines for the effective usage of LLMs within RE. To achieve this goal, we began by conducting a systematic review of primary literature to compile a non-exhaustive list of prompt engineering guidelines. Then, we conducted interviews with RE experts to present the extracted guidelines and gain insights on the advantages and limitations of their application within RE. Our literature review indicates a shortage of prompt engineering guidelines for domain-specific activities, specifically for RE. Our proposed mapping contributes to addressing this shortage. We conclude our study by identifying an important future line of research within this field.
Related papers
- Prompt Engineering for Requirements Engineering: A Literature Review and Roadmap
We present the first roadmap-oriented systematic literature review of Prompt Engineering for RE (PE4RE). To bring order to a fragmented landscape, we propose a hybrid taxonomy that links technique-oriented patterns to task-oriented RE roles.
arXiv Detail & Related papers (2025-07-10T12:02:56Z)
- Evaluating Large Language Models for Real-World Engineering Tasks
This paper introduces a curated database comprising over 100 questions derived from authentic, production-oriented engineering scenarios. Using this dataset, we evaluate four state-of-the-art Large Language Models (LLMs). Our results show that LLMs demonstrate strengths in basic temporal and structural reasoning but struggle significantly with abstract reasoning, formal modeling, and context-sensitive engineering logic.
arXiv Detail & Related papers (2025-05-12T14:05:23Z)
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z)
- Empowering Few-Shot Relation Extraction with The Integration of Traditional RE Methods and Large Language Models
Few-Shot Relation Extraction (FSRE) is attracting increasing attention from researchers in Natural Language Processing (NLP).
The recent emergence of Large Language Models (LLMs) has prompted numerous researchers to explore FSRE through In-Context Learning (ICL).
arXiv Detail & Related papers (2024-07-12T03:31:11Z)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models
Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation.
Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge.
Retrieval-Augmented LLMs (RA-LLMs) have emerged to harness external and authoritative knowledge bases, rather than relying on the model's internal knowledge.
arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- LLM Inference Unveiled: Survey and Roofline Model Insights
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems.
arXiv Detail & Related papers (2024-02-26T07:33:05Z)
- Using Large Language Models for Natural Language Processing Tasks in Requirements Engineering: A Systematic Guideline
Large Language Models (LLMs) are becoming a cornerstone in automating Requirements Engineering (RE) tasks.
The initial segment of this chapter aims to furnish readers with essential knowledge about LLMs.
It then provides a comprehensive guideline tailored for students, researchers, and practitioners on harnessing LLMs to address their specific objectives.
arXiv Detail & Related papers (2024-02-21T14:00:52Z) - Towards Generating Executable Metamorphic Relations Using Large Language Models [46.26208489175692]
We propose an approach for automatically deriving executable metamorphic relations (MRs) from requirements using large language models (LLMs).
To assess the feasibility of our approach, we conducted a questionnaire-based survey in collaboration with Siemens Industry Software.
arXiv Detail & Related papers (2024-01-30T13:52:47Z) - Towards Goal-oriented Prompt Engineering for Large Language Models: A Survey [4.362755917924305]
Large Language Models (LLMs) have shown prominent performance in various downstream tasks.
This paper aims to highlight the limitation of designing prompts based on an anthropomorphic assumption.
arXiv Detail & Related papers (2024-01-25T09:47:55Z) - Tapping the Potential of Large Language Models as Recommender Systems: A Comprehensive Framework and Empirical Analysis [91.5632751731927]
Large Language Models such as ChatGPT have showcased remarkable abilities in solving general tasks.<n>We propose a general framework for utilizing LLMs in recommendation tasks, focusing on the capabilities of LLMs as recommenders.<n>We analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results.
arXiv Detail & Related papers (2024-01-10T08:28:56Z) - Unleashing the potential of prompt engineering for large language models [1.6006550105523192]
Review explores the pivotal role of prompt engineering in unleashing the capabilities of Large Language Models (LLMs)<n>Examines both foundational and advanced methodologies of prompt engineering, including techniques such as self-consistency, chain-of-thought, and generated knowledge.<n>Discusses the aspect of AI security, particularly adversarial attacks that exploit vulnerabilities in prompt engineering.
arXiv Detail & Related papers (2023-10-23T09:15:18Z)
- Advancing Requirements Engineering through Generative AI: Assessing the Role of LLMs
Large-language models (LLMs) have shown significant promise in diverse domains, including natural language processing, code generation, and program understanding.
This chapter explores the potential of LLMs in driving Requirements Engineering processes, aiming to improve the efficiency and accuracy of requirements-related tasks.
arXiv Detail & Related papers (2023-10-21T11:29:31Z)
- PRISMA-DFLLM: An Extension of PRISMA for Systematic Literature Reviews using Domain-specific Finetuned Large Language Models
This paper proposes an AI-enabled methodological framework that combines the power of Large Language Models (LLMs) with the rigorous reporting guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).
By finetuning LLMs on domain-specific academic papers that have been selected as a result of a rigorous SLR process, the proposed PRISMA-DFLLM reporting guidelines offer the potential to achieve greater efficiency, reusability and scalability.
arXiv Detail & Related papers (2023-06-15T02:52:50Z)
- How Can Recommender Systems Benefit from Large Language Models: A Survey
Large language models (LLMs) have shown impressive general intelligence and human-like capabilities.
We conduct a comprehensive survey on this research direction from the perspective of the whole pipeline in real-world recommender systems.
arXiv Detail & Related papers (2023-06-09T11:31:50Z)
- A Comprehensive Survey on Relation Extraction: Recent Advances and New Frontiers
Relation extraction (RE) involves identifying the relations between entities from underlying content.
Deep neural networks have dominated the field of RE and made noticeable progress.
This survey is expected to facilitate researchers' collaborative efforts to address the challenges of real-world RE systems.
arXiv Detail & Related papers (2023-06-03T08:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.