Large Language Models (LLMs) for Requirements Engineering (RE): A Systematic Literature Review
- URL: http://arxiv.org/abs/2509.11446v1
- Date: Sun, 14 Sep 2025 21:45:01 GMT
- Title: Large Language Models (LLMs) for Requirements Engineering (RE): A Systematic Literature Review
- Authors: Mohammad Amin Zadenoori, Jacek Dąbrowski, Waad Alhoshan, Liping Zhao, Alessio Ferrari
- Abstract summary: The study categorizes the literature according to several dimensions, including publication trends, RE activities, prompting strategies, and evaluation methods. Most of the studies focus on using LLMs for requirements elicitation and validation, rather than defect detection and classification. Other artifacts are increasingly considered, including issues from issue tracking systems, regulations, and technical manuals.
- Score: 2.0061679654181392
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Language Models (LLMs) are finding applications in numerous domains, and Requirements Engineering (RE) is increasingly benefiting from their capabilities to assist with complex, language-intensive tasks. This paper presents a systematic literature review of 74 primary studies published between 2023 and 2024, examining how LLMs are being applied in RE. The study categorizes the literature according to several dimensions, including publication trends, RE activities, prompting strategies, and evaluation methods. Our findings indicate notable patterns, among which we observe substantial differences compared to previous works leveraging standard Natural Language Processing (NLP) techniques. Most of the studies focus on using LLMs for requirements elicitation and validation, rather than defect detection and classification, which were dominant in the past. Researchers have also broadened their focus and addressed novel tasks, e.g., test generation, exploring the integration of RE with other software engineering (SE) disciplines. Although requirements specifications remain the primary focus, other artifacts are increasingly considered, including issues from issue tracking systems, regulations, and technical manuals. The studies mostly rely on GPT-based models, and often use Zero-shot or Few-shot prompting. They are usually evaluated in controlled environments, with limited use in industry settings and limited integration in complex workflows. Our study outlines important future directions, such as leveraging the potential to expand the influence of RE in SE, exploring less-studied tasks, improving prompting methods, and testing in real-world environments. Our contribution also helps researchers and practitioners use LLMs more effectively in RE, by providing a list of identified tools leveraging LLMs for RE, as well as datasets.
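The abstract notes that most surveyed studies rely on Zero-shot or Few-shot prompting. As a minimal sketch only (the prompt wording and the `build_prompt` helper are hypothetical, not taken from any surveyed paper), the two strategies differ in whether labeled examples are included before the query:

```python
# Sketch of zero-shot vs. few-shot prompt construction for a typical
# RE task: classifying a requirement as functional or non-functional.
# The prompt wording is illustrative, not drawn from any surveyed study.

def build_prompt(requirement, examples=None):
    """Return a zero-shot prompt if `examples` is empty, few-shot otherwise."""
    lines = ["Classify the requirement as 'functional' or 'non-functional'."]
    for text, label in examples or []:
        # Few-shot: each labeled example is shown before the query.
        lines.append(f"Requirement: {text}\nLabel: {label}")
    # The unlabeled query comes last; the model completes the label.
    lines.append(f"Requirement: {requirement}\nLabel:")
    return "\n\n".join(lines)

zero_shot = build_prompt("The system shall encrypt stored passwords.")
few_shot = build_prompt(
    "The system shall encrypt stored passwords.",
    examples=[
        ("The UI shall load within 2 seconds.", "non-functional"),
        ("The system shall export reports as PDF.", "functional"),
    ],
)
```

The resulting string would be sent to an LLM; only the presence of in-context examples distinguishes the two prompting strategies the review tallies.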
Related papers
- SoK: Potentials and Challenges of Large Language Models for Reverse Engineering [5.603029122508333]
Reverse Engineering (RE) is central to software security, enabling tasks such as vulnerability discovery and malware analysis. Earlier advances in deep learning began to automate parts of RE, particularly malware detection and vulnerability classification. More recently, a rapidly growing body of work has applied Large Language Models (LLMs) to similar purposes.
arXiv Detail & Related papers (2025-09-26T03:26:51Z)
- Prompt Engineering for Requirements Engineering: A Literature Review and Roadmap [7.63638387750336]
We present the first roadmap-oriented systematic literature review of Prompt Engineering for RE (PE4RE). To bring order to a fragmented landscape, we propose a hybrid taxonomy that links technique-oriented patterns to task-oriented RE roles.
arXiv Detail & Related papers (2025-07-10T12:02:56Z)
- Prompt Engineering Guidelines for Using Large Language Models in Requirements Engineering [2.867517731896504]
Generative AI models like Large Language Models (LLMs) have demonstrated their utility across various activities, including within Requirements Engineering (RE). Ensuring the quality and accuracy of LLM-generated output is critical, with prompt engineering serving as a key technique to guide model responses. Existing literature provides limited guidance on how prompt engineering can be leveraged specifically for RE activities.
arXiv Detail & Related papers (2025-07-04T09:13:50Z)
- Can LLMs Generate Tabular Summaries of Science Papers? Rethinking the Evaluation Protocol [83.90769864167301]
Literature review tables are essential for summarizing and comparing collections of scientific papers. We explore the task of generating tables that best fulfill a user's informational needs given a collection of scientific papers. Our contributions focus on three key challenges encountered in real-world use: (i) user prompts are often under-specified; (ii) retrieved candidate papers frequently contain irrelevant content; and (iii) task evaluation should move beyond shallow text-similarity techniques.
arXiv Detail & Related papers (2025-04-14T14:52:28Z)
- A Survey of Small Language Models [104.80308007044634]
Small Language Models (SLMs) have become increasingly important due to their efficiency and their ability to perform various language tasks with minimal computational resources.
We present a comprehensive survey on SLMs, focusing on their architectures, training techniques, and model compression techniques.
arXiv Detail & Related papers (2024-10-25T23:52:28Z)
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains under consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models [71.25225058845324]
Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation.
Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge.
RA-LLMs have emerged to harness external and authoritative knowledge bases, rather than relying on the model's internal knowledge.
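The RA-LLM pattern summarized above can be sketched in a few lines: retrieve the passages most similar to the query, then prepend them to the prompt so the model answers from external knowledge rather than its internal parameters. This is a bag-of-words toy under stated assumptions; real RAG systems use dense embeddings and a vector index, and the final prompt would be passed to an actual model.

```python
# Minimal retrieval-augmented prompting sketch: score documents against
# the query with bag-of-words cosine similarity, then build an augmented
# prompt. Real RAG systems use dense embeddings and a vector store.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def augmented_prompt(query, docs):
    """Prepend retrieved context so the model answers from external knowledge."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "RAG combines retrieval with generation.",
    "Bananas are rich in potassium.",
    "Retrieval grounds LLM answers in external knowledge.",
]
prompt = augmented_prompt("How does retrieval help LLM generation?", docs)
```

With this toy scorer the off-topic document is ranked last and excluded, which is the whole point of the pattern: only authoritative, relevant context reaches the model.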
arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- A Systematic Literature Review on Large Language Models for Automated Program Repair [21.140070763968634]
It is challenging for researchers to understand the current achievements, challenges, and potential opportunities. This work provides the first systematic literature review to summarize the applications of Large Language Models in APR between 2020 and 2025.
arXiv Detail & Related papers (2024-05-02T16:55:03Z)
- The Efficiency Spectrum of Large Language Models: An Algorithmic Survey [54.19942426544731]
The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains.
This paper examines the multi-faceted dimensions of efficiency essential for the end-to-end algorithmic development of LLMs.
arXiv Detail & Related papers (2023-12-01T16:00:25Z)
- Large Language Models for Software Engineering: A Systematic Literature Review [34.12458948051519]
Large Language Models (LLMs) have significantly impacted numerous domains, including Software Engineering (SE).
We select and analyze 395 research papers from January 2017 to January 2024 to answer four key research questions (RQs).
From the answers to these RQs, we discuss the current state of the art and trends, identify gaps in existing research, and flag promising areas for future study.
arXiv Detail & Related papers (2023-08-21T10:37:49Z)
- Information Extraction in Low-Resource Scenarios: Survey and Perspective [56.5556523013924]
Information Extraction seeks to derive structured information from unstructured texts.
This paper presents a review of neural approaches to low-resource IE from traditional and LLM-based perspectives.
arXiv Detail & Related papers (2022-02-16T13:44:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.