Exploring the Use of LLMs for Requirements Specification in an IT Consulting Company
- URL: http://arxiv.org/abs/2507.19113v1
- Date: Fri, 25 Jul 2025 09:49:37 GMT
- Title: Exploring the Use of LLMs for Requirements Specification in an IT Consulting Company
- Authors: Liliana Pasquale, Azzurra Ragone, Emanuele Piemontese, Armin Amiri Darban
- Abstract summary: This paper reports our experience using large language models (LLMs) to automate the requirements specification process. We show that LLMs can help automate and standardize the requirements specification, reducing time and human effort. However, the quality of LLM-generated FDS highly depends on inputs and often requires human revision.
- Score: 0.39563752273706504
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In practice, requirements specification remains a critical challenge. The knowledge necessary to generate a specification can often be fragmented across diverse sources (e.g., meeting minutes, emails, and high-level product descriptions), making the process cumbersome and time-consuming. In this paper, we report our experience using large language models (LLMs) in an IT consulting company to automate the requirements specification process. In this company, requirements are specified using a Functional Design Specification (FDS), a document that outlines the functional requirements and features of a system, application, or process. We provide LLMs with a summary of the requirements elicitation documents and FDS templates, prompting them to generate Epic FDS (including high-level product descriptions) and user stories, which are subsequently compiled into a complete FDS document. We compared the correctness and quality of the FDS generated by three state-of-the-art LLMs against those produced by human analysts. Our results show that LLMs can help automate and standardize the requirements specification, reducing time and human effort. However, the quality of LLM-generated FDS highly depends on inputs and often requires human revision. Thus, we advocate for a synergistic approach in which an LLM serves as an effective drafting tool while human analysts provide the critical contextual and technical oversight necessary for high-quality requirements engineering (RE) documentation.
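The abstract describes a three-step drafting pipeline: summarize the elicitation documents, prompt an LLM with that summary plus an FDS template to produce an Epic FDS and user stories, then compile both into a complete FDS document for human revision. A minimal sketch of that flow is below; the function names, prompt wording, and section headings are illustrative assumptions, not the authors' actual tooling, and the LLM call itself is replaced with placeholder output.

```python
def build_epic_prompt(elicitation_summary: str, fds_template: str) -> str:
    """Combine the elicitation summary with the FDS template into one LLM prompt."""
    return (
        "You are a requirements analyst. Using the elicitation summary below,\n"
        "fill in the Epic FDS template with a high-level product description.\n\n"
        f"## Elicitation summary\n{elicitation_summary}\n\n"
        f"## FDS template\n{fds_template}\n"
    )

def compile_fds(epic_fds: str, user_stories: list[str]) -> str:
    """Assemble the Epic FDS and the user stories into a single FDS draft."""
    stories = "\n".join(f"- {s}" for s in user_stories)
    return f"{epic_fds}\n\n## User stories\n{stories}\n"

# Placeholder LLM outputs; a real pipeline would send build_epic_prompt(...)
# to an LLM and route the compiled draft to a human analyst for revision.
epic = "Epic: Invoice approval workflow for the finance department."
stories = ["As a clerk, I can submit an invoice for approval."]
draft = compile_fds(epic, stories)
```

The key point from the paper survives even in this toy form: the LLM produces the draft sections, while the compilation and the final quality check remain under human control.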
Related papers
- AGENTIF: Benchmarking Instruction Following of Large Language Models in Agentic Scenarios [51.46347732659174]
Large Language Models (LLMs) have demonstrated advanced capabilities in real-world agentic applications. AgentIF is the first benchmark for systematically evaluating LLM instruction-following ability in agentic scenarios.
arXiv Detail & Related papers (2025-05-22T17:31:10Z)
- Extracting Formal Specifications from Documents Using LLMs for Automated Testing [11.129512305353055]
The main approach to defining formal specifications is manual analysis of software documents. System updates further increase the human labor cost of maintaining a corresponding formal specification. Recent advances in Large Language Models have demonstrated promising capabilities in natural language understanding.
arXiv Detail & Related papers (2025-04-02T01:58:11Z)
- Analysis of LLMs vs Human Experts in Requirements Engineering [0.0]
Research on applying Large Language Models (LLMs) to software development has largely focused on code generation. This study compares an LLM's ability to elicit the requirements of a software system against that of a human expert in a time-boxed and prompt-boxed study.
arXiv Detail & Related papers (2025-01-31T16:55:17Z)
- Exploring LLMs for Verifying Technical System Specifications Against Requirements [41.19948826527649]
The field of knowledge-based requirements engineering (KBRE) aims to support engineers by providing knowledge to assist in the elicitation, validation, and management of system requirements.
The advent of large language models (LLMs) opens new opportunities in the field of KBRE.
This work experimentally investigates the potential of LLMs in requirements verification.
arXiv Detail & Related papers (2024-11-18T13:59:29Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- DOCBENCH: A Benchmark for Evaluating LLM-based Document Reading Systems [99.17123445211115]
We introduce DocBench, a benchmark to evaluate large language model (LLM)-based document reading systems.
Our benchmark involves the recruitment of human annotators and the generation of synthetic questions.
It includes 229 real documents and 1,102 questions, spanning across five different domains and four major types of questions.
arXiv Detail & Related papers (2024-07-15T13:17:42Z)
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
- Efficient Prompting for LLM-based Generative Internet of Things [88.84327500311464]
Large language models (LLMs) have demonstrated remarkable capacities on various tasks, and integrating the capacities of LLMs into the Internet of Things (IoT) applications has drawn much research attention recently.
Due to security concerns, many institutions avoid accessing state-of-the-art commercial LLM services, requiring the deployment and utilization of open-source LLMs in a local network setting.
In this study, we propose an LLM-based Generative IoT (GIoT) system deployed in a local network setting.
arXiv Detail & Related papers (2024-06-14T19:24:00Z)
- Requirements are All You Need: From Requirements to Code with LLMs [0.0]
Large language models (LLMs) can be applied to software engineering tasks.
This paper introduces a tailored LLM for automating the generation of code snippets from well-structured requirements documents.
We demonstrate the LLM's proficiency in comprehending intricate user requirements and producing robust design and code solutions.
arXiv Detail & Related papers (2024-06-14T14:57:35Z)
- FOFO: A Benchmark to Evaluate LLMs' Format-Following Capability [70.84333325049123]
This paper presents FoFo, a pioneering benchmark for evaluating large language models' (LLMs) ability to follow complex, domain-specific formats.
arXiv Detail & Related papers (2024-02-28T19:23:27Z)
- Towards Generating Executable Metamorphic Relations Using Large Language Models [46.26208489175692]
We propose an approach for automatically deriving executable MRs from requirements using large language models (LLMs).
To assess the feasibility of our approach, we conducted a questionnaire-based survey in collaboration with Siemens Industry Software.
arXiv Detail & Related papers (2024-01-30T13:52:47Z)
- Validation of Rigorous Requirements Specifications and Document Automation with the ITLingo RSL Language [0.0]
The ITLingo initiative has introduced a requirements specification language named RSL to enhance the rigor and consistency of technical documentation.
This paper reviews existing research and tools in the fields of requirements validation and document automation.
We propose to extend RSL with validation of specifications based on customized checks, and on linguistic rules dynamically defined in the RSL itself.
arXiv Detail & Related papers (2023-12-17T21:39:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content above (including all information) and is not responsible for any consequences.