Requirements' Characteristics: How do they Impact on Project Budget in a
Systems Engineering Context?
- URL: http://arxiv.org/abs/2310.01395v1
- Date: Mon, 2 Oct 2023 17:53:54 GMT
- Title: Requirements' Characteristics: How do they Impact on Project Budget in a
Systems Engineering Context?
- Authors: Panagiota Chatzipetrou, Michael Unterkalmsteiner, Tony Gorschek
- Abstract summary: Controlling and assuring the quality of natural language requirements (NLRs) is challenging.
We investigated with the Swedish Transportation Agency (STA) to what extent the characteristics of requirements had an influence on change requests and budget changes in the project.
- Score: 3.2872885101161318
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Background: Requirements engineering is of principal importance when
starting a new project. However, the number of requirements involved in a
single project can reach into the thousands. Controlling and assuring the quality
of natural language requirements (NLRs), in these quantities, is challenging.
Aims: In a field study, we investigated with the Swedish Transportation Agency
(STA) to what extent the characteristics of requirements had an influence on
change requests and budget changes in the project. Method: We chose the
following models to characterize system requirements formulated in natural
language: Concern-based Model of Requirements (CMR), Requirements Abstractions
Model (RAM) and Software-Hardware model (SHM). The classification of the NLRs
was conducted by the three authors. The robust statistical measure Fleiss'
Kappa was used to verify the reliability of the results. We used descriptive
statistics, contingency tables, results from the Chi-Square test of association
along with post hoc tests. Finally, correspondence analysis, a multivariate
statistical technique, was used to display the set of requirements in
two-dimensional graphical form. Results: The results
showed that software requirements are associated with lower budget costs than
hardware requirements. Moreover, software requirements tend to stay open for a
longer period, indicating that they are "harder" to handle. Finally, more
discussion or interaction on a change request can lower its actual estimated
cost. Conclusions: The results point to a need to further investigate why
software requirements are treated differently from hardware requirements, to
interview the project managers, to better understand how those requirements are
formulated, and to propose effective ways of software management.
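To make the analysis recipe in the abstract concrete, the sketch below shows, with hypothetical labels and counts rather than the study's data, how the three named techniques can be computed in Python: Fleiss' Kappa via statsmodels, the Chi-Square test of association via scipy, and correspondence analysis done by hand through an SVD of the standardised residuals.

```python
# Minimal sketch of the abstract's analysis steps; all data here is hypothetical.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# 1) Inter-rater agreement: three raters assign each requirement (one row per
#    requirement) to a category, e.g. 0 = software, 1 = hardware.
labels = np.array([
    [0, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
    [0, 0, 0],
])
counts, _ = aggregate_raters(labels)              # requirements x categories counts
print("Fleiss' Kappa:", fleiss_kappa(counts))

# 2) Chi-Square test of association on a contingency table of requirement
#    class (rows) versus change-request outcome (columns).
table = np.array([
    [30, 45, 25],   # e.g. software requirements
    [10, 20, 40],   # e.g. hardware requirements
    [15, 25, 20],   # e.g. mixed / other
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# 3) Correspondence analysis of the same table via an SVD of the standardised
#    residuals; rows and columns get principal coordinates that can be plotted
#    together in two dimensions.
P = table / table.sum()                           # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)               # row and column masses
S = np.diag(r**-0.5) @ (P - np.outer(r, c)) @ np.diag(c**-0.5)
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (np.diag(r**-0.5) @ U) * sigma       # one point per requirement class
col_coords = (np.diag(c**-0.5) @ Vt.T) * sigma    # one point per outcome category
print("row coordinates:\n", row_coords[:, :2])
print("column coordinates:\n", col_coords[:, :2])
```

Plotting the first two row and column coordinates on the same axes yields the kind of two-dimensional map the abstract refers to.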
Related papers
- AI based Multiagent Approach for Requirements Elicitation and Analysis [3.9422957660677476]
This study empirically investigates the effectiveness of utilizing Large Language Models (LLMs) to automate requirements analysis tasks.
We deployed four models, namely GPT-3.5, GPT-4 Omni, LLaMA3-70, and Mixtral-8B, and conducted experiments to analyze requirements on four real-world projects.
Preliminary results indicate notable variations in task completion among the models.
arXiv Detail & Related papers (2024-08-18T07:23:12Z)
- The Art of Saying No: Contextual Noncompliance in Language Models [123.383993700586]
We introduce a comprehensive taxonomy of contextual noncompliance describing when and how models should not comply with user requests.
Our taxonomy spans a wide range of categories including incomplete, unsupported, indeterminate, and humanizing requests.
To test noncompliance capabilities of language models, we use this taxonomy to develop a new evaluation suite of 1000 noncompliance prompts.
arXiv Detail & Related papers (2024-07-02T07:12:51Z)
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z)
- PiShield: A PyTorch Package for Learning with Requirements [49.03568411956408]
Deep learning models often struggle to meet safety requirements for their outputs.
In this paper, we introduce PiShield, the first package ever allowing for the integration of the requirements into the neural networks' topology.
arXiv Detail & Related papers (2024-02-28T12:24:27Z)
- Stability prediction of the software requirements specification [0.0]
This work presents the Bayesian network Requisites that predicts whether the requirements specification documents have to be revised.
We show how to validate Requisites by means of metrics obtained from a large complex software project.
arXiv Detail & Related papers (2024-01-23T10:40:29Z)
- Status Quo and Problems of Requirements Engineering for Machine Learning: Results from an International Survey [7.164324501049983]
Requirements Engineering (RE) can help address many problems when engineering Machine Learning-enabled systems.
We conducted a survey to gather practitioner insights into the status quo and problems of RE in ML-enabled systems.
We found significant differences in RE practices within ML projects.
arXiv Detail & Related papers (2023-10-10T15:53:50Z)
- Simultaneous Machine Translation with Large Language Models [51.470478122113356]
We investigate the possibility of applying Large Language Models to SimulMT tasks.
We conducted experiments using the Llama2-7b-chat model on nine different languages from the MUST-C dataset.
The results show that the LLM outperforms dedicated MT models in terms of BLEU and LAAL metrics.
arXiv Detail & Related papers (2023-09-13T04:06:47Z)
- Technical Report on Neural Language Models and Few-Shot Learning for Systematic Requirements Processing in MDSE [1.6286277560322266]
This paper is based on the analysis of an open-source set of automotive requirements.
We derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality.
arXiv Detail & Related papers (2022-11-16T18:06:25Z)
- ROAD-R: The Autonomous Driving Dataset with Logical Requirements [54.608762221119406]
We introduce the ROad event Awareness dataset with logical Requirements (ROAD-R).
ROAD-R is the first publicly available dataset for autonomous driving with requirements expressed as logical constraints.
We show that it is possible to exploit them to create models that (i) have a better performance, and (ii) are guaranteed to be compliant with the requirements themselves.
arXiv Detail & Related papers (2022-10-04T13:22:19Z)
- Wizard of Search Engine: Access to Information Through Conversations with Search Engines [58.53420685514819]
We make efforts to facilitate research on CIS from three aspects.
We formulate a pipeline for CIS with six sub-tasks: intent detection (ID), keyphrase extraction (KE), action prediction (AP), query selection (QS), passage selection (PS), and response generation (RG).
We release a benchmark dataset, called wizard of search engine (WISE), which allows for comprehensive and in-depth research on all aspects of CIS.
arXiv Detail & Related papers (2021-05-18T06:35:36Z)
- Deep Learning Models in Software Requirements Engineering [0.0]
We have applied the vanilla sentence autoencoder to the sentence generation task and evaluated its performance.
The generated sentences are not plausible English and contain only a few meaningful words.
We believe that applying the model to a larger dataset may produce significantly better results.
arXiv Detail & Related papers (2021-05-17T12:27:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.