Requirements Quality Assurance in Industry: Why, What and How?
- URL: http://arxiv.org/abs/2308.12825v1
- Date: Thu, 24 Aug 2023 14:31:52 GMT
- Title: Requirements Quality Assurance in Industry: Why, What and How?
- Authors: Michael Unterkalmsteiner, Tony Gorschek
- Abstract summary: We propose a taxonomy of requirements quality assurance complexity that characterizes cognitive load of verifying a quality aspect from the human perspective.
Once this taxonomy is realized and validated, it can serve as the basis for a decision framework of automated requirements quality assurance support.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Context and Motivation: Natural language is the most common form to specify
requirements in industry. The quality of the specification depends on the
capability of the writer to formulate requirements aimed at different
stakeholders: they are an expression of the customer's needs that are used by
analysts, designers and testers. Given this central role of requirements as a
means to communicate intention, assuring their quality is essential to reduce
misunderstandings that lead to potential waste. Problem: Quality assurance of
requirement specifications is largely a manual effort that requires expertise
and domain knowledge. However, this demanding cognitive process is also
congested by trivial quality issues that should not occur in the first place.
Principal ideas: We propose a taxonomy of requirements quality assurance
complexity that characterizes cognitive load of verifying a quality aspect from
the human perspective, and automation complexity and accuracy from the machine
perspective. Contribution: Once this taxonomy is realized and validated, it can
serve as the basis for a decision framework of automated requirements quality
assurance support.
Related papers
- AI-Generated Image Quality Assessment Based on Task-Specific Prompt and Multi-Granularity Similarity
We propose a novel quality assessment method for AIGIs named TSP-MGS.
It designs task-specific prompts and measures multi-granularity similarity between AIGIs and the prompts.
Experiments on the commonly used AGIQA-1K and AGIQA-3K benchmarks demonstrate the superiority of the proposed TSP-MGS.
arXiv Detail & Related papers (2024-11-25T04:47:53Z)
- Q-Ground: Image Quality Grounding with Large Multi-modality Models
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing
Large language models (LLMs) excel in most NLP tasks but also require expensive cloud servers for deployment due to their size.
We propose a hybrid inference approach which combines their respective strengths to save cost and maintain quality.
In experiments our approach allows us to make up to 40% fewer calls to the large model, with no drop in response quality.
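The routing idea summarized above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the router class, the toy models, the difficulty scorer, and the threshold value are all assumptions made for demonstration.

```python
# Hypothetical sketch of quality-aware query routing between a small and a
# large model. All names and the length-based difficulty heuristic are
# illustrative assumptions, not the paper's actual method.

from dataclasses import dataclass
from typing import Callable

@dataclass
class HybridRouter:
    small_model: Callable[[str], str]   # cheap model for easy queries
    large_model: Callable[[str], str]   # expensive model for hard queries
    difficulty: Callable[[str], float]  # score in [0, 1]; higher = harder
    threshold: float = 0.5              # tune to trade cost against quality

    def answer(self, query: str) -> str:
        # Route easy queries to the cheap model; escalate hard ones.
        if self.difficulty(query) < self.threshold:
            return self.small_model(query)
        return self.large_model(query)

# Toy usage: pretend longer queries are "harder".
router = HybridRouter(
    small_model=lambda q: f"small:{q}",
    large_model=lambda q: f"large:{q}",
    difficulty=lambda q: min(len(q) / 100, 1.0),
)
print(router.answer("Hi"))  # short query routed to the small model
```

Lowering the threshold sends more traffic to the large model (higher quality, higher cost); raising it does the opposite, which is the cost/quality trade-off the paper explores.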
arXiv Detail & Related papers (2024-04-22T23:06:42Z)
- Identifying relevant Factors of Requirements Quality: an industrial Case Study
We conduct a case study considering data from both interview transcripts and issue reports to identify relevant factors of requirements quality.
The results contribute empirical evidence that (1) strengthens existing requirements engineering theories and (2) advances industry-relevant requirements quality research.
arXiv Detail & Related papers (2024-02-01T13:45:06Z)
- Quality Requirements for Code: On the Untapped Potential in Maintainability Specifications
This position paper proposes a synergistic approach, combining code-oriented research with Requirements Engineering expertise, to create meaningful industrial impact.
Preliminary findings indicate that the established QUPER model, designed for setting quality targets, does not adequately address the unique aspects of maintainability.
arXiv Detail & Related papers (2024-01-19T17:29:12Z)
- Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution
This paper defines a new task of Knowledge-aware Language Model Attribution (KaLMA)
First, we extend attribution source from unstructured texts to Knowledge Graph (KG), whose rich structures benefit both the attribution performance and working scenarios.
Second, we propose a new "Conscious Incompetence" setting considering the incomplete knowledge repository.
Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text citation alignment.
arXiv Detail & Related papers (2023-10-09T11:45:59Z)
- Requirements Quality Research: a harmonized Theory, Evaluation, and Roadmap
High-quality requirements minimize the risk of propagating defects to later stages of the software development life cycle.
This requires a clear definition and understanding of requirements quality.
arXiv Detail & Related papers (2023-09-19T06:27:23Z)
- A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
We argue that it is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk.
This work aims to advance the field of Requirements Engineering for AI.
arXiv Detail & Related papers (2023-07-26T15:15:44Z)
- Revisiting the Performance-Explainability Trade-Off in Explainable Artificial Intelligence (XAI)
We argue that it is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk.
This work aims to advance the field of Requirements Engineering for AI.
arXiv Detail & Related papers (2023-07-26T15:07:40Z)
- Measuring Uncertainty in Translation Quality Evaluation (TQE)
This work carries out motivated research to correctly estimate the confidence intervals (Brown et al., 2001) depending on the sample size of the translated text.
The methodology applied in this work draws on Bernoulli Statistical Distribution Modelling (BSDM) and Monte Carlo Sampling Analysis (MCSA).
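The Bernoulli/Monte-Carlo idea can be illustrated with a short sketch: treat each sampled translation segment as a Bernoulli trial (erroneous or not) and bootstrap a confidence interval for the error rate. This is not the paper's code; the function, the sample size, and the error counts below are made-up illustrative numbers.

```python
# Illustrative sketch: Monte Carlo (bootstrap) confidence interval for a
# Bernoulli error rate. The inputs (12 errors out of 200 segments) are
# hypothetical, not data from the paper.

import random

def monte_carlo_ci(errors, n, trials=10_000, alpha=0.05, seed=0):
    """Estimate a (1 - alpha) confidence interval for a Bernoulli rate.

    errors: observed number of erroneous segments out of n sampled.
    """
    rng = random.Random(seed)
    p_hat = errors / n
    # Resample n Bernoulli(p_hat) draws many times and record each rate.
    rates = sorted(
        sum(rng.random() < p_hat for _ in range(n)) / n
        for _ in range(trials)
    )
    lo = rates[int(trials * (alpha / 2))]
    hi = rates[int(trials * (1 - alpha / 2))]
    return p_hat, (lo, hi)

p, (lo, hi) = monte_carlo_ci(errors=12, n=200)
print(f"error rate {p:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

As the paper's motivation suggests, the interval width depends directly on the sample size `n`: quadrupling the number of sampled segments roughly halves the interval.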
arXiv Detail & Related papers (2021-11-15T12:09:08Z)
- AI Techniques for Software Requirements Prioritization
The prioritization approaches discussed in this paper are based on different Artificial Intelligence (AI) techniques that can help to improve the overall quality of requirements prioritization processes.
arXiv Detail & Related papers (2021-08-02T12:43:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences.