A Live Extensible Ontology of Quality Factors for Textual Requirements
- URL: http://arxiv.org/abs/2206.05959v2
- Date: Tue, 07 Jan 2025 08:27:07 GMT
- Title: A Live Extensible Ontology of Quality Factors for Textual Requirements
- Authors: Julian Frattini, Lloyd Montgomery, Jannik Fischbach, Michael Unterkalmsteiner, Daniel Mendez, Davide Fucci
- Abstract summary: We propose an ontology of quality factors for textual requirements.
It includes a structure framing quality factors and related elements, as well as a central repository and web interface.
We invite fellow researchers to a joint community effort to complete and maintain this knowledge repository.
- Score: 3.91424340393661
- License:
- Abstract: Quality factors like passive voice or sentence length are commonly used in research and practice to evaluate the quality of natural language requirements since they indicate defects in requirements artifacts that potentially propagate to later stages in the development life cycle. However, as a research community, we still lack a holistic perspective on quality factors. This inhibits not only a comprehensive understanding of the existing body of knowledge but also the effective use and evolution of these factors. To this end, we propose an ontology of quality factors for textual requirements, which includes (1) a structure framing quality factors and related elements and (2) a central repository and web interface making these factors publicly accessible and usable. We contribute the first version of both by applying a rigorous ontology development method to 105 eligible primary studies and constructing a first version of the repository and interface. We illustrate the usability of the ontology and invite fellow researchers to a joint community effort to complete and maintain this knowledge repository. We envision our ontology to reflect the community's harmonized perception of requirements quality factors, guide reporting of new quality factors, and provide central access to the current body of knowledge.
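To make the notion of a quality factor concrete, the following minimal Python sketch measures two of the factors named in the abstract, sentence length and passive voice, for a single requirement sentence. The length threshold, the regex heuristic, and the function name `assess_requirement` are illustrative assumptions and are not taken from the paper's ontology or its repository.

```python
# Illustrative sketch only: naive checks for two common quality factors
# (sentence length and passive voice) on one textual requirement.
# The heuristics below are assumptions for demonstration purposes.

import re

# Hypothetical threshold: flag sentences longer than 30 words.
MAX_SENTENCE_LENGTH = 30

# Very rough passive-voice heuristic: a form of "to be" followed by a word
# ending in "-ed" or "-en" (misses irregular participles, may give false positives).
PASSIVE_PATTERN = re.compile(
    r"\b(is|are|was|were|be|been|being)\s+\w+(ed|en)\b", re.IGNORECASE
)

def assess_requirement(requirement: str) -> dict:
    """Return the two quality-factor measurements for one requirement sentence."""
    words = requirement.split()
    return {
        "sentence_length": len(words),
        "exceeds_length_threshold": len(words) > MAX_SENTENCE_LENGTH,
        "contains_passive_voice": bool(PASSIVE_PATTERN.search(requirement)),
    }

if __name__ == "__main__":
    req = "The report shall be generated by the system within five seconds."
    print(assess_requirement(req))
    # e.g. {'sentence_length': 11, 'exceeds_length_threshold': False,
    #       'contains_passive_voice': True}
```

In the proposed ontology, such measurements would be instances of quality factors; how a factor is operationalized in a tool is left to the individual study or practitioner.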
Related papers
- ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models [53.00812898384698]
We argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking.
We highlight how cognitive biases can conflate fluent information and truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert-scale ratings.
We propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.
arXiv Detail & Related papers (2024-05-28T22:45:28Z)
- Challenges and Opportunities in Text Generation Explainability [12.089513278445704]
This paper outlines 17 challenges categorized into three groups that arise during the development and assessment of explainability methods.
These challenges encompass issues concerning tokenization, defining explanation similarity, determining token importance and prediction change metrics, the level of human intervention required, and the creation of suitable test datasets.
The paper illustrates how these challenges can be intertwined, showcasing new opportunities for the community.
arXiv Detail & Related papers (2024-05-14T09:44:52Z)
- PCQA: A Strong Baseline for AIGC Quality Assessment Based on Prompt Condition [4.125007507808684]
This study proposes an effective quality assessment (QA) framework for AI-generated content (AIGC).
First, we propose a hybrid prompt encoding method based on a dual-source CLIP (Contrastive Language-Image Pre-Training) text encoder.
Second, we propose an ensemble-based feature mixer module to effectively blend the adapted prompt and vision features.
arXiv Detail & Related papers (2024-04-20T07:05:45Z)
- Identifying relevant Factors of Requirements Quality: an industrial Case Study [0.5603839226601395]
We conduct a case study considering data from both interview transcripts and issue reports to identify relevant factors of requirements quality.
The results contribute empirical evidence that (1) strengthens existing requirements engineering theories and (2) advances industry-relevant requirements quality research.
arXiv Detail & Related papers (2024-02-01T13:45:06Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution [48.86322922826514]
This paper defines a new task of Knowledge-aware Language Model Attribution (KaLMA).
First, we extend attribution source from unstructured texts to Knowledge Graph (KG), whose rich structures benefit both the attribution performance and working scenarios.
Second, we propose a new "Conscious Incompetence" setting considering the incomplete knowledge repository.
Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text citation alignment.
arXiv Detail & Related papers (2023-10-09T11:45:59Z)
- Requirements Quality Research: a harmonized Theory, Evaluation, and Roadmap [4.147594239309427]
High-quality requirements minimize the risk of propagating defects to later stages of the software development life cycle.
This requires a clear definition and understanding of requirements quality.
arXiv Detail & Related papers (2023-09-19T06:27:23Z)
- Requirements Quality Assurance in Industry: Why, What and How? [3.6142643912711794]
We propose a taxonomy of requirements quality assurance complexity that characterizes the cognitive load of verifying a quality aspect from the human perspective.
Once this taxonomy is realized and validated, it can serve as the basis for a decision framework of automated requirements quality assurance support.
arXiv Detail & Related papers (2023-08-24T14:31:52Z)
- Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
arXiv Detail & Related papers (2023-07-05T16:01:38Z)
- Adaptive Contextual Perception: How to Generalize to New Backgrounds and Ambiguous Objects [75.15563723169234]
We investigate how vision models adaptively use context for out-of-distribution generalization.
We show that models that excel in one setting tend to struggle in the other.
To replicate the generalization abilities of biological vision, computer vision models must have factorized object vs. background representations.
arXiv Detail & Related papers (2023-06-09T15:29:54Z)
- Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study [86.62171568318716]
Large generative language models such as GPT-2 are well-known for their ability to generate text.
We show that unsupervised predictors of "page quality" emerge, able to detect low-quality content without any training.
We conduct extensive qualitative and quantitative analysis over 500 million web articles, making this the largest-scale study ever conducted on the topic.
arXiv Detail & Related papers (2020-08-17T07:13:24Z)