Requirements-Based Test Generation: A Comprehensive Survey
- URL: http://arxiv.org/abs/2505.02015v2
- Date: Wed, 07 May 2025 01:20:18 GMT
- Title: Requirements-Based Test Generation: A Comprehensive Survey
- Authors: Zhenzhen Yang, Rubing Huang, Chenhui Cui, Nan Niu, Dave Towey,
- Abstract summary: Requirements-based test generation (RBTG) constructs test cases based on specified requirements. This paper provides a comprehensive survey on RBTG, categorizing requirement types, classifying approaches, investigating types of test cases, summarizing available tools, and analyzing experimental evaluations.
- Score: 7.025861696324118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As an important way of assuring software quality, software testing generates and executes test cases to identify software failures. Many strategies have been proposed to guide test-case generation, such as source-code-based approaches and methods based on bug reports. Requirements-based test generation (RBTG) constructs test cases based on specified requirements, aligning with user needs and expectations, without requiring access to the source code. Since its introduction in 1994, there have been many contributions to the development of RBTG, including various approaches, implementations, tools, assessment and evaluation methods, and applications. This paper provides a comprehensive survey on RBTG, categorizing requirement types, classifying approaches, investigating types of test cases, summarizing available tools, and analyzing experimental evaluations. This paper also summarizes the domains and industrial applications of RBTG, and discusses some open research challenges and potential future work.
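To make the core idea concrete, here is a minimal, hypothetical sketch of one classic RBTG technique, boundary-value analysis: test cases are derived directly from a stated input constraint, with no access to the source code. The `Requirement` structure and field names are illustrative, not taken from any specific tool surveyed in the paper.

```python
# Minimal sketch of requirements-based test generation (RBTG):
# derive boundary-value test cases from a structured requirement,
# without any access to the source code under test.
# The Requirement dataclass and its fields are illustrative only.
from dataclasses import dataclass


@dataclass
class Requirement:
    rid: str    # requirement identifier
    field: str  # input the requirement constrains
    low: int    # inclusive lower bound
    high: int   # inclusive upper bound


def boundary_value_tests(req: Requirement) -> list[dict]:
    """Classic boundary-value analysis: probe each bound and its neighbors."""
    values = [req.low - 1, req.low, req.low + 1,
              req.high - 1, req.high, req.high + 1]
    return [
        {
            "test_id": f"{req.rid}-T{i}",
            "input": {req.field: v},
            "expected": "accept" if req.low <= v <= req.high else "reject",
        }
        for i, v in enumerate(values, start=1)
    ]


if __name__ == "__main__":
    # e.g. "REQ-7: the applicant's age must be between 18 and 65 inclusive"
    req = Requirement(rid="REQ-7", field="age", low=18, high=65)
    for tc in boundary_value_tests(req):
        print(tc)
```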
Related papers
- Coverage-Guided Testing for Deep Learning Models: A Comprehensive Survey [4.797322346441166]
Deep Learning (DL) models are increasingly applied in safety-critical domains. Coverage-guided testing (CGT) has gained prominence as a framework for identifying erroneous or unexpected model behaviors. Existing CGT studies remain methodologically fragmented, limiting the understanding of current advances and emerging trends.
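As an illustration of what such coverage criteria measure, below is a hedged sketch of neuron coverage, one widely used CGT criterion; the random activation matrix stands in for real layer outputs, and the threshold is an arbitrary choice.

```python
# Hedged sketch of neuron coverage, a common coverage-guided testing
# (CGT) criterion for DL models: the fraction of neurons whose
# activation exceeds a threshold on at least one test input.
# The random "activations" below stand in for real layer outputs.
import numpy as np


def neuron_coverage(activations: np.ndarray, threshold: float = 0.5) -> float:
    """activations: (num_inputs, num_neurons) post-activation values."""
    covered = (activations > threshold).any(axis=0)  # per-neuron: ever fired?
    return float(covered.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.random((32, 128))  # 32 test inputs, 128 neurons
    print(f"neuron coverage: {neuron_coverage(acts):.2%}")
```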
arXiv Detail & Related papers (2025-07-01T07:12:58Z)
- AI-Driven Tools in Modern Software Quality Assurance: An Assessment of Benefits, Challenges, and Future Directions [0.0]
The research aims to assess the benefits, challenges, and prospects of integrating modern AI-oriented tools into quality assurance processes. It demonstrates AI's transformative potential for QA but highlights the importance of a strategic approach to implementing these technologies.
arXiv Detail & Related papers (2025-06-19T20:22:47Z)
- Automatic High-Level Test Case Generation using Large Language Models [1.8136446064778242]
The primary challenge is not writing test scripts but aligning testing efforts with business requirements. We constructed a use-case dataset to train/fine-tune models for generating high-level test cases. Our proactive approach strengthens requirement-testing alignment and facilitates early test-case generation.
arXiv Detail & Related papers (2025-03-23T09:14:41Z)
- Requirements-Driven Automated Software Testing: A Systematic Review [13.67495800498868]
This study synthesizes the current state of REDAST research, highlights trends, and proposes future directions. This systematic literature review (SLR) explores the landscape of REDAST by analyzing requirements input, transformation techniques, test outcomes, evaluation methods, and existing limitations.
arXiv Detail & Related papers (2025-02-25T23:13:09Z)
- Adaptive Testing for LLM-Based Applications: A Diversity-based Approach [15.33985438101206]
We show that diversity-based testing techniques, such as Adaptive Random Testing (ART), can be effectively applied to the testing of prompt templates. Our results, obtained using various implementations that explore several string-based distances, confirm that our approach enables the discovery of failures with reduced testing budgets.
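As a rough illustration of the idea, the following sketch applies an ART-style max-min selection over prompt strings, using difflib's similarity ratio from the standard library as a stand-in for the string-based distances the paper explores; the prompt pool and candidate-set size are invented for the example.

```python
# Hedged sketch of diversity-based test selection in the spirit of
# Adaptive Random Testing (ART): among random candidate prompts, execute
# the one farthest (by a string distance) from everything already tested.
import random
from difflib import SequenceMatcher


def distance(a: str, b: str) -> float:
    # 1 - similarity ratio, a simple stand-in for a string distance
    return 1.0 - SequenceMatcher(None, a, b).ratio()


def pick_next(executed: list[str], candidates: list[str]) -> str:
    # max-min rule: choose the candidate whose nearest executed
    # test is farthest away
    return max(candidates,
               key=lambda c: min(distance(c, e) for e in executed))


if __name__ == "__main__":
    pool = [f"Summarize the text in {n} words, tone: {t}."
            for n in (10, 50, 100) for t in ("formal", "casual", "legal")]
    executed = [pool.pop(0)]
    while pool:
        nxt = pick_next(executed, random.sample(pool, k=min(3, len(pool))))
        pool.remove(nxt)
        executed.append(nxt)  # here: run the prompt and check the output
    print("\n".join(executed))
```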
arXiv Detail & Related papers (2025-01-23T08:53:12Z)
- System Test Case Design from Requirements Specifications: Insights and Challenges of Using ChatGPT [1.9282110216621835]
This paper explores the effectiveness of using Large Language Models (LLMs) to generate test case designs from Software Requirements Specification (SRS) documents. About 87 percent of the generated test cases were valid, with the remaining 13 percent either not applicable or redundant.
arXiv Detail & Related papers (2024-12-04T20:12:27Z)
- TFG: Unified Training-Free Guidance for Diffusion Models [82.14536097768632]
Training-free guidance can generate samples with desirable target properties without additional training.
Existing methods, though effective in various individual applications, often lack theoretical grounding and rigorous testing on extensive benchmarks.
This paper introduces a novel algorithmic framework encompassing existing methods as special cases, unifying the study of training-free guidance into the analysis of an algorithm-agnostic design space.
arXiv Detail & Related papers (2024-09-24T05:31:17Z)
- RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework [66.93260816493553]
This paper introduces RAGEval, a framework designed to assess RAG systems across diverse scenarios. With a focus on factual accuracy, we propose three novel metrics: Completeness, Hallucination, and Irrelevance. Experimental results show that RAGEval outperforms zero-shot and one-shot methods in terms of clarity, safety, conformity, and richness of generated samples.
arXiv Detail & Related papers (2024-08-02T13:35:11Z)
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [83.90988015005934]
Uncertainty quantification is a key element of machine learning applications. We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines. We conduct a large-scale empirical investigation of UQ and normalization techniques across eleven tasks, identifying the most effective approaches.
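For intuition, below is a hedged sketch of two simple white-box UQ baselines of the kind such benchmarks typically include: sequence negative log-likelihood and mean token entropy. The toy arrays stand in for real model log-probabilities and logits; these are generic baselines, not LM-Polygraph's actual API.

```python
# Hedged sketch of two common white-box UQ baselines for LLMs,
# both computed from per-step token scores. Toy data only.
import numpy as np


def sequence_neg_log_likelihood(token_logprobs: np.ndarray) -> float:
    """Sum of negative token log-probs; higher => less confident."""
    return float(-token_logprobs.sum())


def mean_token_entropy(step_logits: np.ndarray) -> float:
    """step_logits: (num_steps, vocab) pre-softmax scores."""
    probs = np.exp(step_logits - step_logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return float(entropy.mean())


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    print(sequence_neg_log_likelihood(np.log(rng.uniform(0.3, 0.9, size=12))))
    print(mean_token_entropy(rng.normal(size=(12, 50))))
```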
arXiv Detail & Related papers (2024-06-21T20:06:31Z)
- Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses under massive real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [117.72709110877939]
Test-time adaptation (TTA) has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions. We categorize TTA into several distinct groups based on the form of test data, namely, test-time domain adaptation, test-time batch adaptation, and online test-time adaptation.
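As a concrete instance of the test-time batch adaptation setting, here is a minimal sketch in which a normalization layer's stored training-time statistics are replaced by statistics of the current test batch, so the model adapts to a distribution shift without labels; the feature shapes and affine parameters are illustrative.

```python
# Hedged sketch of test-time batch adaptation: normalize features
# with the *test batch's* mean/variance instead of the statistics
# stored during training, leaving learned affine params unchanged.
import numpy as np


def adapt_batch_norm(features: np.ndarray,
                     gamma: np.ndarray, beta: np.ndarray,
                     eps: float = 1e-5) -> np.ndarray:
    """features: (batch, channels). gamma/beta: learned affine params."""
    mu = features.mean(axis=0)   # test-batch mean, not training mean
    var = features.var(axis=0)   # test-batch variance
    return gamma * (features - mu) / np.sqrt(var + eps) + beta


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    shifted = rng.normal(loc=3.0, scale=2.0, size=(64, 8))  # shifted inputs
    out = adapt_batch_norm(shifted, gamma=np.ones(8), beta=np.zeros(8))
    print(out.mean(axis=0).round(3), out.std(axis=0).round(3))
```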
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Generalized Coverage Criteria for Combinatorial Sequence Testing [4.807321976136717]
We present a new model-based approach for testing systems that use sequences of actions and assertions as test vectors.
Our solution includes a method for quantifying testing quality, a tool for generating high-quality test suites based on the coverage criteria we propose, and a framework for assessing risks.
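For a flavor of what such criteria quantify, the following hedged sketch measures t-way sequence coverage: the fraction of ordered t-event combinations that appear as (not necessarily contiguous) subsequences of at least one test. This is a generic formulation, not necessarily the exact criteria the paper proposes.

```python
# Hedged sketch of t-way sequence coverage for combinatorial
# sequence testing, where tests are sequences of named events.
from itertools import combinations, permutations


def covered_subsequences(test: list[str], t: int) -> set[tuple]:
    # every ordered selection of t positions yields a covered t-sequence
    return {tuple(test[i] for i in idx)
            for idx in combinations(range(len(test)), t)}


def sequence_coverage(suite: list[list[str]],
                      events: list[str], t: int) -> float:
    target = set(permutations(events, t))  # all t-event orderings
    covered = set().union(*(covered_subsequences(tc, t) for tc in suite))
    return len(covered & target) / len(target)


if __name__ == "__main__":
    events = ["open", "read", "write", "close"]
    suite = [["open", "read", "write", "close"],
             ["open", "write", "read", "close"]]
    print(f"2-way sequence coverage: "
          f"{sequence_coverage(suite, events, 2):.0%}")  # 7/12 pairs
```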
arXiv Detail & Related papers (2022-01-03T08:35:28Z)
- ADAPQUEST: A Software for Web-Based Adaptive Questionnaires based on Bayesian Networks [70.79136608657296]
ADAPQUEST is a software tool written in Java for the development of adaptive questionnaires based on Bayesian networks.
It embeds dedicated strategies to simplify the elicitation of the questionnaire parameters.
An application of this tool for the diagnosis of mental disorders is also discussed.
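ADAPQUEST itself is a Java tool; purely for intuition, here is a hedged Python sketch of the adaptive-questionnaire idea behind it: given a Bayesian model over a hidden condition, ask next the question whose answer is expected to shrink posterior entropy the most. The two-hypothesis model and its probabilities are made up for the example.

```python
# Hedged sketch of adaptive question selection: pick the question
# with the lowest expected posterior entropy over a binary hidden
# state. All probabilities below are invented for illustration.
import math


def entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))


def posterior(prior: float, p_yes_h1: float, p_yes_h0: float,
              answer: bool) -> float:
    like1 = p_yes_h1 if answer else 1 - p_yes_h1
    like0 = p_yes_h0 if answer else 1 - p_yes_h0
    return prior * like1 / (prior * like1 + (1 - prior) * like0)


def expected_entropy(prior: float, p1: float, p0: float) -> float:
    p_yes = prior * p1 + (1 - prior) * p0  # marginal chance of "yes"
    return (p_yes * entropy(posterior(prior, p1, p0, True))
            + (1 - p_yes) * entropy(posterior(prior, p1, p0, False)))


if __name__ == "__main__":
    prior = 0.5  # P(condition present)
    # question -> (P(yes | present), P(yes | absent)), all made up
    questions = {"Q1": (0.9, 0.3), "Q2": (0.6, 0.5), "Q3": (0.8, 0.1)}
    best = min(questions, key=lambda q: expected_entropy(prior, *questions[q]))
    print("ask next:", best)  # the most informative question
```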
arXiv Detail & Related papers (2021-12-29T09:50:44Z)
- Quality meets Diversity: A Model-Agnostic Framework for Computerized Adaptive Testing [60.38182654847399]
Computerized Adaptive Testing (CAT) is emerging as a promising testing application in many scenarios.
We propose a novel framework, Model-Agnostic Adaptive Testing (MAAT), as a CAT solution.
arXiv Detail & Related papers (2021-01-15T06:48:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.