Overview of Test Coverage Criteria for Test Case Generation from Finite State Machines Modelled as Directed Graphs
- URL: http://arxiv.org/abs/2203.09604v1
- Date: Thu, 17 Mar 2022 20:30:14 GMT
- Title: Overview of Test Coverage Criteria for Test Case Generation from Finite State Machines Modelled as Directed Graphs
- Authors: Vaclav Rechtberger, Miroslav Bures, Bestoun S. Ahmed
- Abstract summary: Test Coverage criteria are an essential concept for test engineers when generating the test cases from a System Under Test model.
Test Coverage criteria define the number of actions or combinations by which a system is tested.
This study summarizes all commonly used test coverage criteria for Finite State Machines and discusses them regarding their subsumption, equivalence, or non-comparability.
- Score: 0.12891210250935145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Test Coverage criteria are an essential concept for test engineers when
generating the test cases from a System Under Test model. They are routinely
used in test case generation for user interfaces, middleware, and back-end
system parts for software, electronics, or Internet of Things (IoT) systems.
Test Coverage criteria define the number of actions or combinations by which a
system is tested, informally determining a potential "strength" of a test set.
As no previous study summarized all commonly used test coverage criteria for
Finite State Machines and comprehensively discussed them regarding their
subsumption, equivalence, or non-comparability, this paper provides this
overview. In this study, the 14 most common test coverage criteria and seven of
their synonyms for Finite State Machines defined via a directed graph are
summarized and compared. The results give researchers and industry testing
engineers a helpful overview when setting a software-based or IoT system test
strategy.
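As a concrete illustration of the kind of coverage criterion the paper surveys, the following Python sketch models a toy FSM as a directed graph and derives one input sequence per transition, i.e. a test set satisfying "all transitions" (edge) coverage. This is a minimal sketch assuming a hypothetical FSM; the state names, input symbols, and helper functions are illustrative and not taken from the paper.

```python
# Minimal illustrative sketch (not from the paper): an FSM modelled as a
# directed graph, plus a naive generator for the "all transitions" (edge)
# coverage criterion. State names, input symbols, and helpers are assumptions.

from collections import deque

# Directed graph: state -> list of (input_symbol, next_state) transitions.
FSM = {
    "idle":    [("start", "running")],
    "running": [("pause", "paused"), ("stop", "idle")],
    "paused":  [("resume", "running"), ("stop", "idle")],
}
INITIAL = "idle"


def shortest_path(src, dst):
    """BFS over the directed graph; returns a list of (state, input, next_state) edges."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        state, path = queue.popleft()
        if state == dst:
            return path
        for symbol, nxt in FSM.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(state, symbol, nxt)]))
    return None


def edge_coverage_tests():
    """One test case (input sequence) per transition: walk from the initial
    state to the transition's source state, then fire the transition itself."""
    tests = []
    for src, transitions in FSM.items():
        for symbol, _dst in transitions:
            prefix = shortest_path(INITIAL, src) or []
            tests.append([s for _, s, _ in prefix] + [symbol])
    return tests


def states_visited(tests):
    """Replay each input sequence and collect every state that is reached."""
    visited = {INITIAL}
    for case in tests:
        state = INITIAL
        for symbol in case:
            state = dict(FSM[state])[symbol]
            visited.add(state)
    return visited


if __name__ == "__main__":
    suite = edge_coverage_tests()
    for case in suite:
        print(" -> ".join(case))
    # Exercising every transition also visits every reachable state, which is
    # the intuition behind "all transitions" subsuming "all states" coverage.
    assert states_visited(suite) == set(FSM)
```

The closing assertion hints at the subsumption relations the paper compares: on this toy model, any test set that fires every transition necessarily visits every reachable state.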
Related papers
- Context-Aware Testing: A New Paradigm for Model Testing with Large Language Models [49.06068319380296]
We introduce context-aware testing (CAT), which uses context as an inductive bias to guide the search for meaningful model failures.
We instantiate the first CAT system, SMART Testing, which employs large language models to hypothesize relevant and likely failures.
arXiv Detail & Related papers (2024-10-31T15:06:16Z)
- Testing Resource Isolation for System-on-Chip Architectures [0.9176056742068811]
Ensuring resource isolation at the hardware level is a crucial step towards more security inside the Internet of Things.
We illustrate the modeling aspects in test generation for resource isolation, namely modeling the behavior and expressing the intended test scenario.
arXiv Detail & Related papers (2024-03-27T16:11:23Z)
- Deep anytime-valid hypothesis testing [29.273915933729057]
We propose a general framework for constructing powerful, sequential hypothesis tests for nonparametric testing problems.
We develop a principled approach of leveraging the representation capability of machine learning models within the testing-by-betting framework.
Empirical results on synthetic and real-world datasets demonstrate that tests instantiated using our general framework are competitive against specialized baselines.
arXiv Detail & Related papers (2023-10-30T09:46:19Z)
- A Review of Benchmarks for Visual Defect Detection in the Manufacturing Industry [63.52264764099532]
We propose a study of existing benchmarks to compare and expose their characteristics and their use-cases.
A study of industrial metrics requirements, as well as testing procedures, will be presented and applied to the studied benchmarks.
arXiv Detail & Related papers (2023-05-05T07:44:23Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models [73.29106813131818]
Bias testing is currently cumbersome since the test sentences are generated from a limited set of manual templates or need expensive crowd-sourcing.
We propose using ChatGPT for the controllable generation of test sentences, given any arbitrary user-specified combination of social groups and attributes.
We present an open-source comprehensive bias testing framework (BiasTestGPT), hosted on HuggingFace, that can be plugged into any open-source PLM for bias testing.
arXiv Detail & Related papers (2023-02-14T22:07:57Z)
- Sequential Kernelized Independence Testing [101.22966794822084]
We design sequential kernelized independence tests inspired by kernelized dependence measures.
We demonstrate the power of our approaches on both simulated and real data.
arXiv Detail & Related papers (2022-12-14T18:08:42Z)
- Prioritized Variable-length Test Cases Generation for Finite State Machines [0.09786690381850353]
Model-based Testing (MBT) is an effective approach for testing when parts of a system-under-test have the characteristics of a finite state machine (FSM).
This paper presents a test generation strategy that satisfies all these requirements.
Depending on the application of the FSM, the strategy and evaluation presented in this paper are applicable both in testing functional and non-functional software requirements.
arXiv Detail & Related papers (2022-03-17T20:16:45Z)
- Complete Agent-driven Model-based System Testing for Autonomous Systems [0.0]
A novel approach to testing complex autonomous transportation systems is described.
It is intended to mitigate some of the most critical problems regarding verification and validation.
arXiv Detail & Related papers (2021-10-25T01:55:24Z)
- Pass-Fail Criteria for Scenario-Based Testing of Automated Driving Systems [0.0]
This paper sets out a framework for assessing an automated driving system's behavioural safety in normal operation.
Risk-based rules cannot give a pass/fail decision from a single test case; instead, statistical performance across many individual tests is considered.
arXiv Detail & Related papers (2020-05-19T13:13:08Z)
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework [68.96770035057716]
A/B testing is a business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.