Feature-oriented Test Case Selection and Prioritization During the Evolution of Highly-Configurable Systems
- URL: http://arxiv.org/abs/2406.15296v1
- Date: Fri, 21 Jun 2024 16:39:10 GMT
- Title: Feature-oriented Test Case Selection and Prioritization During the Evolution of Highly-Configurable Systems
- Authors: Willian D. F. Mendonça, Wesley K. G. Assunção, Silvia R. Vergilio
- Abstract summary: We introduce FeaTestSelPrio, a feature-oriented test case selection and prioritization approach for HCSs.
Our approach selects more tests and takes longer to execute than a changed-file-oriented baseline, but detects more failures.
The prioritization step reduces the average test budget in 86% of the failed commits.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Testing Highly Configurable Systems (HCSs) is a challenging task, especially in an evolution scenario where features are added, changed, or removed, which hampers test case selection and prioritization. Existing work is usually based on the variability model, which is not always available or up to date. The few existing approaches rely on links between test cases and changed files (or lines of code), without considering how features are implemented, which is usually spread over several, often unchanged, files. To overcome these limitations, we introduce FeaTestSelPrio, a feature-oriented test case selection and prioritization approach for HCSs. The approach links test cases to feature implementations, using HCS pre-processor directives, and selects test cases based on the features affected by the changes in each commit. Afterwards, the selected test cases are prioritized according to the number of features they cover. Our approach selects more tests and takes longer to execute than a changed-file-oriented approach used as a baseline, but FeaTestSelPrio performs better regarding detected failures. Adding the approach execution time to the execution time of the selected test cases, we reached a reduction of $\approx$50% in comparison with retest-all. The prioritization step reduces the average test budget in 86% of the failed commits.
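As a rough illustration of the selection-and-prioritization logic the abstract describes, the sketch below assumes a precomputed mapping from each test to the features (pre-processor macros) its linked code exercises; the data and names are hypothetical, not the authors' implementation.
```python
# Minimal sketch of the feature-oriented selection/prioritization idea from the
# abstract. The data structures and names are illustrative assumptions; the
# actual FeaTestSelPrio tooling (directive parsing, traceability links) is not
# reproduced here.

def select_and_prioritize(test_to_features, changed_features):
    """Select tests covering any changed feature, then order them by the
    number of features they cover (descending)."""
    selected = [
        test for test, feats in test_to_features.items()
        if feats & changed_features           # test touches an affected feature
    ]
    # Prioritize: tests covering more features run first.
    selected.sort(key=lambda t: len(test_to_features[t]), reverse=True)
    return selected

# Hypothetical mapping from tests to the features (pre-processor macros such
# as CONFIG_X) whose implementation they exercise.
test_to_features = {
    "test_net_init": {"CONFIG_NET", "CONFIG_IPV6"},
    "test_fs_mount": {"CONFIG_FS"},
    "test_boot":     {"CONFIG_NET", "CONFIG_FS", "CONFIG_SMP"},
}
changed = {"CONFIG_NET"}  # features touched by the commit under test
print(select_and_prioritize(test_to_features, changed))
# -> ['test_boot', 'test_net_init']
```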
Related papers
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars (arXiv: 2024-05-25)
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of these exemplars in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on the performance.
- Towards Explainable Test Case Prioritisation with Learning-to-Rank Models (arXiv: 2024-05-22)
Test case prioritisation (TCP) is a critical task in regression testing to ensure quality as software evolves.
We present and discuss scenarios that require different explanations and how the particularities of TCP could influence them.
arXiv Detail & Related papers (2024-05-22T16:11:45Z) - Fine-Grained Assertion-Based Test Selection [6.9290255098776425]
Regression test selection techniques aim at reducing test execution time by selecting only the tests that are affected by code changes.
We propose a novel approach that increases the selection precision by analyzing test code at statement level and treating test assertions as the unit for selection.
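A minimal sketch of the idea, assuming a precomputed map from individual assertions to the statements they depend on (in a real tool this map would come from program analysis); the identifiers below are hypothetical.
```python
# Illustrative sketch of assertion-granular selection: an assertion is re-run
# only if a statement it depends on changed. The dependency map is a
# hand-written assumption here, not the paper's analysis.

assertion_deps = {
    ("test_parser", "assert_1"): {"parser.c:120", "parser.c:133"},
    ("test_parser", "assert_2"): {"lexer.c:45"},
    ("test_io",     "assert_1"): {"io.c:10"},
}

def affected_assertions(changed_statements):
    return [
        key for key, deps in assertion_deps.items()
        if deps & changed_statements  # dependency overlaps the change set
    ]

print(affected_assertions({"parser.c:133"}))
# -> [('test_parser', 'assert_1')]
```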
arXiv Detail & Related papers (2024-03-24T04:07:30Z) - Align Your Prompts: Test-Time Prompting with Distribution Alignment for
Zero-Shot Generalization [64.62570402941387]
We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain.
Our method improves zero-shot top-1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe.
arXiv Detail & Related papers (2023-11-02T17:59:32Z) - On Pitfalls of Test-Time Adaptation [82.8392232222119]
Test-Time Adaptation (TTA) has emerged as a promising approach for tackling the robustness challenge under distribution shifts.
We present TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art algorithms, a diverse array of distribution shifts, and two evaluation protocols.
arXiv Detail & Related papers (2023-06-06T09:35:29Z) - AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [64.9230895853942]
Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptive (TTA) methods are proposed to address this issue.
In this work, we adopt a non-parametric classifier to perform test-time adaptation (AdaNPC).
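A simplified sketch of non-parametric classification over a feature memory, in the spirit of the summary above; the memory-update policy and any distance weighting of the actual method are omitted, and the data is synthetic.
```python
import numpy as np

# Plain k-nearest-neighbor classification over a stored feature bank:
# a test sample takes the majority label of its k closest stored features.

def knn_predict(memory_feats, memory_labels, x, k=5):
    d = np.linalg.norm(memory_feats - x, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]                    # indices of k closest
    votes = memory_labels[nearest]
    return np.bincount(votes).argmax()             # majority label

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))       # stored (source-domain) features
labels = rng.integers(0, 3, size=100)   # their class labels
print(knn_predict(feats, labels, rng.normal(size=8)))
```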
- Sequential Kernelized Independence Testing (arXiv: 2022-12-14)
We design sequential kernelized independence tests inspired by kernelized dependence measures.
We demonstrate the power of our approaches on both simulated and real data.
- Test2Vec: An Execution Trace Embedding for Test Case Prioritization (arXiv: 2022-06-28)
Execution traces of test cases can be a good alternative to abstract their behavior for automated testing tasks.
We propose a novel embedding approach, Test2Vec, that maps test execution traces to a latent space.
Results show that the proposed test prioritization (TP) method improves the best alternatives by 41.80% in terms of the median normalized rank of the first failing test case.
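Assuming trace embeddings are already available, one simple way to turn them into a prioritization is to rank tests by similarity to embeddings of previously failing runs; this criterion is an illustrative assumption, not necessarily the paper's TP method.
```python
import numpy as np

# Rank candidate tests by cosine similarity of their trace embeddings to the
# centroid of past failing traces; most similar runs first. The ranking
# criterion is an assumption for illustration only.

def prioritize(test_embeddings, failing_embeddings):
    centroid = failing_embeddings.mean(axis=0)
    sims = (test_embeddings @ centroid) / (
        np.linalg.norm(test_embeddings, axis=1) * np.linalg.norm(centroid)
    )
    return np.argsort(-sims)  # test indices, highest similarity first

rng = np.random.default_rng(1)
tests = rng.normal(size=(10, 16))   # embeddings of candidate tests
fails = rng.normal(size=(3, 16))    # embeddings of past failing traces
print(prioritize(tests, fails))
```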
- DeepOrder: Deep Learning for Test Case Prioritization in Continuous Integration Testing (arXiv: 2021-10-14)
This work introduces DeepOrder, a deep learning-based model that approaches test case prioritization as a regression problem.
DeepOrder ranks test cases based on the historical record of test executions from any number of previous test cycles.
We experimentally show that deep neural networks, as a simple regression model, can be efficiently used for test case prioritization in continuous integration testing.
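A toy version of regression-based ranking over execution history, with ordinary least squares standing in for the deep network; the choice of history features is an illustrative assumption.
```python
import numpy as np

# Fit a regression model on historical execution records and rank tests by
# the predicted priority, in the spirit of the summary above.

# Columns: [recent failure rate, last duration (s), cycles since last failure]
history = np.array([
    [0.4, 12.0, 1.0],
    [0.0,  3.0, 9.0],
    [0.2,  7.0, 2.0],
    [0.6, 20.0, 1.0],
])
# Training target: priority observed in past cycles (higher = run earlier).
priority = np.array([0.8, 0.1, 0.5, 0.9])

X = np.c_[history, np.ones(len(history))]          # add intercept column
w, *_ = np.linalg.lstsq(X, priority, rcond=None)   # fit linear regression

scores = X @ w                                     # predicted priorities
print(np.argsort(-scores))                         # highest priority first
```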
- TestRank: Bringing Order into Unlabeled Test Instances for Deep Learning Tasks (arXiv: 2021-05-21)
Deep learning systems are notoriously difficult to test and debug.
To reduce test cost, it is essential to select and label only "high-quality", bug-revealing test inputs.
We propose a novel test prioritization technique that brings order into the unlabeled test instances according to their bug-revealing capabilities, namely TestRank.
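TestRank combines several attributes of the inputs; as a deliberately simplified stand-in, the sketch below ranks unlabeled inputs by the predictive entropy of the model's output alone, so inputs the model is least certain about are labeled first.
```python
import numpy as np

# Rank unlabeled inputs by softmax entropy as a simple bug-revealing proxy.
# This is not the paper's TestRank scoring, only an uncertainty baseline.

def entropy_rank(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    h = -(p * np.log(p + 1e-12)).sum(axis=1)         # per-input entropy
    return np.argsort(-h)                            # most uncertain first

logits = np.random.default_rng(2).normal(size=(8, 10))  # 8 inputs, 10 classes
print(entropy_rank(logits))
```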
- Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design (arXiv: 2020-04-26)
When the infection prevalence of a disease is low, Dorfman showed 80 years ago that testing groups of people can prove more efficient than testing people individually.
Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting.
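The noiseless baseline the summary alludes to is easy to work out: pooling g samples costs one group test plus g retests whenever the pool is positive, so at prevalence p the expected number of tests per person is 1/g + 1 - (1-p)^g. The paper's noisy, Bayesian machinery is not sketched here.
```python
# Dorfman's classic (noiseless) group-testing calculation: expected tests per
# person for group size g and prevalence p.

def tests_per_person(p, g):
    return 1.0 / g + 1.0 - (1.0 - p) ** g

p = 0.02  # 2% prevalence
best_g = min(range(2, 50), key=lambda g: tests_per_person(p, g))
print(best_g, round(tests_per_person(p, best_g), 3))
# For p = 0.02 the optimum is g = 8 at ~0.27 tests per person,
# i.e. roughly a 4x saving over testing individually.
```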
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.