Time-based Repair for Asynchronous Wait Flaky Tests in Web Testing
- URL: http://arxiv.org/abs/2305.08592v2
- Date: Fri, 19 May 2023 17:04:51 GMT
- Title: Time-based Repair for Asynchronous Wait Flaky Tests in Web Testing
- Authors: Yu Pei (1), Jeongju Sohn (1), Sarra Habchi (2), Mike Papadakis (1)
((1) University of Luxembourg, (2) Ubisoft)
- Abstract summary: Asynchronous waits are one of the most prevalent root causes of flaky tests in web applications.
We propose TRaf, an automated time-based repair method for asynchronous wait flaky tests.
Our analysis shows that TRaf can suggest a shorter wait time to resolve the test flakiness compared to developer-written fixes.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Asynchronous waits are one of the most prevalent root causes of flaky tests
and a major factor influencing the testing time of web applications. To investigate
the characteristics of asynchronous wait flaky tests and their fixes in web
testing, we build a dataset of 49 reproducible flaky tests, from 26 open-source
projects, caused by asynchronous waits, along with their corresponding
developer-written fixes. Our study of these flaky tests reveals that developers
addressed approximately 63% of them (31 out of 49) by adapting the wait time,
even in cases where the root cause lies elsewhere. Based on this finding, we
propose TRaf, an automated
time-based repair method for asynchronous wait flaky tests in web applications.
TRaf tackles the flakiness issues by suggesting a proper waiting time for each
asynchronous call in a web application, using code similarity and past change
history. The core insight is that as developers often make similar mistakes
more than once, hints for the efficient wait time exist in the current or past
codebase. Our analysis shows that TRaf can suggest a shorter wait time to
resolve the test flakiness compared to developer-written fixes, reducing the
test execution time by 11.1%. With additional dynamic tuning of the new wait
time, TRaf further reduces the execution time by 20.2%.
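To make the repair pattern concrete, below is a minimal TypeScript sketch of the two ideas the abstract describes: picking a wait time for an asynchronous call from similar, previously fixed calls, and bounding the wait with a condition check so the effective delay shrinks at run time. This is an illustration under assumptions, not TRaf's implementation; the helper names (`PastFix`, `suggestWaitTime`, `waitFor`) and the token-overlap similarity are placeholders, not the paper's actual design.
```typescript
// Illustrative sketch only: helper names and the similarity measure are
// assumptions, not TRaf's actual implementation.

type Millis = number;

// A previously fixed asynchronous call and the wait time the developer
// eventually settled on.
interface PastFix {
  callSnippet: string; // e.g. the source line around the asynchronous call
  waitMs: Millis;
}

// Token-overlap (Jaccard) similarity as a stand-in for the paper's
// code-similarity signal.
function similarity(a: string, b: string): number {
  const ta = new Set(a.split(/\W+/).filter(Boolean));
  const tb = new Set(b.split(/\W+/).filter(Boolean));
  const inter = [...ta].filter(t => tb.has(t)).length;
  const union = ta.size + tb.size - inter;
  return union === 0 ? 0 : inter / union;
}

// Suggest a wait time for a new asynchronous call by reusing the wait time
// of the most similar previously fixed call, falling back to a default.
function suggestWaitTime(
  callSnippet: string,
  fixHistory: PastFix[],
  fallbackMs: Millis = 3000,
): Millis {
  let best: PastFix | undefined;
  let bestScore = 0;
  for (const fix of fixHistory) {
    const score = similarity(callSnippet, fix.callSnippet);
    if (score > bestScore) {
      bestScore = score;
      best = fix;
    }
  }
  return best ? best.waitMs : fallbackMs;
}

// Condition-based wait: the suggested time is only an upper bound, and the
// test resumes as soon as the condition holds, which is roughly the effect
// the abstract attributes to dynamic tuning of the wait time.
async function waitFor(
  condition: () => Promise<boolean>,
  timeoutMs: Millis,
  pollMs: Millis = 100,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise<void>(resolve => setTimeout(resolve, pollMs));
  }
  throw new Error(`Condition not met within ${timeoutMs} ms`);
}
```
In a repaired test, a hard-coded `sleep(5000)` before an assertion would become something like `await waitFor(() => elementIsVisible('#result'), suggestWaitTime(snippet, history))`, where `elementIsVisible` stands in for whatever visibility check the test framework provides.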
Related papers
- Do Test and Environmental Complexity Increase Flakiness? An Empirical Study of SAP HANA [47.29324864511411]
Flaky tests fail seemingly at random without changes to the code.
We study characteristics of tests and the test environment that potentially impact test flakiness.
arXiv Detail & Related papers (2024-09-16T07:52:09Z)
- WEFix: Intelligent Automatic Generation of Explicit Waits for Efficient Web End-to-End Flaky Tests [13.280540531582945]
We propose WEFix, a technique that can automatically generate fix code for UI-based flakiness in web e2e testing.
We evaluate the effectiveness and efficiency of WEFix against 122 web e2e flaky tests from seven popular real-world projects.
arXiv Detail & Related papers (2024-02-15T06:51:53Z)
- Taming Timeout Flakiness: An Empirical Study of SAP HANA [47.29324864511411]
Flaky tests negatively affect regression testing because they result in test failures that are not necessarily caused by code changes.
Test timeouts are one contributing factor to such flaky test failures.
Test flakiness rate ranges from 49% to 70%, depending on the number of repeated test executions.
arXiv Detail & Related papers (2024-02-07T20:01:41Z)
- The Effects of Computational Resources on Flaky Tests [9.694460778355925]
Flaky tests are tests that nondeterministically pass and fail in unchanged code.
Resource-Affected Flaky Tests indicate that a substantial proportion of flaky-test failures can be avoided by adjusting the resources available when running tests.
arXiv Detail & Related papers (2023-10-18T17:42:58Z)
- Do Automatic Test Generation Tools Generate Flaky Tests? [12.813573907094074]
The prevalence and nature of flaky tests produced by test generation tools remain largely unknown.
We generate tests using EvoSuite (Java) and Pynguin (Python) and execute each test 200 times.
Our results show that flakiness is at least as common in generated tests as in developer-written tests.
arXiv Detail & Related papers (2023-10-08T16:44:27Z)
- Accelerating Continuous Integration with Parallel Batch Testing [0.0]
Continuous integration at scale is essential to software development.
Various techniques, including test selection and prioritization, aim to reduce the cost.
This study evaluates parallelization's effect by adjusting the number of test machines.
We propose Dynamic TestCase, enabling new builds to join a batch before full test execution.
arXiv Detail & Related papers (2023-08-25T01:09:31Z)
- Test-Time Training on Video Streams [54.07009446207442]
Prior work has established test-time training (TTT) as a general framework to further improve a trained model at test time.
We extend TTT to the streaming setting, where multiple test instances arrive in temporal order.
Online TTT significantly outperforms the fixed-model baseline for four tasks, on three real-world datasets.
arXiv Detail & Related papers (2023-07-11T05:17:42Z)
- DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
arXiv Detail & Related papers (2023-01-30T15:54:00Z)
- Sequential Kernelized Independence Testing [101.22966794822084]
We design sequential kernelized independence tests inspired by kernelized dependence measures.
We demonstrate the power of our approaches on both simulated and real data.
arXiv Detail & Related papers (2022-12-14T18:08:42Z)
- SITA: Single Image Test-time Adaptation [48.789568233682296]
In Test-time Adaptation (TTA), given a model trained on some source data, the goal is to adapt it to make better predictions for test instances from a different distribution.
We consider TTA in a more pragmatic setting, which we refer to as SITA (Single Image Test-time Adaptation).
Here, when making each prediction, the model has access only to the given single test instance, rather than a batch of instances.
We propose a novel approach AugBN for the SITA setting that requires only forward-preserving propagation.
arXiv Detail & Related papers (2021-12-04T15:01:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information shown and is not responsible for any consequences of its use.