Automated Performance Testing Based on Active Deep Learning
- URL: http://arxiv.org/abs/2104.02102v1
- Date: Mon, 5 Apr 2021 18:19:12 GMT
- Title: Automated Performance Testing Based on Active Deep Learning
- Authors: Ali Sedaghatbaf, Mahshid Helali Moghadam and Mehrdad Saadatmand
- Abstract summary: We present an automated test generation method called ACTA for black-box performance testing.
ACTA is based on active learning, which means that it does not require a large set of historical test data to learn about the performance characteristics of the system under test.
We have evaluated ACTA on a benchmark web application, and the experimental results indicate that this method is comparable with random testing.
- Score: 2.179313476241343
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generating tests that can reveal performance issues in large and complex
software systems within a reasonable amount of time is a challenging task. On
one hand, there are numerous combinations of input data values to explore. On
the other hand, we have a limited test budget to execute tests. What makes this
task even more difficult is the lack of access to source code and the internal
details of these systems. In this paper, we present an automated test
generation method called ACTA for black-box performance testing. ACTA is based
on active learning, which means that it does not require a large set of
historical test data to learn about the performance characteristics of the
system under test. Instead, it dynamically chooses the tests to execute using
uncertainty sampling. ACTA relies on a conditional variant of generative
adversarial networks, and facilitates specifying performance requirements in terms of conditions and generating tests that address those conditions. We have
evaluated ACTA on a benchmark web application, and the experimental results
indicate that this method is comparable with random testing and with two other machine learning methods, i.e., PerfXRL and DN.
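To make the workflow above concrete, the following is a minimal sketch of an ACTA-style active-learning loop, not the authors' implementation: the candidate generator is a plain random sampler standing in for the paper's conditional GAN, the random-forest surrogate and its ensemble-variance uncertainty estimate are illustrative choices, and `execute_test` is a hypothetical stub for running the system under test and measuring a latency.

```python
# A minimal sketch of uncertainty-sampling-based test generation.
# Assumptions: the candidate generator is a random sampler (a stand-in for
# the paper's conditional GAN) and execute_test is a hypothetical stub.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
N_FEATURES = 4      # e.g. request rate, payload size, concurrency, think time
TEST_BUDGET = 50    # how many tests we can afford to execute
BATCH = 5           # tests executed per active-learning round

def generate_candidates(n):
    """Placeholder for the conditional GAN: propose candidate test inputs."""
    return rng.uniform(0.0, 1.0, size=(n, N_FEATURES))

def execute_test(x):
    """Hypothetical stub: run the system under test and return a latency."""
    return float(10 * x[0] ** 2 + 5 * x[1] + rng.normal(0, 0.5))

# Seed the surrogate performance model with a few random tests.
X = generate_candidates(BATCH)
y = np.array([execute_test(x) for x in X])

while len(X) < TEST_BUDGET:
    surrogate = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    candidates = generate_candidates(200)
    # Uncertainty sampling: prefer inputs where the ensemble disagrees most.
    per_tree = np.stack([t.predict(candidates) for t in surrogate.estimators_])
    chosen = candidates[np.argsort(per_tree.var(axis=0))[-BATCH:]]
    X = np.vstack([X, chosen])
    y = np.concatenate([y, [execute_test(x) for x in chosen]])

print(f"executed {len(X)} tests; worst observed latency: {y.max():.2f}")
```

In the method described in the abstract, the executed tests would also be used to update the conditional generator so that newly proposed inputs target the specified performance conditions; the sketch omits that feedback loop for brevity.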
Related papers
- A System for Automated Unit Test Generation Using Large Language Models and Assessment of Generated Test Suites [1.4563527353943984]
Large Language Models (LLMs) have been applied to various aspects of software development.
We present AgoneTest: an automated system for generating test suites for Java projects.
arXiv Detail & Related papers (2024-08-14T23:02:16Z)
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars [66.823588073584]
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of the exemplars included in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on performance.
arXiv Detail & Related papers (2024-05-25T08:23:05Z)
- Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [51.84691955495693]
Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings.
We propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting.
arXiv Detail & Related papers (2024-04-07T22:31:34Z)
- Adaptive REST API Testing with Reinforcement Learning [54.68542517176757]
Current testing tools lack efficient exploration mechanisms, treating all operations and parameters equally.
Current tools struggle when response schemas are absent in the specification or exhibit variants.
We present an adaptive REST API testing technique that incorporates reinforcement learning to prioritize operations during exploration (a minimal sketch of this idea follows the entry).
arXiv Detail & Related papers (2023-09-08T20:27:05Z)
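As a rough illustration of the reinforcement-learning idea in the entry above: a hedged sketch of epsilon-greedy prioritization over REST operations. The operation names, the `call_api` stub, and the reward definition are illustrative assumptions, not details of the cited tool.

```python
# A minimal sketch: an epsilon-greedy bandit that prioritises REST operations,
# rewarding those that reveal previously unseen status codes. call_api is a
# hypothetical stub; nothing here is taken from the cited tool.
import random
from collections import defaultdict

operations = ["GET /users", "POST /users", "GET /orders", "POST /orders"]
value = defaultdict(float)     # running estimate of each operation's usefulness
counts = defaultdict(int)
seen_codes = defaultdict(set)  # status codes observed per operation
EPSILON = 0.2

def call_api(op):
    """Hypothetical stub: invoke the operation and return an HTTP status code."""
    return random.choice([200, 201, 400, 404, 500])

for step in range(200):
    # Mostly exploit the operation with the highest estimated value.
    if random.random() < EPSILON:
        op = random.choice(operations)
    else:
        op = max(operations, key=lambda o: value[o])
    code = call_api(op)
    reward = 1.0 if code not in seen_codes[op] else 0.0  # new behaviour observed
    seen_codes[op].add(code)
    counts[op] += 1
    value[op] += (reward - value[op]) / counts[op]       # incremental mean update

print(sorted(value.items(), key=lambda kv: -kv[1]))
```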
- Validation of massively-parallel adaptive testing using dynamic control matching [0.0]
Modern businesses often run many A/B/n tests in parallel and package many content variations into the same messages.
This paper presents a method for disentangling the causal effects of the various tests under conditions of continuous test adaptation.
arXiv Detail & Related papers (2023-05-02T11:28:12Z)
- Planning for Sample Efficient Imitation Learning [52.44953015011569]
Current imitation algorithms struggle to achieve high performance and high in-environment sample efficiency simultaneously.
We propose EfficientImitate, a planning-based imitation learning method that can achieve high in-environment sample efficiency and performance simultaneously.
Experimental results show that EI achieves state-of-the-art results in performance and sample efficiency.
arXiv Detail & Related papers (2022-10-18T05:19:26Z)
- ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Built on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols across methods.
arXiv Detail & Related papers (2022-07-27T07:46:23Z)
- Hybrid Intelligent Testing in Simulation-Based Verification [0.0]
Several million tests may be required to achieve coverage goals.
Coverage-Directed Test Selection learns from coverage feedback to bias testing towards the most effective tests.
Novelty-Driven Verification learns to identify and simulate stimuli that differ from previous stimuli (a minimal sketch of this idea follows the entry).
arXiv Detail & Related papers (2022-05-19T13:22:08Z)
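The novelty-driven part of the entry above can be illustrated with a simple nearest-neighbour novelty score; the numeric stimulus encoding and the selection rule below are assumptions for illustration, not the cited method's learner.

```python
# A minimal sketch: score candidate stimuli by distance to anything already
# simulated and pick the most novel ones to simulate next. The numeric
# stimulus encoding is an assumption; this is not the cited method.
import numpy as np

rng = np.random.default_rng(1)
simulated = rng.uniform(size=(20, 6))   # stimuli already run on the design

def novelty(candidates, history):
    """Novelty = distance to the nearest previously simulated stimulus."""
    d = np.linalg.norm(candidates[:, None, :] - history[None, :, :], axis=-1)
    return d.min(axis=1)

candidates = rng.uniform(size=(500, 6))
scores = novelty(candidates, simulated)
next_batch = candidates[np.argsort(scores)[-10:]]  # most novel stimuli to run
print(next_batch.shape)
```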
- TestRank: Bringing Order into Unlabeled Test Instances for Deep Learning Tasks [14.547623982073475]
Deep learning systems are notoriously difficult to test and debug.
To reduce test cost, it is essential to select and label only high-quality, bug-revealing test inputs.
We propose a novel test prioritization technique that brings order into the unlabeled test instances according to their bug-revealing capabilities, namely TestRank.
arXiv Detail & Related papers (2021-05-21T03:41:10Z)
- Online GANs for Automatic Performance Testing [0.10312968200748115]
We present a novel algorithm for automatic performance testing that uses an online variant of the Generative Adversarial Network (GAN).
The proposed approach does not require a prior training set or model of the system under test.
We consider that the presented algorithm serves as a proof of concept, and we hope that it can spark a research discussion on the application of GANs to test generation (a minimal sketch of such an online update follows the entry).
arXiv Detail & Related papers (2021-04-21T06:03:27Z)
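As a rough sketch of the online-GAN idea in the entry above: one adversarial update performed on a freshly executed batch of tests, with no prior training set. The layer sizes, the training loop, and the decision to treat the latest executed tests as the "real" samples are illustrative assumptions, not details from the cited paper.

```python
# A minimal sketch of one online GAN update on freshly executed tests.
# Architecture and loop are illustrative, not taken from the cited paper.
import torch
import torch.nn as nn

DIM, LATENT = 4, 8
G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, DIM), nn.Sigmoid())
D = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def online_update(executed_tests: torch.Tensor):
    """One adversarial update using only the latest batch of executed tests."""
    n = executed_tests.size(0)
    # Discriminator step: treat the executed tests as the "real" distribution.
    fake = G(torch.randn(n, LATENT)).detach()
    loss_d = bce(D(executed_tests), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: propose inputs the discriminator mistakes for real tests.
    fake = G(torch.randn(n, LATENT))
    loss_g = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

online_update(torch.rand(16, DIM))  # e.g. a batch of normalised test inputs
```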
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework [68.96770035057716]
A/B testing is a business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries.
This paper introduces a reinforcement learning framework for carrying out A/B testing in online experiments.
arXiv Detail & Related papers (2020-02-05T10:25:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.