Online GANs for Automatic Performance Testing
- URL: http://arxiv.org/abs/2104.11069v1
- Date: Wed, 21 Apr 2021 06:03:27 GMT
- Title: Online GANs for Automatic Performance Testing
- Authors: Ivan Porres and Hergys Rexha and Sébastien Lafond
- Abstract summary: We present a novel algorithm for automatic performance testing that uses an online variant of the Generative Adversarial Network (GAN).
The proposed approach does not require a prior training set or model of the system under test.
We consider that the presented algorithm serves as a proof of concept and we hope that it can spark a research discussion on the application of GANs to test generation.
- Score: 0.10312968200748115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we present a novel algorithm for automatic performance testing
that uses an online variant of the Generative Adversarial Network (GAN) to
optimize the test generation process. The objective of the proposed approach is
to generate, for a given test budget, a test suite containing a high number of
tests revealing performance defects. This is achieved using a GAN to generate
the tests and predict their outcome. This GAN is trained online while
generating and executing the tests. The proposed approach does not require a
prior training set or model of the system under test. We provide an initial
evaluation of the algorithm using an example test system, and compare the
obtained results with other possible approaches.
We consider that the presented algorithm serves as a proof of concept and we
hope that it can spark a research discussion on the application of GANs to test
generation.
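The paper's text here includes no code; the following is a minimal PyTorch sketch of the loop the abstract describes, in which the generator proposes tests, the discriminator predicts their outcome, and both are updated online after each execution. The network sizes, the test encoding, and the `execute_test` oracle are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

NOISE_DIM, TEST_DIM, TEST_BUDGET = 8, 4, 200

# Generator maps noise to a test (here: a vector of test parameters in [0, 1]).
generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(),
                          nn.Linear(32, TEST_DIM), nn.Sigmoid())
# Discriminator doubles as the outcome predictor: P(test reveals a defect).
discriminator = nn.Sequential(nn.Linear(TEST_DIM, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def execute_test(test: torch.Tensor) -> float:
    """Stand-in for the system under test (assumption): returns 1.0 if the
    executed test reveals a performance defect, 0.0 otherwise."""
    return float(test.sum() > 2.0)

test_suite = []
for _ in range(TEST_BUDGET):
    # 1. Generate one test and execute it against the system under test.
    z = torch.randn(1, NOISE_DIM)
    test = generator(z).detach()
    outcome = execute_test(test.squeeze())
    test_suite.append((test, outcome))

    # 2. Online update: teach the discriminator the observed outcome.
    d_opt.zero_grad()
    bce(discriminator(test), torch.tensor([[outcome]])).backward()
    d_opt.step()

    # 3. Online update: push the generator toward tests predicted to reveal defects.
    g_opt.zero_grad()
    bce(discriminator(generator(z)), torch.ones(1, 1)).backward()
    g_opt.step()

print(f"{sum(y for _, y in test_suite):.0f} of {TEST_BUDGET} tests revealed a defect")
```

Note that no prior data set is needed: the discriminator learns the system's behavior only from the tests executed within the budget.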
Related papers
- Introducing Ensemble Machine Learning Algorithms for Automatic Test Case Generation using Learning Based Testing [0.0]
Ensemble methods are powerful machine learning algorithms that combine multiple models to enhance prediction capabilities and reduce generalization errors.
This study systematically investigates combinations of ensemble methods and base classifiers for model inference in a Learning Based Testing (LBT) algorithm, generating fault-detecting test cases for systems under test (SUTs) as a proof of concept; see the sketch below.
arXiv Detail & Related papers (2024-09-06T23:24:59Z)
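As a rough illustration of such an LBT loop, here is a sketch with a random forest standing in for the ensemble and a toy `run_sut` oracle; everything here is an assumption for illustration, not the study's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def run_sut(x: np.ndarray) -> int:
    """Hypothetical system under test: 1 = fault observed, 0 = pass."""
    return int(x[0] * x[1] > 0.6)

# Seed executions; two hand-picked points guarantee both classes are present.
X = np.vstack([rng.random((8, 2)), [[0.9, 0.9], [0.05, 0.05]]])
y = np.array([run_sut(x) for x in X])

model = RandomForestClassifier(n_estimators=50, random_state=0)
for _ in range(5):                        # LBT iterations
    model.fit(X, y)                       # infer a model of the SUT
    candidates = rng.random((200, 2))     # sample candidate test inputs
    scores = model.predict_proba(candidates)[:, 1]
    best = candidates[np.argmax(scores)]  # most likely fault-revealing test
    X = np.vstack([X, best])              # execute it and learn from the result
    y = np.append(y, run_sut(best))

print(f"faults found: {int(y.sum())} of {len(y)} executed tests")
```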
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference; one common instance is sketched below.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
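A widely used concrete instance of this paradigm is entropy minimization over the unlabeled test batch, in the spirit of methods like Tent. The toy model and data below are placeholders, and unlike Tent proper (which updates only normalization parameters) this sketch updates everything for brevity.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))  # pre-trained in reality
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean prediction entropy; low entropy = confident predictions."""
    p = logits.softmax(dim=1)
    return -(p * p.log()).sum(dim=1).mean()

test_batch = torch.randn(64, 16)        # unlabeled test data (placeholder)
for _ in range(10):                     # adapt before making predictions
    opt.zero_grad()
    entropy(model(test_batch)).backward()
    opt.step()

predictions = model(test_batch).argmax(dim=1)
```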
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z)
- Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
Single-Utterance Test-time Adaptation (SUTA) is the first TTA study in the speech area, to the best of our knowledge.
arXiv Detail & Related papers (2022-03-27T06:38:39Z)
- Machine Learning Testing in an ADAS Case Study Using Simulation-Integrated Bio-Inspired Search-Based Testing [7.5828169434922]
Deeper generates failure-revealing test scenarios for testing a deep neural network-based lane-keeping system.
In the newly proposed version, we utilize a new set of bio-inspired search algorithms: genetic algorithm (GA), $(\mu+\lambda)$ and $(\mu,\lambda)$ evolution strategies (ES), and particle swarm optimization (PSO).
Our evaluation shows the newly proposed test generators in Deeper represent a considerable improvement over the previous version; a toy ES loop is sketched below.
arXiv Detail & Related papers (2022-03-22T20:27:40Z)
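For readers unfamiliar with the notation: a $(\mu+\lambda)$ ES keeps the best $\mu$ individuals from the union of parents and offspring, while $(\mu,\lambda)$ selects from offspring only. A toy sketch of the plus variant, with a stand-in fitness function in place of Deeper's simulator runs:

```python
import numpy as np

rng = np.random.default_rng(1)
MU, LAMBDA, SIGMA, DIM = 5, 20, 0.3, 6   # illustrative hyperparameters

def fitness(scenario: np.ndarray) -> float:
    """Stand-in objective (assumption); Deeper scores scenarios by simulation."""
    return -float(np.sum((scenario - 0.7) ** 2))

population = rng.random((MU, DIM))
for _ in range(30):
    # Each offspring is a Gaussian mutation of a randomly chosen parent.
    parents = population[rng.integers(0, MU, size=LAMBDA)]
    offspring = parents + rng.normal(0.0, SIGMA, size=(LAMBDA, DIM))
    # (mu+lambda) selection: best MU from parents and offspring combined.
    pool = np.vstack([population, offspring])
    population = pool[np.argsort([fitness(s) for s in pool])[-MU:]]

print("best fitness:", fitness(population[-1]))
```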
- Boost Test-Time Performance with Closed-Loop Inference [85.43516360332646]
We propose to predict hard-classified test samples in a looped manner to boost the model performance.
We first devise a filtering criterion to identify those hard-classified test samples that need additional inference loops.
For each hard sample, we construct an additional auxiliary learning task based on its original top-$K$ predictions to calibrate the model; the filtering step is sketched below.
arXiv Detail & Related papers (2022-03-21T10:20:21Z)
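A plausible sketch of just the filtering step, flagging low-confidence predictions for the extra inference loop. The threshold, model, and data are assumptions, and the auxiliary calibration task itself is omitted.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
THRESHOLD = 0.5                               # confidence cut-off (assumption)

test_batch = torch.randn(128, 16)
with torch.no_grad():
    probs = model(test_batch).softmax(dim=1)
confidence, top1 = probs.max(dim=1)

hard_mask = confidence < THRESHOLD            # needs an extra inference loop
easy_preds = top1[~hard_mask]                 # accepted directly
hard_samples = test_batch[hard_mask]          # routed to the auxiliary task
topk_classes = probs[hard_mask].topk(k=3, dim=1).indices  # top-K for calibration
print(f"{int(hard_mask.sum())} of {len(test_batch)} samples flagged as hard")
```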
- Efficient and Effective Generation of Test Cases for Pedestrian Detection -- Search-based Software Testing of Baidu Apollo in SVL [14.482670650074885]
This paper presents a study on testing pedestrian detection and emergency braking system of the Baidu Apollo autonomous driving platform within the SVL simulator.
We propose an evolutionary automated test generation technique that generates failure-revealing scenarios for Apollo in the SVL environment.
In order to demonstrate the efficiency and effectiveness of our approach, we also report the results from a baseline random generation technique.
arXiv Detail & Related papers (2021-09-16T13:11:53Z)
- Group Testing with Non-identical Infection Probabilities [59.96266198512243]
We develop an adaptive group testing algorithm using the set formation method.
We show that our algorithm outperforms the state of the art, and performs close to the entropy lower bound.
arXiv Detail & Related papers (2021-08-27T17:53:25Z)
- Automated Performance Testing Based on Active Deep Learning [2.179313476241343]
We present an automated test generation method called ACTA for black-box performance testing.
ACTA is based on active learning, which means that it does not require a large set of historical test data to learn about the performance characteristics of the system under test.
We have evaluated ACTA on a benchmark web application, and the experimental results indicate that this method is comparable with random testing; a generic active-learning test loop is sketched below.
arXiv Detail & Related papers (2021-04-05T18:19:12Z)
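A generic uncertainty-sampling loop in the spirit the summary describes: execute only the candidate the current model is least sure about, so no large historical data set is needed. All names and the oracle are illustrative assumptions, not ACTA itself.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

def run_perf_test(x: np.ndarray) -> int:
    """Hypothetical oracle: 1 = performance requirement violated."""
    return int(x[0] + 2 * x[1] > 1.5)

# A few seed executions; two fixed points guarantee both outcomes occur.
X = np.vstack([rng.random((6, 2)), [[1.0, 1.0], [0.0, 0.0]]])
y = np.array([run_perf_test(x) for x in X])

clf = GradientBoostingClassifier(random_state=0)
for _ in range(10):
    clf.fit(X, y)
    pool = rng.random((300, 2))                    # candidate test inputs
    p_fail = clf.predict_proba(pool)[:, 1]
    query = pool[np.argmin(np.abs(p_fail - 0.5))]  # most uncertain candidate
    X = np.vstack([X, query])                      # execute it and retrain
    y = np.append(y, run_perf_test(query))

print(f"executed {len(y)} tests, {int(y.sum())} violations found")
```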
- Distribution-Aware Testing of Neural Networks Using Generative Models [5.618419134365903]
The reliability of software that has a Deep Neural Network (DNN) as a component is critically important.
We show that three recent testing techniques generate a significant number of invalid test inputs.
We propose a technique to incorporate the valid input space of the DNN model under test in the test generation process; the idea is sketched below.
arXiv Detail & Related papers (2021-02-26T17:18:21Z)
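The core trick can be sketched as searching in the latent space of a trained generative model, so every candidate decodes to an in-distribution input. The `decoder` below is an untrained placeholder standing in for a real pre-trained VAE/GAN decoder; the shapes and mutation scale are assumptions.

```python
import torch
import torch.nn as nn

LATENT_DIM = 16
# Placeholder for a pre-trained decoder (e.g. of a VAE); untrained here.
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                        nn.Linear(64, 28 * 28), nn.Sigmoid())

def mutate_in_latent_space(z: torch.Tensor, scale: float = 0.1) -> torch.Tensor:
    """Perturb the latent code, not the raw input, so the decoded test
    stays on (or near) the learned data manifold."""
    return z + scale * torch.randn_like(z)

z = torch.randn(1, LATENT_DIM)                # a valid starting point
with torch.no_grad():
    for _ in range(20):
        z = mutate_in_latent_space(z)
        test_input = decoder(z).view(28, 28)  # always a plausible input
        # here: feed test_input to the DNN under test and check its output
```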
- Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design [63.48989885374238]
When the infection prevalence of a disease is low, Dorfman showed 80 years ago that testing groups of people can prove more efficient than testing people individually.
Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting; Dorfman's classic scheme is sketched below for context.
arXiv Detail & Related papers (2020-04-26T23:41:33Z)
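For context, here is Dorfman's two-stage scheme in its simplest noiseless form: pool k samples, and retest individually only when the pool is positive. The paper's Bayesian sequential design is far more elaborate; the numbers below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, PREVALENCE = 1000, 8, 0.02            # illustrative numbers

infected = rng.random(N) < PREVALENCE
tests_used = 0
for start in range(0, N, K):
    group = infected[start:start + K]
    tests_used += 1                          # one pooled test per group
    if group.any():
        tests_used += len(group)             # retest each member individually

print(f"{tests_used} tests instead of {N} individual tests")
```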
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.