Introducing Ensemble Machine Learning Algorithms for Automatic Test Case Generation using Learning Based Testing
- URL: http://arxiv.org/abs/2409.04651v1
- Date: Fri, 6 Sep 2024 23:24:59 GMT
- Title: Introducing Ensemble Machine Learning Algorithms for Automatic Test Case Generation using Learning Based Testing
- Authors: Sheikh Md. Mushfiqur Rahman, Nasir U. Eisty
- Abstract summary: Ensemble methods are powerful machine learning algorithms that combine multiple models to enhance prediction capabilities and reduce generalization errors.
This study aims to systematically investigate the combination of ensemble methods and base classifiers for model inference in a Learning Based Testing (LBT) algorithm to generate fault-detecting test cases for SUTs as a proof of concept.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensemble methods are powerful machine learning algorithms that combine multiple models to enhance prediction capabilities and reduce generalization errors. However, their potential to generate effective test cases for fault detection in a System Under Test (SUT) has not been extensively explored. This study aims to systematically investigate the combination of ensemble methods and base classifiers for model inference in a Learning Based Testing (LBT) algorithm to generate fault-detecting test cases for SUTs as a proof of concept. We conduct a series of experiments on functions, generating effective test cases using different ensemble methods and classifier combinations for model inference in our proposed LBT method. We then compare the test suites based on their mutation score. The results indicate that Boosting ensemble methods show overall better performance in generating effective test cases, and the proposed method performs better than random generation. This analysis helps determine the appropriate ensemble methods for various types of functions. By incorporating ensemble methods into the LBT, this research contributes to the understanding of how to leverage ensemble methods for effective test case generation.
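As a hedged illustration of the loop the abstract describes, the sketch below pairs a Boosting ensemble (scikit-learn's GradientBoostingClassifier) with a toy stand-in for the SUT: a model is inferred from executed tests, the most uncertain candidate inputs are executed next, and model/SUT disagreements are the fault-revealing tests. The SUT function, candidate counts, and selection rule are illustrative assumptions, not the paper's exact procedure (which is evaluated via mutation score).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def sut(x):
    # hypothetical stand-in for the System Under Test (not from the paper)
    return int(x[0] > 0.5) ^ int(x[1] > 0.3)

rng = np.random.default_rng(0)
X = rng.random((50, 2))                     # initial random test inputs
y = np.array([sut(x) for x in X])           # observed SUT outputs

for _ in range(10):                         # LBT refinement loop (sketch)
    model = GradientBoostingClassifier().fit(X, y)   # model inference step
    cand = rng.random((200, 2))                      # candidate test inputs
    proba = model.predict_proba(cand)[:, 1]
    pick = np.argsort(np.abs(proba - 0.5))[:20]      # most uncertain candidates
    X_new = cand[pick]
    y_new = np.array([sut(x) for x in X_new])        # execute only the selected tests
    # disagreements would be added to the reported fault-detecting test suite
    fault_revealing = X_new[model.predict(X_new) != y_new]
    X = np.vstack([X, X_new])               # executed tests grow the training
    y = np.concatenate([y, y_new])          # set for the next inference round
```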
Related papers
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars [66.823588073584]
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of these exemplars in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on performance.
arXiv Detail & Related papers (2024-05-25T08:23:05Z)
- Test Case Recommendations with Distributed Representation of Code Syntactic Features [2.225268436173329]
We propose an automated approach which exploits both structural and semantic properties of source code methods and test cases.
The proposed approach initially trains a neural network to transform method-level source code, as well as unit tests, into distributed representations.
The model computes cosine similarity between the method's embedding and the previously-embedded training instances.
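The similarity step is straightforward to sketch. In the snippet below, random vectors merely stand in for the embeddings the paper learns with a trained neural network; the recommendation logic is otherwise as described.

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-ins for the learned method/test embeddings (assumption: in the paper
# these come from a trained neural network, not random vectors)
train_embeddings = rng.normal(size=(100, 128))
query = rng.normal(size=128)               # embedding of the method under test

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([cosine_sim(query, e) for e in train_embeddings])
best = int(np.argmax(scores))  # most similar training method; its associated
                               # unit test would be recommended for reuse
```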
arXiv Detail & Related papers (2023-10-04T21:42:01Z)
- Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
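A minimal sketch of that selection rule, assuming we already have candidate embeddings and the model's predictive distributions (both hypothetical here): rank candidates by similarity to the test example minus predictive entropy and keep the top k. The paper's exact scoring may differ.

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def select_exemplars(cand_emb, cand_probs, test_emb, k=4):
    # similarity to the test example...
    sim = cand_emb @ test_emb / (
        np.linalg.norm(cand_emb, axis=1) * np.linalg.norm(test_emb))
    # ...traded off against predictive uncertainty (low entropy preferred)
    return np.argsort(-(sim - entropy(cand_probs)))[:k]

rng = np.random.default_rng(0)
idx = select_exemplars(rng.normal(size=(50, 16)),        # candidate embeddings
                       rng.dirichlet(np.ones(3), size=50),  # predictive dists
                       rng.normal(size=16))              # test-example embedding
```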
arXiv Detail & Related papers (2023-05-23T17:16:04Z)
- Multiple Testing Framework for Out-of-Distribution Detection [27.248375922343616]
We study the problem of Out-of-Distribution (OOD) detection, that is, detecting whether a learning algorithm's output can be trusted at inference time.
We propose a definition for the notion of OOD that includes both the input distribution and the learning algorithm, which provides insights for the construction of powerful tests for OOD detection.
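One common way to instantiate this multiple-testing view is with conformal p-values plus a correction such as Benjamini-Hochberg; the sketch below follows that generic recipe and is not necessarily the paper's exact procedure.

```python
import numpy as np

def conformal_pvalue(score, cal_scores):
    # higher score = more OOD-like; calibration scores come from in-distribution data
    return (1 + np.sum(cal_scores >= score)) / (len(cal_scores) + 1)

def benjamini_hochberg(pvals, alpha=0.05):
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        reject[order[:np.max(np.nonzero(below)[0]) + 1]] = True
    return reject  # True = flagged as OOD

rng = np.random.default_rng(0)
cal = rng.normal(size=500)                                   # in-distribution scores
test = np.concatenate([rng.normal(size=20),                  # in-distribution
                       rng.normal(3.0, 1.0, size=5)])        # shifted (OOD-like)
pvals = np.array([conformal_pvalue(s, cal) for s in test])
ood_flags = benjamini_hochberg(pvals)
```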
arXiv Detail & Related papers (2022-06-20T00:56:01Z)
- Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control [67.52000805944924]
Learn then Test (LTT) is a framework for calibrating machine learning models.
Our main insight is to reframe the risk-control problem as multiple hypothesis testing.
We use our framework to provide new calibration methods for several core machine learning tasks with detailed worked examples in computer vision.
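In outline, LTT turns each candidate hyperparameter lambda into a null hypothesis "risk(lambda) > alpha", attaches a valid p-value (for bounded losses, one from Hoeffding's inequality), and keeps the lambdas that survive a family-wise-error-controlling procedure. The fixed-sequence variant sketched below is one of the procedures the framework admits.

```python
import numpy as np

def hoeffding_pvalue(emp_risk, n, alpha):
    # super-uniform p-value for H_lambda: R(lambda) > alpha (losses in [0, 1])
    return np.exp(-2 * n * max(alpha - emp_risk, 0.0) ** 2)

def ltt_fixed_sequence(lambdas, emp_risks, n, alpha=0.1, delta=0.1):
    # walk an ordered grid; stop at the first lambda whose null is not rejected
    valid = []
    for lam, r in zip(lambdas, emp_risks):
        if hoeffding_pvalue(r, n, alpha) > delta:
            break
        valid.append(lam)
    return valid  # each returned lambda controls risk at level alpha w.p. >= 1 - delta
```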
arXiv Detail & Related papers (2021-10-03T17:42:03Z)
- Group Testing with Non-identical Infection Probabilities [59.96266198512243]
We develop an adaptive group testing algorithm using the set formation method.
We show that our algorithm outperforms the state of the art, and performs close to the entropy lower bound.
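The set-formation method itself is not spelled out in this summary. As generic background only, adaptive group testing builds on pooled tests such as the classic binary-splitting baseline sketched below; the paper's contribution is forming the sets from non-identical infection probabilities, which is not shown here.

```python
def binary_splitting(pool_test, items):
    """Find all positives; pool_test(group) -> True iff the group has a positive."""
    if not items or not pool_test(items):
        return []                        # one pooled test clears the whole group
    if len(items) == 1:
        return list(items)
    mid = len(items) // 2
    return (binary_splitting(pool_test, items[:mid])
            + binary_splitting(pool_test, items[mid:]))

positives = {3, 7}                       # hypothetical ground truth
found = binary_splitting(lambda g: any(i in positives for i in g), list(range(10)))
```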
arXiv Detail & Related papers (2021-08-27T17:53:25Z)
- Hybrid Method Based on NARX models and Machine Learning for Pattern Recognition [0.0]
This work presents a novel technique that integrates the methodologies of machine learning and system identification to solve multiclass problems.
The efficiency of the method was tested on case studies from the machine learning literature, obtaining better absolute results than classical classification algorithms.
arXiv Detail & Related papers (2021-06-08T00:17:36Z)
- Online GANs for Automatic Performance Testing [0.10312968200748115]
We present a novel algorithm for automatic performance testing that uses an online variant of the Generative Adversarial Network (GAN).
The proposed approach does not require a prior training set or model of the system under test.
We consider that the presented algorithm serves as a proof of concept and we hope that it can spark a research discussion on the application of GANs to test generation.
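A compressed sketch of the idea, using PyTorch and an invented latency-style SUT: a surrogate network is fit online to the observed performance of executed tests, and a generator is trained against it to propose inputs predicted to stress the system. Network sizes, the SUT, and the schedule are assumptions, and this is closer to a surrogate-guided generator than a textbook GAN; the paper's formulation differs in detail.

```python
import torch
import torch.nn as nn

def sut_latency(x):                      # invented SUT: noisy response time vs. input
    return torch.sin(3 * x).abs() + 0.1 * torch.randn_like(x)

G = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))  # test generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # surrogate of SUT
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(200):                  # online loop: no pre-collected training set
    tests = G(torch.randn(16, 4)).detach()       # propose and freeze test inputs
    latency = sut_latency(tests)                 # execute them on the SUT
    loss_d = ((D(tests) - latency) ** 2).mean()  # fit the surrogate online
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    loss_g = -D(G(torch.randn(16, 4))).mean()    # generator seeks high predicted latency
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```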
arXiv Detail & Related papers (2021-04-21T06:03:27Z)
- An Efficient Model Inference Algorithm for Learning-based Testing of Reactive Systems [0.0]
Learning-based testing (LBT) is an emerging methodology to automate iterative black-box requirements testing of software systems.
We describe the IKL learning algorithm which is an active incremental learning algorithm for deterministic Kripke structures.
arXiv Detail & Related papers (2020-08-14T09:48:58Z)
- Cross-validation Confidence Intervals for Test Error [83.67415139421448]
This work develops central limit theorems for cross-validation and consistent estimators of its variance under weak stability conditions on the learning algorithm.
Results are the first of their kind for the popular choice of leave-one-out cross-validation.
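Under such a central limit theorem, the resulting interval takes the familiar form sketched below (with hypothetical per-example losses); the paper's contribution is proving the CLT and the variance estimator valid under weak stability, not the interval formula itself.

```python
import numpy as np

# per-example losses collected across cross-validation folds (hypothetical data)
rng = np.random.default_rng(0)
losses = rng.binomial(1, 0.12, size=1000).astype(float)

n = len(losses)
mean = losses.mean()
se = losses.std(ddof=1) / np.sqrt(n)          # consistent standard-error estimate
ci_95 = (mean - 1.96 * se, mean + 1.96 * se)  # CLT-based 95% interval for test error
```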
arXiv Detail & Related papers (2020-07-24T17:40:06Z)
- Bloom Origami Assays: Practical Group Testing [90.2899558237778]
Group testing is a well-studied problem with several appealing solutions.
Recent biological studies of COVID-19 impose practical constraints that are incompatible with traditional methods.
We develop a new method combining Bloom filters with belief propagation to scale to larger values of n (more than 100) with good empirical results.
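The belief-propagation half of the method is beyond a short sketch, but the Bloom-filter half is standard; the toy implementation below (hash count and bit-array size are arbitrary choices) shows the ingredient the paper builds on.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: set membership with false positives, no false negatives."""
    def __init__(self, size=1024, k=3):
        self.size, self.k, self.bits = size, k, bytearray(size)

    def _hashes(self, item):
        # derive k indices from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for idx in self._hashes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._hashes(item))

bf = BloomFilter()
bf.add("sample-42")
assert "sample-42" in bf   # present items are always found
```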
arXiv Detail & Related papers (2020-07-21T19:31:41Z)