The Integration of Machine Learning into Automated Test Generation: A
Systematic Mapping Study
- URL: http://arxiv.org/abs/2206.10210v5
- Date: Sun, 16 Apr 2023 03:43:15 GMT
- Title: The Integration of Machine Learning into Automated Test Generation: A
Systematic Mapping Study
- Authors: Afonso Fontes and Gregory Gay
- Abstract summary: We characterize emerging research, examining testing practices, researcher goals, ML techniques applied, evaluation, and challenges.
ML generates input for system, GUI, unit, performance, and combinatorial testing or improves the performance of existing generation methods.
- Score: 15.016047591601094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Context: Machine learning (ML) may enable effective automated test
generation.
Objective: We characterize emerging research, examining testing practices,
researcher goals, ML techniques applied, evaluation, and challenges.
Methods: We perform a systematic mapping on a sample of 124 publications.
Results: ML generates input for system, GUI, unit, performance, and
combinatorial testing or improves the performance of existing generation
methods. ML is also used to generate test verdicts, property-based, and
expected output oracles. Supervised learning - often based on neural networks -
and reinforcement learning - often based on Q-learning - are common, and some
publications also employ unsupervised or semi-supervised learning.
(Semi-/Un-)Supervised approaches are evaluated using both traditional testing
metrics and ML-related metrics (e.g., accuracy), while reinforcement learning
is often evaluated using testing metrics tied to the reward function.
Conclusion: Work-to-date shows great promise, but there are open challenges
regarding training data, retraining, scalability, evaluation complexity, ML
algorithms employed - and how they are applied - benchmarks, and replicability.
Our findings can serve as a roadmap and inspiration for researchers in this
field.
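The abstract notes that reinforcement learning, often Q-learning, is common and that such approaches are typically evaluated through testing metrics tied to the reward function. The sketch below is a minimal illustration of that pattern (not drawn from any surveyed paper): tabular Q-learning selects input events for a toy system under test, and the reward is the coverage gained by each step.

```python
import random
from collections import defaultdict

# Toy system under test: a small state machine; "coverage" is the set of
# transitions exercised so far. All names here are illustrative.
TRANSITIONS = {("idle", "login"): "authed", ("authed", "open"): "editing",
               ("editing", "save"): "authed", ("authed", "logout"): "idle"}
ACTIONS = ["login", "open", "save", "logout"]

def q_learning_test_generation(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.2):
    q = defaultdict(float)   # Q[(state, action)] -> estimated value
    covered = set()          # transitions exercised across all generated tests
    for _ in range(episodes):
        state = "idle"
        for _ in range(10):  # bounded length of one generated test sequence
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state = TRANSITIONS.get((state, action), state)
            # Reward is a testing metric: +1 only when a new transition is covered.
            if (state, action) in TRANSITIONS:
                reward = 0.0 if (state, action) in covered else 1.0
                covered.add((state, action))
            else:
                reward = 0.0
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return covered, q

if __name__ == "__main__":
    covered, _ = q_learning_test_generation()
    print(f"transitions covered: {len(covered)}/{len(TRANSITIONS)}")
```

Because the reward is defined directly in terms of coverage, the natural evaluation metric for the generated tests is that same coverage measure, which is exactly the coupling the abstract describes.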
Related papers
- Test Wars: A Comparative Study of SBST, Symbolic Execution, and LLM-Based Approaches to Unit Test Generation [11.037212298533069]
Large Language Models (LLMs) have opened up new opportunities to generate tests automatically.
This paper studies automatic test generation approaches based on three tools: EvoSuite for SBST, Kex for symbolic execution, and TestSpark for LLM-based test generation.
Our results show that while LLM-based test generation is promising, it falls behind traditional methods in terms of coverage.
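As a rough illustration of the LLM-based side of this comparison (this is not TestSpark's actual pipeline), prompt-based unit test generation can be sketched as follows; `llm_complete` and the prompt wording are hypothetical stand-ins.

```python
# Minimal sketch of prompt-based unit test generation. The prompt text, the
# filtering step, and the `llm_complete` helper are illustrative assumptions.
PROMPT_TEMPLATE = """You are generating JUnit 5 tests.
Write tests covering normal and edge cases for the following Java method.
Return only a compilable test class.

{source_code}
"""

def llm_complete(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a request to a hosted model)."""
    raise NotImplementedError

def generate_unit_tests(source_code: str) -> str:
    prompt = PROMPT_TEMPLATE.format(source_code=source_code)
    candidate = llm_complete(prompt)
    # In practice the candidate is compiled and executed; tests that do not
    # compile or that fail against the current implementation are filtered
    # or repaired before coverage is measured.
    return candidate
```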
arXiv Detail & Related papers (2025-01-17T13:48:32Z)
- The Potential of LLMs in Automating Software Testing: From Generation to Reporting [0.0]
Manual testing, while effective, can be time-consuming and costly, leading to an increased demand for automated methods.
Recent advancements in Large Language Models (LLMs) have significantly influenced software engineering.
This paper explores an agent-oriented approach to automated software testing, using LLMs to reduce human intervention and enhance testing efficiency.
arXiv Detail & Related papers (2024-12-31T02:06:46Z)
- Enabling Cost-Effective UI Automation Testing with Retrieval-Based LLMs: A Case Study in WeChat [8.80569452545511]
We introduce CAT to create cost-effective UI automation tests for industry apps by combining machine learning and Large Language Models.
CAT then employs machine learning techniques, with LLMs serving as a complement, to map the target element on the UI screen.
Our evaluations on the WeChat testing dataset demonstrate CAT's performance and cost-effectiveness, achieving 90% UI automation at a cost of $0.34.
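The mapping step can be pictured as a cheap-matcher-first, LLM-as-fallback routine. The sketch below is only an approximation of that idea under assumed interfaces; CAT's actual matching features, retrieval of similar historical cases, and prompting are not reproduced here.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity, standing in for a learned matching model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def map_target_element(step_description: str, screen_elements: list[str],
                       llm_fallback=None, threshold: float = 0.7) -> str:
    """Map a natural-language test step to a UI element on the current screen.

    The inexpensive matcher is tried first; the (costly) LLM is consulted only
    when no element matches confidently, which is what keeps the cost low.
    """
    best = max(screen_elements, key=lambda e: similarity(step_description, e))
    if similarity(step_description, best) >= threshold or llm_fallback is None:
        return best
    return llm_fallback(step_description, screen_elements)
```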
arXiv Detail & Related papers (2024-09-12T08:25:33Z)
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [83.90988015005934]
Uncertainty quantification is a key element of machine learning applications.
We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines.
We conduct a large-scale empirical investigation of UQ and normalization techniques across eleven tasks, identifying the most effective approaches.
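LM-Polygraph's API is not reproduced here, but one of the simplest information-based UQ baselines that such benchmarks typically include can be sketched directly: score a generated sequence by the mean negative log-likelihood of its tokens.

```python
import math

def mean_negative_log_likelihood(token_probs: list[float]) -> float:
    """Sequence-level uncertainty: average -log p(token) over the generated output.

    `token_probs` are the probabilities the model assigned to the tokens it
    emitted; higher scores mean a less confident (more uncertain) generation.
    """
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# A confident generation scores lower than a hesitant one.
print(mean_negative_log_likelihood([0.9, 0.8, 0.95]))  # ~0.13
print(mean_negative_log_likelihood([0.4, 0.3, 0.5]))   # ~0.94
```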
arXiv Detail & Related papers (2024-06-21T20:06:31Z)
- MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization [86.61052121715689]
MatPlotAgent is a model-agnostic framework designed to automate scientific data visualization tasks.
MatPlotBench is a high-quality benchmark consisting of 100 human-verified test cases.
arXiv Detail & Related papers (2024-02-18T04:28:28Z)
- Robust Analysis of Multi-Task Learning Efficiency: New Benchmarks on Light-Weighed Backbones and Effective Measurement of Multi-Task Learning Challenges by Feature Disentanglement [69.51496713076253]
In this paper, we focus on the efficiency aspects of existing MTL methods.
We first carry out large-scale experiments of the methods with smaller backbones and on the MetaGraspNet dataset as a new test ground.
We also propose Feature Disentanglement measure as a novel and efficient identifier of the challenges in MTL.
arXiv Detail & Related papers (2024-02-05T22:15:55Z)
- Active Learning Framework to Automate Network Traffic Classification [0.0]
The paper presents a novel Active Learning Framework (ALF) to address this topic.
ALF provides components that can be used to deploy an active learning loop and maintain an ALF instance that continuously evolves a dataset and ML model.
The resulting solution is deployable for IP flow-based analysis of high-speed (100 Gb/s) networks.
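ALF's actual components are not shown here; the sketch below is a generic uncertainty-sampling loop of the kind the abstract describes, written with scikit-learn and illustrative parameter values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_learning_loop(X_pool, y_pool, n_init=50, n_rounds=10, batch=20):
    """Generic least-confidence active-learning loop (illustrative, not ALF's code).

    y_pool plays the role of the annotator: in a deployment, the queried flows
    would be sent out for labeling instead of being looked up.
    """
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(n_rounds):
        model.fit(X_pool[labeled], y_pool[labeled])
        proba = model.predict_proba(X_pool[unlabeled])
        # Least confidence: query the flows the current model is least sure about.
        uncertainty = 1.0 - proba.max(axis=1)
        query = np.argsort(uncertainty)[-batch:]
        newly = [unlabeled[i] for i in query]
        labeled.extend(newly)
        unlabeled = [i for i in unlabeled if i not in newly]
    return model
```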
arXiv Detail & Related papers (2022-10-26T10:15:18Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
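As a rough sketch of that idea (the paper's actual algorithm, including how test points are actively selected for labeling, is not reproduced here), a surrogate model trained on the small labeled set can supply an approximate posterior over true labels, from which a metric such as accuracy is estimated for the unlabeled bulk. A random-forest ensemble stands in for the BNN below.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def estimate_accuracy(model_under_test, X_labeled, y_labeled, X_unlabeled):
    """Estimate the accuracy of `model_under_test` without labeling X_unlabeled.

    The ensemble's vote fractions approximate P(true label | x); the expected
    accuracy is the mean probability that the model under test's prediction
    matches the true label. Illustrative only; not ALT-MAS's implementation.
    """
    surrogate = RandomForestClassifier(n_estimators=200, random_state=0)
    surrogate.fit(X_labeled, y_labeled)
    posterior = surrogate.predict_proba(X_unlabeled)
    predictions = model_under_test.predict(X_unlabeled)
    class_index = {c: i for i, c in enumerate(surrogate.classes_)}
    cols = [class_index[p] for p in predictions]
    p_correct = posterior[np.arange(len(predictions)), cols]
    return float(p_correct.mean())
```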
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Manifold for Machine Learning Assurance [9.594432031144716]
We propose an analogous approach for machine-learning (ML) systems, using an ML technique that extracts a manifold from the high-dimensional training data, implicitly describing the required system.
It is then harnessed for a range of quality assurance tasks such as test adequacy measurement, test input generation, and runtime monitoring of the target ML system.
Preliminary experiments establish that the proposed manifold-based approach drives diversity in test data when used for test adequacy measurement, yields fault-revealing yet realistic test cases when used for test generation, and provides an independent means to assess the trustability of the target system's output when used for runtime monitoring.
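A very rough way to picture those three uses (the paper's actual technique is not specified here; PCA stands in for the learned manifold) is shown below: one fitted low-dimensional model supports an adequacy measure, input generation, and out-of-manifold monitoring.

```python
import numpy as np
from sklearn.decomposition import PCA

class ManifoldAssurance:
    """PCA as a stand-in manifold model; illustrative only."""

    def __init__(self, n_components=8):
        self.pca = PCA(n_components=n_components)

    def fit(self, X_train):
        self.pca.fit(X_train)
        # Calibrate a reconstruction-error threshold on the training data.
        self.threshold = np.percentile(self._errors(X_train), 99)
        return self

    def _errors(self, X):
        recon = self.pca.inverse_transform(self.pca.transform(X))
        return np.linalg.norm(X - recon, axis=1)

    def adequacy(self, X_test):
        """Test adequacy: spread of the test set over the latent dimensions."""
        return float(self.pca.transform(X_test).std(axis=0).mean())

    def generate(self, n=10):
        """Test generation: sample latent points and map them back to inputs."""
        rng = np.random.default_rng(0)
        z = rng.normal(scale=np.sqrt(self.pca.explained_variance_),
                       size=(n, self.pca.n_components_))
        return self.pca.inverse_transform(z)

    def monitor(self, x):
        """Runtime monitoring: flag inputs that fall far from the manifold."""
        return bool(self._errors(np.atleast_2d(x))[0] > self.threshold)
```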
arXiv Detail & Related papers (2020-02-08T11:39:01Z)