Reinforcement Learning for Test Case Prioritization
- URL: http://arxiv.org/abs/2012.11364v1
- Date: Fri, 18 Dec 2020 11:08:20 GMT
- Title: Reinforcement Learning for Test Case Prioritization
- Authors: João Lousada, Miguel Ribeiro
- Abstract summary: This paper extends recent studies on applying Reinforcement Learning to optimize testing strategies.
We evaluate its ability to adapt to new environments by testing it on novel data extracted from a financial institution.
We also studied the impact of using a Decision Tree (DT) approximator as a model for memory representation.
- Score: 0.24366811507669126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In modern software engineering, Continuous Integration (CI) has become an
indispensable step towards systematically managing the life cycles of software
development. Large companies struggle to keep the pipeline updated and
operational in a timely manner, due to the large number of changes and added
features that build on top of each other, with several developers working on
different platforms. Such software changes always carry a strong testing
component. As teams and projects grow, exhaustive testing quickly becomes
prohibitive, making it essential to select the most relevant test cases early,
without compromising software quality. This paper extends recent studies on
applying Reinforcement Learning to optimize testing strategies. We test its
ability to adapt to new environments on novel data extracted from a financial
institution, yielding a Normalized Average Percentage of Faults Detected
(NAPFD) of over $0.6$ using the Network Approximator and Test Case Failure
Reward. Additionally, we studied the impact of using a Decision Tree (DT)
approximator as a model for memory representation, which failed to produce
significant improvements relative to Artificial Neural Networks.
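For reference, NAPFD generalizes the classical APFD metric to prioritized suites that may not detect every fault: with $n$ scheduled tests, $m$ detectable faults, detection ratio $p$, and $TF_i$ the rank of the first test exposing fault $i$, NAPFD $= p - \frac{\sum_i TF_i}{n \cdot m} + \frac{p}{2n}$. The sketch below implements this standard formula, together with a test-case-failure-style reward in the spirit of the RETECS line of work the paper builds on; it is an illustration under those assumptions, not code from the paper.

```python
def napfd(first_detection_ranks, n_tests, n_faults):
    """Normalized Average Percentage of Faults Detected (standard formula).

    first_detection_ranks: 1-based rank, in the prioritized suite, of the
        first test that exposes each *detected* fault.
    n_tests: number of test cases scheduled (n).
    n_faults: number of faults the full suite could detect (m).
    """
    if n_tests == 0 or n_faults == 0:
        return 0.0
    p = len(first_detection_ranks) / n_faults  # fraction of faults detected
    return p - sum(first_detection_ranks) / (n_tests * n_faults) + p / (2 * n_tests)


def test_case_failure_reward(verdicts):
    """Sketch of a test-case-failure reward (a RETECS-style assumption):
    each scheduled test is rewarded with its own verdict, 1 for a failing
    (fault-revealing) test and 0 for a passing one."""
    return [float(v) for v in verdicts]


# Example: 2 of 3 faults found at ranks 1 and 4 in a 10-test schedule.
print(napfd([1, 4], n_tests=10, n_faults=3))  # ~0.533
```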
Related papers
- The Future of Software Testing: AI-Powered Test Case Generation and Validation [0.0]
This paper explores the transformative potential of AI in improving test case generation and validation.
It focuses on its ability to enhance efficiency, accuracy, and scalability in testing processes.
It also addresses key challenges associated with adapting AI for testing, including the need for high quality training data.
arXiv Detail & Related papers (2024-09-09T17:12:40Z)
- Which Combination of Test Metrics Can Predict Success of a Software Project? A Case Study in a Year-Long Project Course [1.553083901660282]
Testing plays an important role in securing the success of a software development project.
We investigate whether we can quantify the effects various types of testing have on functional suitability.
arXiv Detail & Related papers (2024-08-22T04:23:51Z)
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
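As a rough illustration of that association step, one could prompt an LLM with the error message and the candidate diffs and ask it to pick the culprit. Everything below (function name, tags, answer format) is a hypothetical placeholder, not the pipeline described in the paper:

```python
def build_failure_analysis_prompt(error_message, code_changes):
    """Assemble a prompt asking an LLM to rank which change likely broke a test.

    code_changes: list of (change_id, diff_text) pairs; purely illustrative.
    """
    changes = "\n\n".join(
        f"[{change_id}]\n{diff}" for change_id, diff in code_changes
    )
    return (
        "A test failed with the following error message:\n"
        f"{error_message}\n\n"
        "Candidate code changes:\n"
        f"{changes}\n\n"
        "Which change most likely caused the failure? Answer with its id."
    )
```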
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
- Fuzzy Inference System for Test Case Prioritization in Software Testing [0.0]
Test case prioritization (TCP) is a vital strategy to enhance testing efficiency.
This paper introduces a novel fuzzy logic-based approach to automate TCP.
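As a generic illustration of fuzzy inference applied to TCP (not the paper's actual rule base), test features such as recent failure rate and staleness can be fuzzified with membership functions, combined by simple rules, and defuzzified into a priority score:

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def fuzzy_priority(failure_rate, staleness):
    """Toy Mamdani-style inference: two inputs in [0, 1] -> priority in [0, 1].

    Rule 1: IF failure_rate is high THEN priority is high.
    Rule 2: IF staleness is high THEN priority is medium.
    Rule 3: IF failure_rate is low AND staleness is low THEN priority is low.
    """
    high_fr = triangular(failure_rate, 0.5, 1.0, 1.5)
    low_fr = triangular(failure_rate, -0.5, 0.0, 0.5)
    high_st = triangular(staleness, 0.5, 1.0, 1.5)
    low_st = triangular(staleness, -0.5, 0.0, 0.5)

    # Rule strengths paired with output centers; centroid defuzzification.
    rules = [(high_fr, 0.9), (high_st, 0.5), (min(low_fr, low_st), 0.1)]
    total = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / total if total else 0.0
```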
arXiv Detail & Related papers (2024-04-25T08:08:54Z)
- Automated Test Case Repair Using Language Models [0.5708902722746041]
Unrepaired broken test cases can degrade test suite quality and disrupt the software development process.
We present TaRGet, a novel approach leveraging pre-trained code language models for automated test case repair.
TaRGet treats test repair as a language translation task, employing a two-step process to fine-tune a language model.
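Treating repair as translation amounts to building source/target pairs for sequence-to-sequence fine-tuning. The data-preparation sketch below is hypothetical; the tags and field names are assumptions, not TaRGet's actual input format:

```python
def to_translation_example(broken_test, sut_diff, repaired_test):
    """Format one fine-tuning pair: source = broken test plus repair context
    (e.g., the change to the system under test), target = the repaired test."""
    source = (
        "[BROKEN_TEST]\n" + broken_test +
        "\n[SUT_CHANGE]\n" + sut_diff
    )
    return {"source": source, "target": repaired_test}
```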
arXiv Detail & Related papers (2024-01-12T18:56:57Z)
- TeST: Test-time Self-Training under Distribution Shift [99.68465267994783]
Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time.
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
arXiv Detail & Related papers (2022-09-23T07:47:33Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact has been a reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test-time adaptation; however, each introduces additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
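MEMO's published recipe is to augment a single test input several times and minimize the entropy of the model's marginal (average) prediction over those augmentations. A condensed PyTorch sketch, where `model`, `optimizer`, and `augment` are caller-supplied placeholders:

```python
import torch


def memo_adaptation_step(model, optimizer, x, augment, n_aug=8):
    """One MEMO-style step: minimize the entropy of the marginal prediction
    over several augmentations of a single test input x (illustrative)."""
    views = torch.stack([augment(x) for _ in range(n_aug)])
    probs = torch.softmax(model(views), dim=-1).mean(dim=0)  # marginal
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```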
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Neural Network Embeddings for Test Case Prioritization [0.24366811507669126]
We have developed a new tool called Neural Network Embedding for Test Case Prioritization (NNE-TCP).
NNE-TCP analyses which files were modified when there was a test status transition and learns relationships between these files and tests by mapping them into multidimensional vectors.
We show for the first time that the connection between modified files and tests is relevant, and that it is competitive with traditional methods.
arXiv Detail & Related papers (2020-12-18T10:33:28Z)
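NNE-TCP's core idea, learning a shared embedding space for modified files and test cases from historical co-occurrence, can be illustrated with a small PyTorch sketch; the dimensions and training setup below are assumptions, not the tool's actual architecture:

```python
import torch
import torch.nn as nn


class FileTestEmbedding(nn.Module):
    """Embed files and tests into one vector space; a dot product scores how
    strongly a modified file is associated with a test (illustrative)."""

    def __init__(self, n_files, n_tests, dim=32):
        super().__init__()
        self.file_emb = nn.Embedding(n_files, dim)
        self.test_emb = nn.Embedding(n_tests, dim)

    def forward(self, file_ids, test_ids):
        # Similarity score for each (file, test) pair in the batch.
        return (self.file_emb(file_ids) * self.test_emb(test_ids)).sum(dim=-1)


# Training on (file, test, label) triples, where label = 1 if the test
# changed status when the file was modified, could use a binary
# cross-entropy loss, e.g.:
#   loss = nn.BCEWithLogitsLoss()(model(file_ids, test_ids), labels)
```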