On Test Sequence Generation using Multi-Objective Particle Swarm Optimization
- URL: http://arxiv.org/abs/2404.06568v1
- Date: Tue, 9 Apr 2024 18:35:21 GMT
- Title: On Test Sequence Generation using Multi-Objective Particle Swarm Optimization
- Authors: Zain Iqbal, Kashif Zafar, Aden Iqbal, Ayesha Khan
- Abstract summary: Software testing is an important and essential part of the software development life cycle.
In the software industry, testing costs can account for about 35% to 40% of the total cost of a software project.
- Score: 0.2999888908665658
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Software testing is an important and essential part of the software development life cycle and accounts for almost one-third of system development costs. In the software industry, testing costs can account for about 35% to 40% of the total cost of a software project. Therefore, providing efficient ways to test software is critical to reduce cost, time, and effort. Black-box testing and White-box testing are two essential components of software testing. Black-box testing focuses on the software's functionality, while White-box testing examines its internal structure. These tests contribute significantly to ensuring program coverage, which remains one of the main goals of the software testing paradigm. One of the main problems in this area is the identification of appropriate paths for program coverage, which are referred to as test sequences. Creating an automated and effective test sequence is a challenging task in the software testing process. In the proposed methodology, the challenge of "test sequence generation" is considered a multi-objective optimization problem that includes the Oracle cost and the path, both of which are optimized in a symmetrical manner to achieve optimal software testing. Multi-Objective Particle Swarm Optimization (MOPSO) is used to represent the test sequences with the highest priority and the lowest Oracle cost as optimal. The performance of the implemented approach is compared with the Multi-Objective Firefly Algorithm (MOFA) for generating test sequences. The MOPSO-based solution outperforms the MOFA-based approach and simultaneously provides the optimal solution for both objectives.
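The abstract describes the approach only at a high level, so the following is a minimal sketch of a generic bi-objective MOPSO with an external Pareto archive, written in Python. The fixed-length weight-vector encoding, the placeholder objectives `path_priority_cost` and `oracle_cost`, and all parameter values (`SWARM_SIZE`, `W`, `C1`, `C2`) are illustrative assumptions, not the paper's actual formulation.

```python
import random

# Hypothetical encoding (assumption): each particle is a vector of DIM path
# weights in [0, 1]; decoding a weight vector into a concrete test sequence is
# problem-specific and not shown here.
DIM = 10
SWARM_SIZE = 30
ITERATIONS = 100
W, C1, C2 = 0.4, 1.5, 1.5  # inertia and acceleration coefficients (assumed values)


def path_priority_cost(x):
    # Placeholder for the "path" objective (lower is better).
    return sum((xi - 0.25) ** 2 for xi in x)


def oracle_cost(x):
    # Placeholder for the Oracle-cost objective (lower is better).
    return sum((xi - 0.75) ** 2 for xi in x)


def evaluate(x):
    return (path_priority_cost(x), oracle_cost(x))


def dominates(f, g):
    # f dominates g if it is no worse in every objective and strictly better in one.
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))


def update_archive(archive, particle, fitness):
    # Keep an external archive of non-dominated (position, fitness) pairs.
    if any(dominates(f, fitness) for _, f in archive):
        return
    archive[:] = [(p, f) for p, f in archive if not dominates(fitness, f)]
    archive.append((list(particle), fitness))


def mopso():
    swarm = [[random.random() for _ in range(DIM)] for _ in range(SWARM_SIZE)]
    velocity = [[0.0] * DIM for _ in range(SWARM_SIZE)]
    pbest = [(list(p), evaluate(p)) for p in swarm]
    archive = []
    for p, f in pbest:
        update_archive(archive, p, f)

    for _ in range(ITERATIONS):
        for i, p in enumerate(swarm):
            leader, _ = random.choice(archive)  # global guide drawn from the Pareto archive
            for d in range(DIM):
                r1, r2 = random.random(), random.random()
                velocity[i][d] = (W * velocity[i][d]
                                  + C1 * r1 * (pbest[i][0][d] - p[d])
                                  + C2 * r2 * (leader[d] - p[d]))
                p[d] = min(1.0, max(0.0, p[d] + velocity[i][d]))
            f = evaluate(p)
            if dominates(f, pbest[i][1]):
                pbest[i] = (list(p), f)
            update_archive(archive, p, f)
    return archive  # approximated Pareto front of test-sequence encodings


if __name__ == "__main__":
    for _, (path_obj, oracle_obj) in sorted(mopso(), key=lambda e: e[1]):
        print(f"path objective: {path_obj:.4f}  oracle cost: {oracle_obj:.4f}")
```

In this sketch the global guide for each particle is drawn uniformly from the archive; published MOPSO variants typically use crowding-distance or grid-based leader selection, and the paper's implementation may differ.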
Related papers
- CodeDPO: Aligning Code Models with Self Generated and Verified Source Code [52.70310361822519]
We propose CodeDPO, a framework that integrates preference learning into code generation to improve two key code preference factors: code correctness and efficiency.
CodeDPO employs a novel dataset construction method, utilizing a self-generation-and-validation mechanism that simultaneously generates and evaluates code and test cases.
arXiv Detail & Related papers (2024-10-08T01:36:15Z)
- Segment-Based Test Case Prioritization: A Multi-objective Approach [8.972346309150199]
Test case prioritization (TCP) is a cost-efficient solution to schedule test cases in an execution order that maximizes an objective function.
We introduce a multi-objective optimization approach to prioritize UI test cases using evolutionary search algorithms and four coverage criteria.
Our approach significantly outperforms other methods in terms of Average Percentage of Faults Detected (APFD) and APFD with Cost.
arXiv Detail & Related papers (2024-08-01T16:51:01Z)
- Fuzzy Inference System for Test Case Prioritization in Software Testing [0.0]
Test case prioritization (TCP) is a vital strategy to enhance testing efficiency.
This paper introduces a novel fuzzy logic-based approach to automate TCP.
arXiv Detail & Related papers (2024-04-25T08:08:54Z)
- Using Machine Learning To Identify Software Weaknesses From Software Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested.
arXiv Detail & Related papers (2023-08-10T13:19:10Z)
- FuzzyFlow: Leveraging Dataflow To Find and Squash Program Optimization Bugs [92.47146416628965]
FuzzyFlow is a fault localization and test case extraction framework designed to test program optimizations.
We leverage dataflow program representations to capture a fully reproducible system state and area-of-effect for optimizations.
To reduce testing time, we design an algorithm for minimizing test inputs, trading off memory for recomputation.
arXiv Detail & Related papers (2023-06-28T13:00:17Z)
- LTM: Scalable and Black-box Similarity-based Test Suite Minimization based on Language Models [0.6562256987706128]
Test suites tend to grow when software evolves, making it often infeasible to execute all test cases with the allocated testing budgets.
Test suite minimization (TSM) is employed to improve the efficiency of software testing by removing redundant test cases.
We propose LTM (Language model-based Test suite Minimization), a novel, scalable, and black-box similarity-based TSM approach.
arXiv Detail & Related papers (2023-04-03T22:16:52Z)
- Learning Performance-Improving Code Edits [107.21538852090208]
We introduce a framework for adapting large language models (LLMs) to high-level program optimization.
First, we curate a dataset of more than 77,000 pairs of competitive C++ programming submissions, capturing performance-improving edits made by human programmers.
For prompting, we propose retrieval-based few-shot prompting and chain-of-thought; for finetuning, we use performance-conditioned generation and synthetic data augmentation based on self-play.
arXiv Detail & Related papers (2023-02-15T18:59:21Z)
- Evaluating Search-Based Software Microbenchmark Prioritization [6.173678645884399]
This paper empirically evaluates single- and multi-objective search-based microbenchmark prioritization techniques.
We find that search algorithms (SAs) are only competitive with but do not outperform the best greedy, coverage-based baselines.
arXiv Detail & Related papers (2022-11-24T10:45:39Z)
- Uncertainty-Aware Search Framework for Multi-Objective Bayesian Optimization [40.40632890861706]
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations.
We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation.
arXiv Detail & Related papers (2022-04-12T16:50:48Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design [63.48989885374238]
When the infection prevalence of a disease is low, Dorfman showed 80 years ago that testing groups of people can prove more efficient than testing people individually.
Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting.
arXiv Detail & Related papers (2020-04-26T23:41:33Z)