Evaluating the Performance of a D-Wave Quantum Annealing System for Feature Subset Selection in Software Defect Prediction
- URL: http://arxiv.org/abs/2410.16469v1
- Date: Mon, 21 Oct 2024 19:58:25 GMT
- Title: Evaluating the Performance of a D-Wave Quantum Annealing System for Feature Subset Selection in Software Defect Prediction
- Authors: Ashis Kumar Mandal, Md Nadim, Chanchal K. Roy, Banani Roy, Kevin A. Schneider
- Abstract summary: We use a D-Wave Quantum Processing Unit (QPU) solver as a QA solver for feature subset selection.
We evaluate the performance of this approach using multiple software defect datasets from the AEEM, JIRA, and NASA projects.
Our experimental results demonstrate that QA-based feature subset selection can enhance software defect prediction.
- Score: 8.626933144631955
- License:
- Abstract: Predicting software defects early in the development process not only enhances the quality and reliability of the software but also decreases the cost of development. A wide range of machine learning techniques can be employed to create software defect prediction models, but the effectiveness and accuracy of these models are often influenced by the choice of an appropriate feature subset. Since finding the optimal feature subset is computationally intensive, heuristic and metaheuristic approaches are commonly employed to identify near-optimal solutions within a reasonable time frame. Recently, the quantum computing paradigm of quantum annealing (QA) has been deployed to find solutions to complex optimization problems. This opens up the possibility of addressing the feature subset selection problem with a QA machine. Although several strategies have been proposed for feature subset selection using a QA machine, little exploration has been done regarding the viability of a QA machine for feature subset selection in software defect prediction. This study investigates the potential of a D-Wave QA system for this task: we formulate a mutual information (MI)-based filter approach as an optimization problem and utilize a D-Wave Quantum Processing Unit (QPU) solver as a QA solver for feature subset selection. We evaluate the performance of this approach using multiple software defect datasets from the AEEM, JIRA, and NASA projects. We also utilize a D-Wave classical solver for comparative analysis. Our experimental results demonstrate that QA-based feature subset selection can enhance software defect prediction. While the D-Wave QPU solver achieves prediction performance competitive with the classical solver, it significantly reduces the time required to identify the best feature subset compared to its classical counterpart.
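The abstract describes casting a mutual information (MI)-based filter as an optimization problem solved on a D-Wave system. The snippet below is a minimal sketch, under assumptions, of one common way to express such an objective as a binary quadratic model with the Ocean SDK's `dimod`: linear terms reward feature-class relevance, quadratic terms penalize feature-feature redundancy, and a soft penalty keeps the subset size near `k`. The weighting `alpha`, the penalty strength, and the cardinality constraint are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch (not the authors' exact formulation): MI-based filter
# feature selection expressed as a binary quadratic model for D-Wave's dimod.
import numpy as np
import dimod
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score


def build_feature_selection_bqm(X, y, k, alpha=1.0, penalty=5.0):
    """Ground state of the returned BQM selects roughly k informative features.

    Linear biases reward feature-class relevance I(f_i; y); quadratic biases
    penalize pairwise redundancy I(f_i; f_j); the `penalty` terms softly
    enforce the cardinality constraint (sum_i x_i - k)^2.
    """
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y)  # estimates I(f_i; y) per feature
    bqm = dimod.BinaryQuadraticModel('BINARY')

    for i in range(n_features):
        # -relevance rewards selecting informative features (minimization);
        # penalty * (1 - 2k) is the linear part of the expanded cardinality penalty.
        bqm.add_linear(i, -relevance[i] + penalty * (1 - 2 * k))

    for i in range(n_features):
        for j in range(i + 1, n_features):
            redundancy = mutual_info_score(X[:, i], X[:, j])  # I(f_i; f_j)
            # alpha * redundancy discourages correlated pairs; 2 * penalty is
            # the quadratic part of the cardinality penalty.
            bqm.add_quadratic(i, j, alpha * redundancy + 2 * penalty)
    return bqm


# Toy instance solved exactly so the sketch runs without QPU access.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 6))      # 6 discrete features
y = (X[:, 0] + X[:, 1] > 2).astype(int)    # label depends only on features 0 and 1
bqm = build_feature_selection_bqm(X, y, k=2)
best = dimod.ExactSolver().sample(bqm).first.sample
print("selected features:", sorted(i for i, v in best.items() if v == 1))
```

On actual QPU hardware, the same `bqm` would typically be submitted through `EmbeddingComposite(DWaveSampler())` from `dwave.system`, which requires Leap access; `ExactSolver` is used here only so the sketch runs locally on a toy instance.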
Related papers
- Approximating under the Influence of Quantum Noise and Compute Power [3.0302054726041017]
The quantum approximate optimisation algorithm (QAOA) is at the core of many scenarios that aim to combine the power of quantum computers and classical high-performance computing appliances for optimisation.
We study factors that affect solution quality and temporal behaviour of four QAOA variants using comprehensive density-matrix-based simulations.
Our results, accompanied by a comprehensive reproduction package, show strong differences between QAOA variants that can be pinpointed to narrow and specific effects.
arXiv Detail & Related papers (2024-08-05T07:48:49Z) - Analyzing the Effectiveness of Quantum Annealing with Meta-Learning [7.251305766151019]
We propose a new methodology to study the effectiveness of Quantum Annealing (QA) based on meta-learning models.
We build a dataset composed of more than five thousand instances of ten different optimization problems.
We define a set of more than a hundred features to describe their characteristics, and solve them with both QA and three classical solvers.
arXiv Detail & Related papers (2024-08-01T14:03:11Z) - Bayesian Parameterized Quantum Circuit Optimization (BPQCO): A task and hardware-dependent approach [49.89480853499917]
Variational quantum algorithms (VQA) have emerged as a promising quantum alternative for solving optimization and machine learning problems.
In this paper, we experimentally demonstrate the influence of the circuit design on the performance obtained for two classification problems.
We also study the degradation of the obtained circuits in the presence of noise when simulating real quantum computers.
arXiv Detail & Related papers (2024-04-17T11:00:12Z) - A novel feature selection method based on quantum support vector machine [3.6953740776904924]
Feature selection is critical in machine learning to reduce dimensionality and improve model accuracy and efficiency.
We propose a novel method, quantum support vector machine feature selection (QSVMF), integrating quantum support vector machines with genetic algorithm.
We apply QSVMF for feature selection on a breast cancer dataset, comparing the performance of QSVMF against classical approaches.
arXiv Detail & Related papers (2023-11-29T14:08:26Z) - Pointer Networks with Q-Learning for Combinatorial Optimization [55.2480439325792]
We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets).
Our empirical results demonstrate the efficacy of this approach, also testing the model in unstable environments.
arXiv Detail & Related papers (2023-11-05T12:03:58Z) - Probabilistic Sampling of Balanced K-Means using Adiabatic Quantum Computing [93.83016310295804]
AQCs make it possible to implement problems of research interest, which has sparked the development of quantum representations for computer vision tasks.
In this work, we explore the potential of using this information for probabilistic balanced k-means clustering.
Instead of discarding non-optimal solutions, we propose to use them to compute calibrated posterior probabilities with little additional compute cost.
This allows us to identify ambiguous solutions and data points, which we demonstrate on a D-Wave AQC on synthetic tasks and real visual data (a generic sketch of this sample-weighting idea appears after this list).
arXiv Detail & Related papers (2023-10-18T17:59:45Z) - A Review on Quantum Approximate Optimization Algorithm and its Variants [47.89542334125886]
The Quantum Approximate Optimization Algorithm (QAOA) is a highly promising variational quantum algorithm that aims to solve intractable optimization problems.
This comprehensive review offers an overview of the current state of QAOA, encompassing its performance analysis in diverse scenarios.
We conduct a comparative study of selected QAOA extensions and variants, while exploring future prospects and directions for the algorithm.
arXiv Detail & Related papers (2023-06-15T15:28:12Z) - Feature Selection for Classification with QAOA [11.516147824168732]
Feature selection is of great importance in Machine Learning, where it can be used to reduce the dimensionality of classification, ranking and prediction problems.
We consider in particular a quadratic feature selection problem that can be tackled with the Quantum Approximate Optimization Algorithm (QAOA), which has already been employed in optimization.
In our experiments, we consider seven different real-world datasets with dimensionality up to 21 and run QAOA on both a quantum simulator and, for small datasets, the 7-qubit IBM (ibm-perth) quantum computer.
arXiv Detail & Related papers (2022-11-05T09:28:53Z) - Adiabatic Quantum Computing for Multi Object Tracking [170.8716555363907]
Multi-Object Tracking (MOT) is most often approached in the tracking-by-detection paradigm, where object detections are associated through time.
As these optimization problems are often NP-hard, they can only be solved exactly for small instances on current hardware.
We show that our approach is competitive compared with state-of-the-art optimization-based approaches, even when using off-the-shelf integer programming solvers.
arXiv Detail & Related papers (2022-02-17T18:59:20Z) - Machine Learning Techniques for Software Quality Assurance: A Survey [5.33024001730262]
We discuss various approaches in both fault prediction and test case prioritization.
Recent studies show that deep learning algorithms for fault prediction help to bridge the gap between programs' semantics and fault prediction features.
arXiv Detail & Related papers (2021-04-29T00:37:27Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
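The probabilistic balanced k-means entry above mentions reusing non-optimal annealer samples to obtain posterior probabilities instead of discarding them. Below is a generic sketch of that idea, assuming a simple Boltzmann weighting of sampled energies; the temperature `T` and the aggregation over duplicate samples are illustrative assumptions, not the cited paper's calibration procedure.

```python
# Generic sketch (assumed Boltzmann weighting, not the cited paper's exact
# calibration): turn a list of annealer samples and energies into probabilities.
import numpy as np


def sample_probabilities(samples, energies, T=1.0):
    """Weight each sampled assignment by exp(-E / T) and normalize."""
    energies = np.asarray(energies, dtype=float)
    weights = np.exp(-(energies - energies.min()) / T)  # shift for numerical stability
    weights /= weights.sum()
    probs = {}
    for assignment, w in zip(samples, weights):
        key = tuple(assignment)
        probs[key] = probs.get(key, 0.0) + w  # merge duplicate assignments
    return probs


# Example: three samples of a two-variable problem returned by an annealer.
samples = [(0, 1), (1, 0), (0, 1)]
energies = [-1.2, -0.7, -1.2]
print(sample_probabilities(samples, energies))
```

High-probability but non-ground-state assignments can then be used to flag ambiguous solutions or data points, in the spirit of the summary above.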
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content above) and is not responsible for any consequences.