Benchmarking of GPU-optimized Quantum-Inspired Evolutionary Optimization Algorithm using Functional Analysis
- URL: http://arxiv.org/abs/2412.08992v1
- Date: Thu, 12 Dec 2024 06:47:23 GMT
- Title: Benchmarking of GPU-optimized Quantum-Inspired Evolutionary Optimization Algorithm using Functional Analysis
- Authors: Kandula Eswara Sai Kumar, Supreeth B S, Rajas Dalvi, Aman Mittal, Aakif Akhtar, Ferdin Don Bosco, Rut Lineswala, Abhishek Chopra
- Abstract summary: This article presents a comparative analysis of GPU-parallelized implementations of the quantum-inspired evolutionary optimization (QIEO) approach and the genetic algorithm (GA).
The results demonstrate that QIEO performs better than GA for these functions.
- Abstract: This article presents a comparative analysis of GPU-parallelized implementations of the quantum-inspired evolutionary optimization (QIEO) approach and one of the well-known classical metaheuristic techniques, the genetic algorithm (GA). The study assesses the performance of both algorithms on highly non-linear, non-convex, and non-separable function optimization problems, viz., Ackley, Rosenbrock, and Rastrigin, which are representative of complex real-world optimization problems. The performance of the two algorithms is compared by varying the population size while keeping all other parameters constant, and recording both the fitness value reached and the number of function evaluations required for convergence. The results demonstrate that QIEO performs better than GA for these functions, achieving the target fitness with fewer function evaluations and reducing the total optimization time by approximately a factor of three for the Ackley function and a factor of four for the Rosenbrock and Rastrigin functions. Furthermore, QIEO exhibits greater consistency across trials, with a steady convergence rate that leads to a more uniform number of function evaluations, highlighting its reliability in solving challenging optimization problems. The findings indicate that QIEO is a promising alternative to GA for this kind of function.
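For readers who want to see the benchmark setup concretely, the sketch below gives NumPy definitions of the three test functions named in the abstract (Ackley, Rosenbrock, Rastrigin) together with a toy, CPU-only, binary-encoded quantum-inspired evolutionary loop in the spirit of classical QIEA formulations. The population size, bit precision, rotation step, search bounds, and the simplified probability-nudge update are illustrative assumptions, not the authors' GPU-parallelized implementation.

```python
# Minimal sketch: benchmark functions from the abstract plus an illustrative
# quantum-inspired evolutionary loop. All hyperparameters are assumed values.
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2 * np.pi):
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(c * x)) / n) + a + np.e)

def rosenbrock(x):
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x))

def decode(bits, lo, hi):
    """Map one individual's bit string to a real vector in [lo, hi]^dim."""
    dim, n_bits = bits.shape
    weights = 2.0 ** np.arange(n_bits)[::-1]
    frac = bits @ weights / (2.0**n_bits - 1.0)
    return lo + frac * (hi - lo)

def qiea(objective, dim=2, lo=-5.0, hi=5.0, pop=50, n_bits=16,
         generations=200, dtheta=0.05 * np.pi, seed=0):
    rng = np.random.default_rng(seed)
    # Each bit is represented by its probability of being observed as "1",
    # initialised to the uniform superposition (p = 0.5).
    prob = np.full((pop, dim, n_bits), 0.5)
    best_x, best_f, evals = None, np.inf, 0
    for _ in range(generations):
        # "Observe" the quantum population to obtain classical bit strings.
        bits = (rng.random(prob.shape) < prob).astype(float)
        for i in range(pop):
            x = decode(bits[i], lo, hi)
            f = objective(x)
            evals += 1
            if f < best_f:
                best_f, best_x, best_bits = f, x, bits[i].copy()
        # Rotation-gate-style update, simplified here to a probability nudge
        # toward the bit pattern of the best individual found so far.
        prob += dtheta * (best_bits - prob)
        np.clip(prob, 0.01, 0.99, out=prob)
    return best_x, best_f, evals

if __name__ == "__main__":
    for name, fn in [("Ackley", ackley), ("Rosenbrock", rosenbrock),
                     ("Rastrigin", rastrigin)]:
        x, f, evals = qiea(fn)
        print(f"{name}: best f = {f:.4f} after {evals} evaluations")
```

In the paper's experiments the population size is the swept parameter while everything else is held fixed; in this sketch the same sweep could be mimicked by varying the `pop` argument and recording the returned evaluation count.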
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z) - Global Optimization of Gaussian Process Acquisition Functions Using a Piecewise-Linear Kernel Approximation [2.3342885570554652]
We introduce a piecewise-linear approximation for Gaussian process kernels and a corresponding MIQP representation for acquisition functions.
We empirically demonstrate the framework on synthetic functions, constrained benchmarks, and hyperparameter tuning tasks.
arXiv Detail & Related papers (2024-10-22T10:56:52Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization in the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Advancements in Optimization: Adaptive Differential Evolution with Diversification Strategy [0.0]
The study employs single-objective optimization in a two-dimensional space and runs ADEDS on each of the benchmark functions with multiple iterations.
ADEDS consistently outperforms standard DE for a variety of optimization challenges, including functions with numerous local optima, plate-shaped, valley-shaped, stretched-shaped, and noisy functions.
arXiv Detail & Related papers (2023-10-02T10:05:41Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Bayesian Optimization over Permutation Spaces [30.650753803587794]
We propose and evaluate two algorithms for BO over Permutation Spaces (BOPS).
We theoretically analyze the performance of BOPS-T to show that its regret grows sub-linearly.
Our experiments on multiple synthetic and real-world benchmarks show that both BOPS-T and BOPS-H perform better than the state-of-the-art BO algorithm for combinatorial spaces.
arXiv Detail & Related papers (2021-12-02T08:20:50Z) - Performance comparison of optimization methods on variational quantum algorithms [2.690135599539986]
Variational quantum algorithms (VQAs) offer a promising path towards using near-term quantum hardware for applications in academic and industrial research.
We study the performance of four commonly used gradient-free optimization methods: SLSQP, COBYLA, CMA-ES, and SPSA.
arXiv Detail & Related papers (2021-11-26T12:13:20Z) - Quantum variational optimization: The role of entanglement and problem hardness [0.0]
We study the role of entanglement, the structure of the variational quantum circuit, and the structure of the optimization problem.
Our numerical results indicate an advantage in adapting the distribution of entangling gates to the problem's topology.
We find evidence that applying conditional value at risk type cost functions improves the optimization, increasing the probability of overlap with the optimal solutions.
arXiv Detail & Related papers (2021-03-26T14:06:54Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic bilevel optimizer named stocBiO, which features a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)