VAPI: Vectorization of Algorithm for Performance Improvement
- URL: http://arxiv.org/abs/2308.01269v2
- Date: Mon, 21 Aug 2023 16:55:17 GMT
- Title: VAPI: Vectorization of Algorithm for Performance Improvement
- Authors: Mahmood Yashar and Tarik A. Rashid
- Abstract summary: Vectorization is a technique for converting an algorithm that operates on a single value at a time into one that operates on a collection of values at a time, so that it executes more rapidly.
Vectorization also replaces multiple iterations with a single operation, which improves the algorithm's speed.
The objective of this study is to apply the vectorization technique to one of the metaheuristic algorithms and compare the results of the vectorized algorithm with those of its non-vectorized counterpart.
- Score: 5.835939416417458
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents the vectorization of metaheuristic algorithms as the
first stage of a vectorized optimization implementation. Vectorization is a
technique for converting an algorithm that operates on a single value at a time
into one that operates on a collection of values at a time, so that it executes
more rapidly. Vectorization also replaces multiple iterations with a single
operation, which improves the algorithm's speed and makes the algorithm simpler
and easier to implement. Optimizing an algorithm through vectorization improves
the program's performance: it requires less time, runs long-running test
functions faster, can execute test functions that are impractical for
non-vectorized algorithms, and reduces iterations and time complexity.
Converting an algorithm to operate on several values at once, thereby enhancing
its speed and efficiency, is a solution for long running times and complicated
algorithms. The objective of this study is to apply the vectorization technique
to one of the metaheuristic algorithms and compare the results of the
vectorized algorithm with those of its non-vectorized counterpart.
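The transformation the abstract describes can be sketched in NumPy: a per-individual, per-dimension loop is replaced by a single array operation over the whole population. This is a minimal illustration, not the paper's implementation; the sphere benchmark function and the population shape are assumptions chosen for the example.

```python
import numpy as np

def sphere_loop(population):
    """Non-vectorized: evaluate the sphere function one value at a time."""
    fitness = []
    for individual in population:
        total = 0.0
        for x in individual:          # one scalar operation per dimension
            total += x * x
        fitness.append(total)
    return np.array(fitness)

def sphere_vectorized(population):
    """Vectorized: evaluate the entire population in one array operation."""
    return np.sum(population ** 2, axis=1)

# Illustrative population: 100 individuals, 30 dimensions (assumed sizes).
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(100, 30))

# Both versions compute the same fitness values; the vectorized one
# replaces 100 * 30 scalar iterations with a single bulk operation.
assert np.allclose(sphere_loop(pop), sphere_vectorized(pop))
```

The same pattern applies to the position-update and selection steps of a metaheuristic: any loop over the population that applies the same arithmetic to each individual can usually be rewritten as one whole-array expression.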
Related papers
- Performance Evaluation of Evolutionary Algorithms for Analog Integrated
Circuit Design Optimisation [0.0]
An automated sizing approach for analog circuits is presented in this paper.
A targeted search of the search space has been implemented using a particle generation function and a repair-bounds function.
The algorithms are tuned and modified to converge to a better optimal solution.
arXiv Detail & Related papers (2023-10-19T03:26:36Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding explicit computation of the covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of sparse variables.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Improving the Efficiency of Gradient Descent Algorithms Applied to
Optimization Problems with Dynamical Constraints [3.3722008527102894]
We introduce two block coordinate descent algorithms for solving optimization problems with ordinary differential equations.
The algorithms do not need to implement direct or adjoint sensitivity analysis methods to evaluate loss function gradients.
arXiv Detail & Related papers (2022-08-26T18:26:50Z) - Fast optimal structures generator for parameterized quantum circuits [4.655660925754175]
Current structure optimization algorithms optimize the structure of a quantum circuit from scratch for each new task of variational quantum algorithms (VQAs).
We propose a rapid structure optimization algorithm for VQAs that automatically determines the number of quantum gates and directly generates optimal structures for new tasks.
arXiv Detail & Related papers (2022-01-10T12:19:37Z) - A Fully Single Loop Algorithm for Bilevel Optimization without Hessian
Inverse [121.54116938140754]
We propose a new Hessian inverse free Fully Single Loop Algorithm for bilevel optimization problems.
We show that our algorithm converges with the rate of $O(\epsilon^{-2})$.
arXiv Detail & Related papers (2021-12-09T02:27:52Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve the complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Quantum Algorithms for Prediction Based on Ridge Regression [0.7612218105739107]
We propose a quantum algorithm based on the ridge regression model, which obtains the optimal fitting parameters.
The proposed algorithm has a wide range of applications and can be used as a subroutine of other quantum algorithms.
arXiv Detail & Related papers (2021-04-27T11:03:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.