Performance Evaluation of Evolutionary Algorithms for Analog Integrated Circuit Design Optimisation
- URL: http://arxiv.org/abs/2310.12440v1
- Date: Thu, 19 Oct 2023 03:26:36 GMT
- Title: Performance Evaluation of Evolutionary Algorithms for Analog Integrated Circuit Design Optimisation
- Authors: Ria Rashid, Gopavaram Raghunath, Vasant Badugu, Nandakumar Nambath
- Abstract summary: An automated sizing approach for analog circuits is presented in this paper.
A targeted search of the design space has been implemented using a particle generation function and a repair-bounds function.
The algorithms are tuned and modified to converge to a better optimal solution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An automated sizing approach for analog circuits using evolutionary
algorithms is presented in this paper. A targeted search of the design space
has been implemented using a particle generation function and a repair-bounds
function, which has resulted in faster convergence to the optimal solution.
The algorithms are tuned and modified to converge to a better optimal solution,
with a smaller standard deviation across multiple runs, than their standard
versions.
Modified versions of the artificial bee colony optimisation algorithm, genetic
algorithm, grey wolf optimisation algorithm, and particle swarm optimisation
algorithm are tested and compared for the optimal sizing of two operational
amplifier topologies. An extensive performance evaluation of all the modified
algorithms showed that the modifications result in consistent performance and
improved convergence across all the algorithms. Implementing parallel
computation in the algorithms reduced run time. Among the algorithms
considered, the modified artificial bee colony optimisation algorithm produced
the best solution, with consistent results across multiple runs.
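The abstract names a particle generation function and a repair-bounds function without defining them. The following is a minimal, hypothetical sketch of how such helpers could slot into a PSO-style sizing loop; the helper names, bounds, and placeholder fitness are illustrative assumptions, and a real flow would evaluate fitness by simulating the op-amp in SPICE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design-variable bounds (e.g., device widths in micrometres);
# the paper's actual variables and ranges are not given in the abstract.
lower = np.array([0.5, 0.5, 0.5, 1.0])
upper = np.array([50.0, 50.0, 50.0, 100.0])

def generate_particle():
    """Particle generation: sample a candidate inside the bounds.

    The paper's function likely biases sampling toward promising regions;
    uniform sampling is a placeholder assumption here."""
    return lower + rng.random(lower.size) * (upper - lower)

def repair_bounds(x):
    """Clip out-of-range coordinates back into the feasible box instead of
    rejecting the particle, so no fitness evaluation is wasted."""
    return np.clip(x, lower, upper)

def evaluate_fitness(x):
    """Placeholder cost. A real flow would run a SPICE simulation of the
    op-amp and combine specs (gain, bandwidth, power, ...) into one score."""
    return float(np.sum((x - (lower + upper) / 2) ** 2))

# Minimal PSO loop built around the two helpers above.
n_particles, n_iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = np.array([generate_particle() for _ in range(n_particles)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([evaluate_fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = repair_bounds(pos + vel)          # repair instead of reject
    f = np.array([evaluate_fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best sizing found:", gbest, "cost:", pbest_f.min())
```

The abstract also attributes run-time savings to parallel computation; the fitness evaluations in the loop above are independent of one another, so they could be distributed with, for example, multiprocessing.Pool.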
Related papers
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
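For orientation, below is a minimal Kelley-style cutting-plane loop with plain subgradient cuts and an LP master problem. It is a textbook baseline, not the paper's learned cut-generating surrogate; the toy objective, box bounds, and tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

f = lambda x: float(np.sum(x ** 2))  # toy convex objective (assumption)
grad = lambda x: 2.0 * x             # its gradient
lo, hi, dim = -1.0, 1.0, 2

cuts_A, cuts_b = [], []              # accumulated cutting planes
x = np.full(dim, hi)                 # start at a corner of the box
best_f = np.inf

for k in range(25):
    fx, g = f(x), grad(x)
    best_f = min(best_f, fx)
    # Kelley cut t >= f(x_k) + g.(x - x_k), i.e. g.x - t <= g.x_k - f(x_k)
    cuts_A.append(np.append(g, -1.0))
    cuts_b.append(float(g @ x) - fx)
    # Master LP over (x, t): minimise t subject to all cuts and the box.
    c = np.append(np.zeros(dim), 1.0)
    bounds = [(lo, hi)] * dim + [(None, None)]
    res = linprog(c, A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                  bounds=bounds)
    x, lower_bound = res.x[:dim], res.x[dim]
    if best_f - lower_bound < 1e-6:  # optimality gap closed
        break

print("approximate minimiser:", x, "value:", best_f)
```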
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
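As a point of reference, the standard baseline for the same entropy-regularised problem is Sinkhorn's algorithm, sketched below; the paper's method is instead an extragradient scheme, which is not reproduced here. The example distributions and cost matrix are illustrative.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=500):
    """Classic Sinkhorn iterations for entropy-regularised optimal
    transport; shown only as the textbook baseline for this problem."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # alternating scaling updates
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]      # transport plan
    return float(np.sum(P * C))          # transport cost of the plan

# Two small discrete distributions on a line (illustrative data).
n = 5
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost matrix
mu = np.full(n, 1.0 / n)
nu = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
print("approx OT cost:", sinkhorn(mu, nu, C))
```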
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Prasatul Matrix: A Direct Comparison Approach for Analyzing Evolutionary Optimization Algorithms [2.1320960069210475]
A direct comparison approach is proposed to analyze the performance of evolutionary optimization algorithms.
Five different performance measures based on the Prasatul matrix are designed to evaluate the performance of algorithms.
arXiv Detail & Related papers (2022-12-01T17:21:44Z)
- An Accelerated Variance-Reduced Conditional Gradient Sliding Algorithm for First-order and Zeroth-order Optimization [111.24899593052851]
The conditional gradient algorithm (also known as the Frank-Wolfe algorithm) has recently regained popularity in the machine learning community.
ARCS is the first zeroth-order conditional gradient sliding-type algorithm for solving convex problems in zeroth-order optimization.
In first-order optimization, the convergence results of ARCS substantially outperform previous algorithms in terms of the number of gradient oracle queries.
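A minimal sketch of the classic conditional gradient step on the probability simplex is shown below; it is the textbook first-order method, not the accelerated variance-reduced sliding variant (ARCS) the paper proposes. The quadratic objective is an illustrative assumption.

```python
import numpy as np

# Frank-Wolfe on the probability simplex: the linear minimisation oracle
# picks the vertex with the smallest gradient coordinate, so the method
# stays projection-free throughout.
target = np.array([0.1, 0.2, 0.3, 0.4])
f = lambda x: 0.5 * float(np.sum((x - target) ** 2))
grad = lambda x: x - target

x = np.full(4, 0.25)                     # feasible start: uniform point
for k in range(200):
    g = grad(x)
    s = np.zeros_like(x)
    s[np.argmin(g)] = 1.0                # LMO: best simplex vertex
    gamma = 2.0 / (k + 2.0)              # standard step-size schedule
    x = (1 - gamma) * x + gamma * s      # convex combination stays feasible

print("FW solution:", x, "f:", f(x))
```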
arXiv Detail & Related papers (2021-09-18T07:08:11Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Dynamic Cat Swarm Optimization Algorithm for Backboard Wiring Problem [0.9990687944474739]
This paper presents a powerful swarm intelligence meta-heuristic optimization algorithm called Dynamic Cat Swarm Optimization.
The proposed algorithm suggests a new method to provide a proper balance between the exploration and exploitation phases by modifying the selection scheme and the seeking mode of the algorithm.
Optimization results show the effectiveness of the proposed algorithm, which ranks first compared to several well-known algorithms from the literature.
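For context, a drastically simplified standard cat swarm loop is sketched below, assuming the usual seeking/tracing split; the paper's modified selection scheme and seeking mode are not reproduced, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))       # toy objective (sphere)

n_cats, dim, iters = 15, 3, 100
mr = 0.3                                  # fraction of cats in tracing mode
pos = rng.uniform(-5, 5, (n_cats, dim))
vel = np.zeros_like(pos)
best = min(pos, key=f).copy()

for _ in range(iters):
    tracing = rng.random(n_cats) < mr
    for i in range(n_cats):
        if tracing[i]:                    # tracing mode: chase the best cat
            vel[i] += rng.random(dim) * 2.0 * (best - pos[i])
            pos[i] = np.clip(pos[i] + vel[i], -5, 5)
        else:                             # seeking mode: mutate local copies
            copies = np.clip(pos[i] + rng.normal(0, 0.2, (5, dim)), -5, 5)
            pos[i] = min(copies, key=f)
        if f(pos[i]) < f(best):
            best = pos[i].copy()

print("best found:", best, "f:", f(best))
```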
arXiv Detail & Related papers (2021-04-27T19:41:27Z)
- Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z)
- Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence [29.189409618561964]
We develop new adaptive algorithms for variational inequalities with monotone operators.
Our algorithms automatically adapt to unknown problem parameters.
We show that our algorithms are universal and simultaneously achieve the optimal convergence rates.
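A non-adaptive baseline for such problems is the plain extragradient method, sketched below on a bilinear saddle problem; the paper's contribution, adapting step sizes to unknown problem parameters, is deliberately omitted, and the fixed step size and matrix are illustrative.

```python
import numpy as np

# Extragradient for the monotone VI of the bilinear saddle problem
# min_x max_y x^T A y, whose operator is F(x, y) = (A y, -A^T x).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x, y = np.ones(2), np.ones(2)
eta = 0.1                                 # fixed step (adaptivity omitted)

def operator(x, y):
    return A @ y, -A.T @ x

for _ in range(500):
    gx, gy = operator(x, y)
    x_half, y_half = x - eta * gx, y - eta * gy   # extrapolation step
    gx, gy = operator(x_half, y_half)
    x, y = x - eta * gx, y - eta * gy             # update step

print("approx saddle point:", x, y)  # converges toward (0, 0)
```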
arXiv Detail & Related papers (2020-10-15T14:44:26Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as the starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
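A structural sketch of the subspace-hopping idea, under heavy simplification, is given below: the paper spans each subspace with Gaussian-process draws and runs Bayesian optimisation inside it, whereas here plain random directions and random search stand in for both.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: float(np.sum((x - 0.5) ** 2))   # toy objective (assumption)

d, sub_dim, rounds, samples = 20, 2, 30, 50
origin = np.zeros(d)                          # current best point

for _ in range(rounds):
    B = rng.normal(size=(d, sub_dim))         # random subspace basis
    B /= np.linalg.norm(B, axis=0)
    coeffs = rng.uniform(-1, 1, (samples, sub_dim))
    candidates = origin + coeffs @ B.T        # points in origin + span(B)
    best = min(candidates, key=f)
    if f(best) < f(origin):
        origin = best                         # next subspace passes through it

print("best value found:", f(origin))
```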
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- Benchmarking Meta-heuristic Optimization [0.0]
A meta-heuristic algorithm is a problem-independent technique that can be applied to a broad range of problems.
Many meta-heuristic algorithms are very efficient when solving nonlinear functions.
arXiv Detail & Related papers (2020-07-27T12:25:31Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proven to achieve the best available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.