Using Differential Evolution to avoid local minima in Variational
Quantum Algorithms
- URL: http://arxiv.org/abs/2303.12186v2
- Date: Fri, 29 Sep 2023 07:47:47 GMT
- Title: Using Differential Evolution to avoid local minima in Variational
Quantum Algorithms
- Authors: Daniel Faílde, José Daniel Viqueira, Mariamo Mussa Juane, Andrés Gómez
- Abstract summary: Variational Quantum Algorithms (VQAs) are among the most promising NISQ-era algorithms for harnessing quantum computing.
Our goal in this paper is to study alternative optimization methods that can avoid or reduce the effect of local minima and barren plateau problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational Quantum Algorithms (VQAs) are among the most promising NISQ-era
algorithms for harnessing quantum computing in diverse fields. However, the
underlying optimization processes within these algorithms usually deal with
local minima and barren plateau problems, preventing them from scaling
efficiently. Our goal in this paper is to study alternative optimization
methods that can avoid or reduce the effect of these problems. To this end, we
propose to apply the Differential Evolution (DE) algorithm to VQAs
optimizations. Our hypothesis is that DE is resilient to vanishing gradients
and local minima for two main reasons: (i) it does not depend on gradients, and
(ii) its mutation and recombination schemes allow DE to continue evolving even
in these cases. To demonstrate the performance of our approach, first, we use a
robust local minima problem to compare state-of-the-art local optimizers
(SLSQP, COBYLA, L-BFGS-B and SPSA) against DE using the Variational Quantum
Eigensolver algorithm. Our results show that DE always outperforms local
optimizers. In particular, in exact simulations of a 1D Ising chain with 14
qubits, DE achieves the ground state with a 100% success rate, while local
optimizers only reach it around 40% of the time. We also show that combining
DE with local optimizers increases the accuracy of the energy estimation once
local minima are avoided. Finally, we demonstrate how our results can be extended to more complex
problems by studying DE performance in a 1D Hubbard model.
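The comparison described in the abstract (gradient-free DE against gradient-based local optimizers) can be illustrated with SciPy's `differential_evolution`. The rugged cost function below is a stand-in with many local minima, not the paper's VQE energy, and the optimizer settings are illustrative defaults, not the paper's configuration:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Rugged stand-in for a VQE energy landscape (many local minima, global
# minimum 0 at the origin); a real run would instead evaluate
# <psi(theta)|H|psi(theta)> on a parameterized quantum circuit.
def cost(theta):
    theta = np.asarray(theta)
    return float(np.sum(theta**2 - 3.0 * np.cos(2.0 * np.pi * theta))
                 + 3.0 * theta.size)

dim = 4
bounds = [(-np.pi, np.pi)] * dim
x0 = np.random.default_rng(0).uniform(-np.pi, np.pi, dim)

# Gradient-based local optimizer: typically stalls in whichever local
# basin the starting point x0 happens to fall into.
local = minimize(cost, x0, method="L-BFGS-B", bounds=bounds)

# Differential Evolution: gradient-free; mutation and recombination keep
# the population evolving even where gradients vanish.
de = differential_evolution(cost, bounds, mutation=(0.5, 1.0),
                            recombination=0.7, seed=0, tol=1e-8)

print(local.fun, de.fun)
```

Because DE maintains a population spread across the bounds, it routinely escapes the basins that trap L-BFGS-B on landscapes of this kind, mirroring the success-rate gap the paper reports.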
Related papers
- Qudit inspired optimization for graph coloring [0.0]
We introduce a quantum-inspired algorithm for Graph Coloring Problems (GCPs).
We use qudits in a product state, with each qudit representing a node in the graph and parameterized by d-dimensional spherical coordinates.
We benchmark two optimization strategies: qudit gradient descent (QdGD), initiating qudits in random states and employing gradient descent to minimize a cost function.
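The QdGD strategy above can be sketched roughly as follows. The spherical-angle parameterization, toy triangle graph, step size, and finite-difference gradient are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

# Illustrative sketch only: each node carries d-1 spherical angles whose
# squared unit-vector coordinates give a distribution over d colors; the
# cost penalizes same-color overlap on edges.
def angles_to_probs(phi):
    d = len(phi) + 1
    a = np.ones(d)
    for k, ang in enumerate(phi):
        a[k] *= np.cos(ang)
        a[k + 1:] *= np.sin(ang)
    return a ** 2                      # a is unit-norm, so this sums to 1

def cost(theta, edges):
    probs = [angles_to_probs(p) for p in theta]
    return sum(probs[i] @ probs[j] for i, j in edges)

edges = [(0, 1), (1, 2), (0, 2)]       # triangle: properly 3-colorable
theta = np.random.default_rng(1).uniform(0.2, 1.3, (3, 2))
start = cost(theta, edges)

lr, eps = 0.05, 1e-5
for _ in range(400):                   # finite-difference gradient descent
    grad = np.zeros_like(theta)
    base = cost(theta, edges)
    for idx in np.ndindex(theta.shape):
        t = theta.copy()
        t[idx] += eps
        grad[idx] = (cost(t, edges) - base) / eps
    theta -= lr * grad

print(start, "->", cost(theta, edges))
```

A cost of zero means every edge joins nodes whose color distributions have disjoint support, i.e. a proper coloring has been found.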
arXiv Detail & Related papers (2024-06-02T16:19:55Z)
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- Characterization of Locality in Spin States and Forced Moves for Optimizations [0.36868085124383626]
In optimization problems, the existence of local minima in energy landscapes is problematic when seeking the global minimum.
We develop an algorithm that escapes local minima efficiently, although it does not yield exact samplings.
As the proposed algorithm is based on a rejection-free algorithm, the computational cost is low.
arXiv Detail & Related papers (2023-12-05T07:21:00Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit the connection between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- A Particle-based Sparse Gaussian Process Optimizer [5.672919245950197]
We present a new particle-swarm-based framework utilizing the underlying dynamical process of gradient descent.
The biggest advantage of this approach is greater exploration around the current state before deciding the descent direction.
arXiv Detail & Related papers (2022-11-26T09:06:15Z)
- Recursive greedy initialization of the quantum approximate optimization algorithm with guaranteed improvement [1.720510639137902]
Quantum approximate optimization algorithm (QAOA) is a variational quantum algorithm, where a quantum computer implements a variational ansatz consisting of $p$ layers of alternating unitary operators.
We present an analytic construction of $2p+1$ transition states for QAOA with $p+1$ layers that use the local minimum of QAOA with $p$ layers.
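The alternating-layer ansatz described above can be made concrete with a minimal numpy sketch (our own illustration, not the paper's transition-state construction): it evaluates the p-layer QAOA energy for MaxCut on a triangle graph, with the cost layer applied as a diagonal phase and the mixer diagonalized once:

```python
import numpy as np
from functools import reduce

# p-layer QAOA state: |psi> = prod_l e^{-i beta_l B} e^{-i gamma_l C} |+>^n,
# evaluated here for MaxCut on a 3-qubit triangle graph.
n = 3
edges = [(0, 1), (1, 2), (0, 2)]

def bit(s, q):                         # bit of qubit q in basis index s
    return (s >> (n - 1 - q)) & 1

# Diagonal MaxCut cost operator C: number of cut edges per bitstring.
Cdiag = np.array([sum(bit(s, i) != bit(s, j) for i, j in edges)
                  for s in range(2 ** n)], dtype=float)

# Transverse-field mixer B = sum_q X_q, diagonalized once up front.
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
B = sum(reduce(np.kron, [X if k == q else I2 for k in range(n)])
        for q in range(n))
w, V = np.linalg.eigh(B)

def qaoa_energy(gammas, betas):
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # |+>^n
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * Cdiag) * psi                    # cost layer
        psi = V @ (np.exp(-1j * b * w) * (V.conj().T @ psi))   # mixer layer
    return float(np.real(psi.conj() @ (Cdiag * psi)))

print(qaoa_energy([0.0], [0.0]))  # zero angles leave |+>^n: <C> = 1.5
```

Optimizing the `gammas`/`betas` lists of length p over this energy is exactly the variational loop whose local minima the paper's recursive initialization is designed to navigate.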
arXiv Detail & Related papers (2022-09-02T16:40:21Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Improving the Quantum Approximate Optimization Algorithm with postselection [0.0]
Combinatorial optimization is among the main applications envisioned for near-term and fault-tolerant quantum computers.
We consider a well-studied quantum algorithm for optimization: the Quantum Approximate Optimization Algorithm (QAOA) applied to the MaxCut problem on 3-regular graphs.
We derive theoretical upper and lower bounds showing that a constant (though small) increase of the fraction of satisfied edges is indeed achievable.
arXiv Detail & Related papers (2020-11-10T22:17:50Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min(n, n^2/d))$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
- Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for studying nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to escape saddle points.
arXiv Detail & Related papers (2020-03-31T16:54:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.