Avoiding Barren Plateaus with Classical Deep Neural Networks
- URL: http://arxiv.org/abs/2205.13418v1
- Date: Thu, 26 May 2022 15:14:01 GMT
- Title: Avoiding Barren Plateaus with Classical Deep Neural Networks
- Authors: Lucas Friedrich and Jonas Maziero
- Abstract summary: Variational quantum algorithms (VQAs) are among the most promising algorithms in the era of Noisy Intermediate-Scale Quantum devices.
VQAs are applied to a variety of tasks, such as chemistry simulations, optimization problems, and quantum neural networks.
We report on how the use of a classical neural network to generate the VQA's input parameters can alleviate the Barren Plateaus phenomenon.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational quantum algorithms (VQAs) are among the most promising algorithms
in the era of Noisy Intermediate-Scale Quantum devices. VQAs are applied to
a variety of tasks, such as chemistry simulations, optimization problems,
and quantum neural networks. Such algorithms are constructed using a
parameterization U($\pmb{\theta}$) with a classical optimizer that updates the
parameters $\pmb{\theta}$ in order to minimize a cost function $C$. For this
task, in general the gradient descent method, or one of its variants, is used.
This is a method where the circuit parameters are updated iteratively using the
cost function gradient. However, several works in the literature have shown
that this method suffers from a phenomenon known as the Barren Plateaus (BP).
This phenomenon is characterized by the exponential flattening of the cost
function landscape, so that the number of times the function must be evaluated
to perform the optimization grows exponentially as the number of qubits and
parameterization depth increase. In this article, we report on how the use of a
classical neural network to generate the VQA's input parameters can alleviate the
BP phenomenon.
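To make the core idea concrete: rather than treating the circuit angles $\pmb{\theta}$ as free parameters, one trains the weights of a small classical network whose output is $\pmb{\theta}$, so gradient descent sees the network's landscape rather than the bare circuit's. Below is a minimal NumPy sketch of this reparameterization; the toy cost stands in for the quantum expectation value, and every name (toy_cost, theta_from, the layer sizes) is an illustrative assumption, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits, depth = 4, 3
n_params = n_qubits * depth            # number of circuit rotation angles
n_in, n_hidden = 8, 16                 # hypothetical network dimensions

def toy_cost(theta):
    # Stand-in for the quantum cost C(theta), i.e. an expectation value
    # that would be measured on the device.
    return float(np.mean(np.sin(theta) ** 2))

x = rng.normal(size=n_in)              # fixed classical input to the network

def theta_from(w):
    # Unpack a flat weight vector into a one-hidden-layer net and emit theta.
    W1 = w[: n_hidden * n_in].reshape(n_hidden, n_in)
    W2 = w[n_hidden * n_in:].reshape(n_params, n_hidden)
    return np.pi * np.tanh(W2 @ np.tanh(W1 @ x))

def grad_fd(f, w, eps=1e-5):
    # Finite-difference gradient; on hardware one would use parameter shifts.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

w = rng.normal(scale=0.1, size=n_hidden * n_in + n_params * n_hidden)
cost = lambda w: toy_cost(theta_from(w))
for step in range(200):                # plain gradient descent on the weights
    w -= 0.5 * grad_fd(cost, w)
print("final cost:", cost(w))
```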
Related papers
- Trainability Barriers in Low-Depth QAOA Landscapes [0.0]
The Quantum Alternating Operator Ansatz (QAOA) is a prominent variational quantum algorithm for solving optimization problems.
Previous results have given analytical performance guarantees for a small, fixed number of parameters.
We study the difficulty of training in the intermediate regime, which is the focus of most current numerical studies.
arXiv Detail & Related papers (2024-02-15T18:45:30Z)
- Parsimonious Optimisation of Parameters in Variational Quantum Circuits [1.303764728768944]
We propose a novel Quantum-Gradient Sampling method that requires the execution of at most two circuits per iteration to update the optimisable parameters.
Our proposed method achieves convergence rates similar to classical gradient descent and empirically outperforms coordinate descent and SPSA (an SPSA-style sketch appears after this list).
arXiv Detail & Related papers (2023-06-20T18:50:18Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Sample-Then-Optimize Batch Neural Thompson Sampling [50.800944138278474]
We introduce two algorithms for black-box optimization based on the Thompson sampling (TS) policy.
To choose an input query, we only need to train an NN and then select the query that maximizes the trained NN's output (a minimal sketch follows this list).
Our algorithms sidestep the need to invert the large parameter matrix yet still preserve the validity of the TS policy.
arXiv Detail & Related papers (2022-10-13T09:01:58Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Natural evolutionary strategies applied to quantum-classical hybrid neural networks [0.0]
We study an alternative method, Natural Evolutionary Strategies (NES), a family of black-box optimization algorithms (a basic NES sketch appears after this list).
We apply the NES method to a binary classification task, showing that it is a viable alternative for training quantum neural networks.
arXiv Detail & Related papers (2022-05-17T02:14:44Z)
- LAWS: Look Around and Warm-Start Natural Gradient Descent for Quantum Neural Networks [11.844238544360149]
Variational quantum algorithms (VQAs) have recently received significant attention due to their promising performance on Noisy Intermediate-Scale Quantum (NISQ) computers.
VQAs run on parameterized quantum circuits (PQCs) with randomly initialized variational parameters and are characterized by barren plateaus (BP), where the gradient vanishes exponentially in the number of qubits.
In this paper, we first revisit the quantum natural gradient (QNG), one of the most popular algorithms used in VQAs, from the classical first-order optimization point of view.
Then, we propose Look Around Warm-Start (LAWS) natural gradient descent.
arXiv Detail & Related papers (2022-05-05T14:16:40Z)
- Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z)
- Quantum algorithms for approximate function loading [0.0]
We introduce two approximate quantum-state preparation methods for the NISQ era inspired by the Grover-Rudolph algorithm.
A variational algorithm capable of loading functions beyond the aforementioned smoothness conditions is proposed.
arXiv Detail & Related papers (2021-11-15T17:36:13Z)
- STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic nonconvex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters (the basic STORM recursion is sketched after this list).
arXiv Detail & Related papers (2021-11-01T15:43:36Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms (a subset-at-a-time sketch appears after this list).
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
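The two-evaluations-per-iteration idea in "Parsimonious Optimisation of Parameters in Variational Quantum Circuits" is reminiscent of SPSA, which also needs exactly two cost evaluations per step. A minimal SPSA-style sketch follows; the toy cost and gain schedules are assumptions, and the paper's actual Quantum-Gradient Sampling rule may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(theta):
    # Stand-in for a circuit expectation value evaluated on hardware.
    return float(np.mean(np.sin(theta) ** 2))

theta = rng.uniform(-np.pi, np.pi, size=12)
for k in range(1, 301):
    a, c = 0.2 / k ** 0.602, 0.1 / k ** 0.101        # standard SPSA gains
    delta = rng.choice([-1.0, 1.0], size=theta.size)  # random perturbation
    # Exactly two cost evaluations ("circuits") per iteration; since each
    # delta_i is +/-1, dividing by delta equals multiplying by it.
    g_hat = (cost(theta + c * delta) - cost(theta - c * delta)) / (2 * c) * delta
    theta -= a * g_hat
print("final cost:", cost(theta))
```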
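For "Sample-Then-Optimize Batch Neural Thompson Sampling", a minimal sketch of the train-then-maximize loop: randomness in the NN training plays the role of posterior sampling, and the next query is the candidate maximizing the trained NN. The objective, grid, and network settings are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

def f(x):
    # Stand-in black-box objective to maximize (noisy observations).
    return np.sin(3 * x) + 0.1 * rng.normal(size=np.shape(x))

cand = np.linspace(-2, 2, 400).reshape(-1, 1)   # candidate query grid
X = rng.uniform(-2, 2, size=(5, 1)); y = f(X).ravel()
for t in range(20):
    # "Sample" by training an NN from a fresh random init, then "optimize":
    # pick the candidate that maximizes the trained NN's prediction.
    nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                      random_state=t).fit(X, y)
    x_next = cand[np.argmax(nn.predict(cand))].reshape(1, 1)
    X = np.vstack([X, x_next]); y = np.append(y, f(x_next).ravel())
print("best observed:", X[np.argmax(y)].item(), y.max())
```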
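For "Natural evolutionary strategies applied to quantum-classical hybrid neural networks", a basic NES step with an isotropic Gaussian search distribution; the toy loss stands in for the hybrid network's classification loss, and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(theta):
    # Stand-in for the hybrid quantum-classical network's loss.
    return float(np.sum((theta - 0.5) ** 2))

dim, pop, sigma, lr = 10, 30, 0.1, 0.05
mu = rng.normal(size=dim)                  # mean of the search distribution
for _ in range(500):
    eps = rng.normal(size=(pop, dim))      # population of perturbations
    fitness = np.array([loss(mu + sigma * e) for e in eps])
    # Standardize fitness, then form the black-box gradient estimate of
    # E[loss] with respect to the mean, and descend.
    fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)
    mu -= lr / (pop * sigma) * eps.T @ fitness
print("final loss:", loss(mu))
```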
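The STORM recursion underlying "STORM+" maintains a variance-reduced momentum estimate $d_t = \nabla f(x_t;\xi_t) + (1-a_t)(d_{t-1} - \nabla f(x_{t-1};\xi_t))$, with both gradients computed on the same sample $\xi_t$. A minimal sketch with fixed $a$ and step size; STORM+'s contribution, setting these fully adaptively, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)

def stoch_grad(x, z):
    # Stand-in stochastic gradient of a smooth objective; z is the shared
    # noise sample xi_t drawn once per iteration (illustrative assumption).
    return 2.0 * (x - 1.0) + 0.1 * z

x = rng.normal(size=5)
d = stoch_grad(x, rng.normal(size=5))   # initial momentum estimate
eta, a = 0.05, 0.3                      # fixed step size and momentum weight
for _ in range(500):
    x_prev, x = x, x - eta * d          # descend along the estimate d_t
    z = rng.normal(size=5)              # one sample reused at both iterates
    # STORM correction: d_t = g(x_t) + (1 - a) * (d_{t-1} - g(x_{t-1})).
    d = stoch_grad(x, z) + (1 - a) * (d - stoch_grad(x_prev, z))
print("x converges toward 1:", x)
```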
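Finally, for "Adaptive pruning-based optimization of parameterized quantum circuits", a sketch of optimizing one parameter block at a time while freezing the rest, as a sequence of small variational runs. The fixed block schedule and toy cost are assumptions, not the paper's PECT algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 12

def cost(theta):
    # Stand-in for the ansatz cost function C(theta).
    return float(np.mean(np.sin(theta) ** 2))

def grad_block(theta, idx, eps=1e-5):
    # Finite-difference gradient restricted to the active parameter block.
    g = np.zeros(len(idx))
    for j, i in enumerate(idx):
        e = np.zeros(n); e[i] = eps
        g[j] = (cost(theta + e) - cost(theta - e)) / (2 * eps)
    return g

theta = rng.uniform(-np.pi, np.pi, n)
blocks = np.array_split(np.arange(n), 4)    # fixed schedule of subsets
for sweep in range(10):
    for idx in blocks:                      # one small variational run per block
        for _ in range(50):
            theta[idx] -= 0.5 * grad_block(theta, idx)
print("final cost:", cost(theta))
```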