Natural Gradient Optimization for Optical Quantum Circuits
- URL: http://arxiv.org/abs/2106.13660v4
- Date: Tue, 26 Oct 2021 09:29:09 GMT
- Title: Natural Gradient Optimization for Optical Quantum Circuits
- Authors: Yuan Yao, Pierre Cussenot, Richard A. Wolf, and Filippo M. Miatto
- Abstract summary: We implement Natural Gradient descent in the optical quantum circuit setting.
In particular, we adapt the Natural Gradient approach to a complex-valued parameter space.
We observe that the NG approach has a faster convergence.
- Score: 4.645254587634926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical quantum circuits can be optimized using gradient descent methods, as
the gates in a circuit can be parametrized by continuous parameters. However,
the parameter space as seen by the cost function is not Euclidean, which means
that the Euclidean gradient does not generally point in the direction of
steepest ascent. In order to retrieve the steepest ascent direction, in this
work we implement Natural Gradient descent in the optical quantum circuit
setting, which takes the local metric tensor into account. In particular, we
adapt the Natural Gradient approach to a complex-valued parameter space. We
then compare the Natural Gradient approach to vanilla gradient descent and to
Adam over two state preparation tasks: a single-photon source and a
Gottesman-Kitaev-Preskill state source. We observe that the NG approach has a
faster convergence (due in part to the possibility of using larger learning
rates) and a significantly smoother decay of the cost function throughout the
optimization.
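As an illustration of the update described in the abstract, below is a minimal sketch of natural-gradient descent over complex-valued parameters. This is not the authors' implementation: the toy cost function, the placeholder metric tensor, and the finite-difference Wirtinger gradient are assumptions chosen to keep the example self-contained and runnable; in the paper the cost comes from an optical state-preparation task and the metric from the circuit's parameter space.
```python
import numpy as np

def cost(z):
    # Toy real-valued cost of complex parameters (an assumption; the paper's
    # costs come from optical state-preparation fidelities).
    return float(np.abs(z[0] ** 2 + 0.5 * z[1] - 1.0) ** 2 + 0.1 * np.abs(z[1]) ** 2)

def conj_wirtinger_grad(f, z, eps=1e-6):
    # Finite-difference estimate of dC/dz* = (dC/dx + i dC/dy) / 2, the
    # descent direction for a real cost of complex parameters.
    g = np.zeros_like(z)
    for k in range(len(z)):
        dx = np.zeros_like(z); dx[k] = eps
        dy = np.zeros_like(z); dy[k] = 1j * eps
        g[k] = ((f(z + dx) - f(z - dx)) + 1j * (f(z + dy) - f(z - dy))) / (4 * eps)
    return g

def metric_tensor(z):
    # Placeholder Hermitian positive-definite metric (an assumption, standing
    # in for the local metric tensor of the circuit's parameter space).
    return np.eye(len(z)) + 0.5 * np.outer(z, np.conj(z))

def natural_gradient_step(z, lr=0.1):
    g = conj_wirtinger_grad(cost, z)
    G = metric_tensor(z)
    # Precondition the gradient with the inverse metric: solve G @ delta = g.
    delta = np.linalg.solve(G, g)
    return z - lr * delta

z = np.array([0.3 + 0.2j, -0.1 + 0.4j])
for _ in range(200):
    z = natural_gradient_step(z)
print("final cost:", cost(z))
```
The point of the sketch is the preconditioning step: solving against the local metric tensor is what turns the Euclidean (vanilla) gradient step into a natural-gradient step.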
Related papers
- Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits [4.112419132722306]
This paper introduces a random coordinate descent algorithm as a practical and easy-to-implement alternative to the full gradient descent algorithm.
Motivated by the behavior of measurement noise in the practical optimization of parameterized quantum circuits, this paper presents an optimization problem setting amenable to analysis.
arXiv Detail & Related papers (2023-10-31T18:55:45Z) - Neural Gradient Learning and Optimization for Oriented Point Normal Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability for local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z) - Parsimonious Optimisation of Parameters in Variational Quantum Circuits [1.303764728768944]
We propose a novel Quantum-Gradient Sampling that requires the execution of at most two circuits per iteration to update the optimisable parameters.
Our proposed method achieves convergence rates similar to classical gradient descent and empirically outperforms gradient coordinate descent and SPSA.
arXiv Detail & Related papers (2023-06-20T18:50:18Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Achieving High Accuracy with PINNs via Energy Natural Gradients [0.0]
We show that the update direction in function space resulting from the energy natural gradient corresponds to the Newton direction modulo a projection onto the model's tangent space.
We demonstrate experimentally that energy natural gradient descent yields highly accurate solutions with errors several orders of magnitude smaller than what is obtained when training PINNs with standard gradient descent or Adam.
arXiv Detail & Related papers (2023-02-25T21:17:19Z) - Optimization using Parallel Gradient Evaluations on Multiple Parameters [51.64614793990665]
We propose a first-order method for convex optimization, where gradients from multiple parameters can be used during each step of gradient descent.
Our method uses gradients from multiple parameters in synergy to update these parameters together towards the optima.
arXiv Detail & Related papers (2023-02-06T23:39:13Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work focuses on zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Natural Evolutionary Strategies for Variational Quantum Computation [0.7874708385247353]
Natural evolutionary strategies (NES) are a family of gradient-free black-box optimization algorithms.
This study illustrates their use for the optimization of randomly-initialized parametrized quantum circuits (PQCs) in the region of vanishing gradients. (A simplified NES-style update is sketched after this list.)
arXiv Detail & Related papers (2020-11-30T21:23:38Z) - Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step toward constructing self-tuning quadratics.
arXiv Detail & Related papers (2020-11-09T22:07:30Z) - Channel-Directed Gradients for Optimization of Convolutional Neural Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
arXiv Detail & Related papers (2020-08-25T00:44:09Z) - Large gradients via correlation in random parameterized quantum circuits [0.0]
The presence of exponentially vanishing gradients in cost function landscapes is an obstacle to optimization by gradient descent methods.
We prove that reducing the dimensionality of the parameter space can allow one to circumvent the vanishing gradient phenomenon.
arXiv Detail & Related papers (2020-05-25T16:15:53Z)
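For the natural evolutionary strategies entry above, here is a simplified, isotropic-Gaussian NES-style update (in the spirit of the separable variant) for a black-box cost. It is an illustrative sketch, not the cited paper's method or code; the quadratic toy cost is an assumption standing in for a parametrized-quantum-circuit cost.
```python
import numpy as np

rng = np.random.default_rng(0)

def cost(theta):
    # Toy stand-in for a PQC cost evaluated on a simulator or hardware (assumption).
    return float(np.sum((theta - 1.0) ** 2))

def nes_step(theta, sigma=0.1, population=50, lr=0.05):
    # Sample Gaussian perturbations, evaluate the cost, and form a search-gradient estimate.
    eps = rng.standard_normal((population, theta.size))
    fitness = np.array([cost(theta + sigma * e) for e in eps])
    # Standardize fitness values to reduce variance of the estimate.
    fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-12)
    grad = (fitness[:, None] * eps).sum(axis=0) / (population * sigma)
    return theta - lr * grad

theta = rng.standard_normal(6)
for _ in range(300):
    theta = nes_step(theta)
print("final cost:", cost(theta))
```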