Using non-convex optimization in quantum process tomography: Factored
gradient descent is tough to beat
- URL: http://arxiv.org/abs/2312.01311v1
- Date: Sun, 3 Dec 2023 07:44:17 GMT
- Title: Using non-convex optimization in quantum process tomography: Factored
gradient descent is tough to beat
- Authors: David A. Quiroga, Anastasios Kyrillidis
- Abstract summary: Our algorithm converges faster and achieves higher fidelities than the state of the art, both in terms of measurement settings and noise tolerance.
- Score: 11.893324664457552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a non-convex optimization algorithm, based on the Burer-Monteiro
(BM) factorization, for the quantum process tomography problem, in order to
estimate a low-rank process matrix $\chi$ for near-unitary quantum gates. In
this work, we compare our approach against state-of-the-art convex optimization
approaches based on gradient descent. We use a reduced set of initial states
and measurement operators that require $2 \cdot 8^n$ circuit settings, as well
as $\mathcal{O}(4^n)$ measurements for an underdetermined setting. We find our
algorithm converges faster and achieves higher fidelities than the state of the
art, both in terms of measurement settings and noise tolerance, in the cases of
depolarizing and Gaussian noise models.
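As an illustration of the factored approach, here is a minimal sketch of Burer-Monteiro gradient descent for recovering a low-rank PSD matrix from linear measurements. The least-squares measurement model, step size, trace cap, and random initialization are simplifying assumptions for the sketch, not the paper's exact formulation:

```python
import numpy as np

def factored_gradient_descent(A, y, d, r, steps=500, eta=0.05, seed=0):
    """Estimate a rank-<=r PSD matrix chi = U U^dagger from linear
    measurements y_i ~ Tr(A_i chi) by gradient descent on the factor U.

    A: (m, d, d) array of Hermitian measurement operators (assumed model).
    y: (m,) array of observed values.
    """
    rng = np.random.default_rng(seed)
    U = (rng.standard_normal((d, r)) + 1j * rng.standard_normal((d, r))) / np.sqrt(d)
    for _ in range(steps):
        chi = U @ U.conj().T
        res = np.einsum('mij,ji->m', A, chi).real - y   # Tr(A_i chi) - y_i
        G = np.einsum('m,mij->ij', res, A)              # sum_i res_i * A_i
        U = U - eta * 2.0 * (G @ U)                     # gradient of 0.5 * ||res||^2
        nrm = np.linalg.norm(U)                         # Tr(chi) = ||U||_F^2
        if nrm > 1.0:                                   # illustrative trace cap
            U = U / nrm
    return U @ U.conj().T
```

Because positivity and the rank bound are enforced by the factorization $\chi = UU^\dagger$ itself, each iteration needs only matrix products, with no projection onto the PSD cone; this is what makes the non-convex route competitive with convex full-matrix methods.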
Related papers
- Improving Quantum Approximate Optimization by Noise-Directed Adaptive Remapping [3.47862118034022]
Noise-Directed Adaptive Remapping (NDAR) is an algorithm for approximately solving binary optimization problems by leveraging certain types of noise.
We consider access to a noisy quantum processor with dynamics that features a global attractor state.
Our algorithm bootstraps the noise attractor state by iteratively gauge-transforming the cost-function Hamiltonian in a way that transforms the noise attractor into higher-quality solutions.
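A minimal sketch of the gauge-transformation step described above, assuming an Ising cost $H(s) = \sum_{ij} J_{ij} s_i s_j + \sum_i h_i s_i$ and using the current best bitstring as the gauge (the function names and the choice of gauge are illustrative):

```python
import numpy as np

def gauge_transform(J, h, g):
    """Gauge an Ising cost H(s) = s^T J s + h^T s by g in {+1,-1}^n,
    giving H'(s) = H(g * s): J'_ij = J_ij g_i g_j and h'_i = h_i g_i.
    With g set to the best known solution, the all-ones state of H'
    corresponds to that solution under the original cost."""
    return J * np.outer(g, g), h * g

def ising_energy(J, h, s):
    """Energy of a spin configuration s in {+1,-1}^n."""
    return s @ J @ s + h @ s
```

If the hardware's attractor resembles the all-ones state, the transformed problem places that attractor at the current incumbent, so repeated sampling and re-gauging tends to improve the incumbent; this is the bootstrapping the summary refers to.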
arXiv Detail & Related papers (2024-04-01T18:28:57Z)
- Variational quantum algorithm for enhanced continuous variable optical phase sensing [0.0]
Variational quantum algorithms (VQAs) are hybrid quantum-classical approaches used for tackling a wide range of problems on noisy quantum devices.
We implement a variational algorithm designed for optimized parameter estimation on a continuous variable platform based on squeezed light.
arXiv Detail & Related papers (2023-12-21T14:11:05Z)
- Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits [4.112419132722306]
This paper introduces a random coordinate descent algorithm as a practical and easy-to-implement alternative to the full gradient descent algorithm.
Motivated by the behavior of measurement noise in the practical optimization of parameterized quantum circuits, this paper presents an optimization problem setting amenable to analysis.
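A minimal sketch of the random-coordinate-descent idea for a parameterized circuit, assuming a cost function `cost(theta)` evaluated by a simulator or device and Pauli-rotation gates obeying the standard parameter-shift rule (both are assumptions, not details from the abstract):

```python
import numpy as np

def random_coordinate_descent(cost, theta0, steps=200, eta=0.1, seed=0):
    """At each step, estimate the partial derivative of one randomly
    chosen parameter via the parameter-shift rule and update only that
    coordinate, instead of estimating the full gradient."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        j = rng.integers(len(theta))      # random coordinate
        shift = np.zeros_like(theta)
        shift[j] = np.pi / 2              # parameter-shift rule for rotation gates
        grad_j = 0.5 * (cost(theta + shift) - cost(theta - shift))
        theta[j] -= eta * grad_j
    return theta
```

Each iteration costs two circuit evaluations regardless of the number of parameters, which is part of the practical appeal under measurement noise.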
arXiv Detail & Related papers (2023-10-31T18:55:45Z)
- First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities [91.46841922915418]
We present a unified approach for the theoretical analysis of first-order gradient methods for stochastic optimization and variational inequalities.
Our approach covers scenarios for both non-convex and strongly convex minimization problems.
We provide bounds that match the oracle complexity in the case of strongly convex optimization problems.
arXiv Detail & Related papers (2023-05-25T11:11:31Z)
- Quantum Approximate Optimization Algorithm with Cat Qubits [0.0]
We numerically simulate solving MaxCut problems using QAOA with cat qubits.
We show that running QAOA with cat qubits increases the approximation ratio for random instances of MaxCut with respect to qubits encoded into two-level systems.
arXiv Detail & Related papers (2023-05-09T15:44:52Z)
- Gradient-Free optimization algorithm for single-qubit quantum classifier [0.3314882635954752]
A gradient-free optimization algorithm is proposed to overcome the barren plateau effects encountered on quantum devices.
The proposed algorithm is demonstrated on a classification task and compared against optimization with Adam.
The proposed gradient-free algorithm reaches high accuracy faster than the Adam-based optimizer.
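One common gradient-free scheme for parameterized rotation gates is coordinate-wise sinusoidal minimization (Rotosolve-style): the cost is a sinusoid in each rotation angle, so three evaluations determine the per-coordinate minimizer in closed form. The sketch below illustrates that generic idea and is not necessarily the exact algorithm of the paper:

```python
import numpy as np

def rotosolve_step(cost, theta):
    """Minimize the cost coordinate-by-coordinate without gradients.
    In each Pauli-rotation angle t, cost(t) = a + b*sin(t + c), so three
    evaluations determine the exact per-coordinate minimizer."""
    theta = np.array(theta, dtype=float)
    for j in range(len(theta)):
        e = np.zeros_like(theta)
        e[j] = 1.0
        f0 = cost(theta)
        fp = cost(theta + (np.pi / 2) * e)
        fm = cost(theta - (np.pi / 2) * e)
        # Closed-form argmin of a + b*sin(t + c) along this coordinate
        theta[j] += -np.pi / 2 - np.arctan2(2 * f0 - fp - fm, fp - fm)
    return theta
```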
arXiv Detail & Related papers (2022-05-10T08:45:03Z)
- Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p = 2, \ldots, 6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
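A minimal sketch of the cost-function-to-Hamiltonian encoding mentioned above, using MaxCut as the textbook example (the diagonal representation and small-$n$ enumeration are illustrative choices, not specific to the paper):

```python
import numpy as np
from itertools import product

def maxcut_diagonal(edges, n):
    """Diagonal of the MaxCut cost Hamiltonian C = sum_{(i,j)} (1 - Z_i Z_j)/2
    in the computational basis; entry b is the cut value of bitstring b."""
    diag = np.zeros(2 ** n)
    for b, bits in enumerate(product([0, 1], repeat=n)):
        diag[b] = sum(1 for (i, j) in edges if bits[i] != bits[j])
    return diag

def energy(diag, psi):
    """Expected cost <psi|C|psi> of a normalized state vector psi."""
    return float(np.real(np.vdot(psi, diag * psi)))
```

A hybrid algorithm then varies the parameters of a low-depth circuit preparing $|\psi(\theta)\rangle$ so as to optimize this energy.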
arXiv Detail & Related papers (2022-03-01T19:47:16Z)
- Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence [120.9336529957224]
In this paper, we study the gradient-free minimax optimization problem in the nonconvex-strongly-concave setting.
We show that a novel zeroth-order variance-reduced descent algorithm achieves the best known query complexity.
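A minimal sketch of the two-point zeroth-order gradient estimator that underlies this family of methods; the Gaussian smoothing directions and radius `mu` are the standard construction, shown here for illustration:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_dirs=20, seed=0):
    """Two-point zeroth-order estimate of grad f(x): average of
    (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over Gaussian directions u."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x, dtype=float)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs
```

Variance-reduced variants reuse past estimates to correct fresh ones rather than averaging many independent queries at every step, which is where the improved query complexity comes from.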
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
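For reference, the Moreau envelope used as the stationarity measure above is the standard construction (added here as a reminder, not taken from the abstract): $f_\lambda(x) = \min_y \big\{ f(y) + \tfrac{1}{2\lambda} \| y - x \|^2 \big\}$. For weakly convex $f$, a small $\|\nabla f_\lambda(x)\|$ certifies that $x$ is close to a near-stationary point, which is why the $\tilde{\mathcal{O}}(t^{-1/4})$ rate is stated for this quantity.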
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that adaptive gradient algorithms outperform their non-adaptive counterparts in GAN training.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.