Probabilistic unitary synthesis with optimal accuracy
- URL: http://arxiv.org/abs/2301.06307v2
- Date: Thu, 2 May 2024 08:30:47 GMT
- Title: Probabilistic unitary synthesis with optimal accuracy
- Authors: Seiseki Akibue, Go Kato, Seiichiro Tani
- Abstract summary: The purpose of unitary synthesis is to find a gate sequence that optimally approximates a target unitary transformation.
While the optimality of current probabilistic synthesis algorithms was previously unknown, we obtain a tight lower bound on the approximation error that guarantees their sub-optimality.
We construct an efficient probabilistic synthesis algorithm for single-qubit unitaries, rigorously estimate its time complexity, and show that it reduces the approximation error quadratically compared with deterministic algorithms.
- Score: 1.0923877073891446
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The purpose of unitary synthesis is to find a gate sequence that optimally approximates a target unitary transformation. A new synthesis approach, called probabilistic synthesis, has been introduced, and its superiority has been demonstrated over traditional deterministic approaches with respect to approximation error and gate length. However, the optimality of current probabilistic synthesis algorithms is unknown. We obtain a tight lower bound on the approximation error achievable by optimal probabilistic synthesis, which guarantees the sub-optimality of current algorithms. We also show a tight upper bound, which improves and unifies current upper bounds depending on the class of target unitaries. These two bounds reveal the fundamental relationship of approximation error between probabilistic and deterministic approximation of unitary transformations. From a computational point of view, we show that the optimal probability distribution can be computed by the semidefinite program (SDP) we construct. We also construct an efficient probabilistic synthesis algorithm for single-qubit unitaries, rigorously estimate its time complexity, and show that it reduces the approximation error quadratically compared with deterministic algorithms.
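The abstract states that the optimal probability distribution can be computed by an SDP the authors construct. As a minimal, hedged sketch of that idea (not the authors' construction), the following CVXPY program optimizes the mixing probabilities over a hypothetical candidate gate set `candidates` (a coarse grid of Z rotations) and, for brevity, minimizes the trace-norm distance between Choi matrices, a standard convex proxy for the diamond-norm error the paper actually bounds.

```python
import numpy as np
import cvxpy as cp

def choi(U):
    """Choi matrix of the channel rho -> U rho U^dag (column-major vec)."""
    v = U.reshape(-1, 1, order="F")
    return v @ v.conj().T

def rz(theta):
    """Single-qubit Z rotation by angle theta."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Hypothetical candidate set: a coarse grid of implementable Z rotations.
candidates = [rz(t) for t in np.linspace(0.0, np.pi / 4, 5)]
target = rz(0.3)

p = cp.Variable(len(candidates), nonneg=True)
# Choi matrix of (mixed channel - target channel); affine in p.
delta = sum(p[i] * choi(V) for i, V in enumerate(candidates)) - choi(target)
# Trace-norm distance between Choi matrices: a convex proxy for the
# diamond-norm error that the paper's SDP actually optimizes.
problem = cp.Problem(cp.Minimize(0.5 * cp.normNuc(delta)), [cp.sum(p) == 1])
problem.solve(solver=cp.SCS)
print("optimal mixing probabilities:", np.round(p.value, 4))
print("Choi trace-norm error:", problem.value)
```

Because the Choi matrix of the mixture is affine in p, swapping this proxy for a full diamond-norm SDP (e.g., Watrous's formulation) keeps the whole problem jointly semidefinite.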
Related papers
- A Generalization Result for Convergence in Learning-to-Optimize [4.112909937203119]
Conventional convergence guarantees in optimization are based on geometric arguments that do not directly apply to learned algorithms.
To the best of our knowledge, we are the first to prove a convergence guarantee of this kind for learned optimization algorithms.
arXiv Detail & Related papers (2024-10-10T08:17:04Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) formulates the objective as the logistic loss of the real data against artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models (a toy NCE sketch appears after this list).
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Probabilistic state synthesis based on optimal convex approximation [1.2277343096128712]
We show that the optimal probabilistic synthesis quadratically reduces the approximation error (a toy numerical sketch of this quadratic reduction appears after this list).
We also numerically demonstrate how this conversion halves an information-theoretic lower bound on the circuit size.
arXiv Detail & Related papers (2023-03-20T04:43:21Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy (a Sinkhorn-style illustration follows this list).
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Selection of the Most Probable Best [2.1095005405219815]
We consider an expected-value ranking and selection (R&S) problem where all k solutions' simulation outputs depend on a common parameter whose uncertainty can be modeled by a distribution.
We define the most probable best (MPB) to be the solution that has the largest probability of being optimal with respect to the distribution.
We devise a series of algorithms that replace the unknown means in the optimality conditions with their estimates, and prove that the algorithms' sampling ratios achieve the optimality conditions as the simulation budget increases.
arXiv Detail & Related papers (2022-07-15T15:27:27Z) - Sharper Bounds for Proximal Gradient Algorithms with Errors [6.901159341430919]
We analyse the convergence of the proximal gradient algorithm for convex composite problems in the presence of gradient and proximal computational inaccuracies.
We derive new tighter deterministic and probabilistic bounds, which we validate on a simulated model predictive control (MPC) problem and a synthetic LASSO problem solved on a reduced-precision machine (an inexact proximal-gradient sketch appears after this list).
arXiv Detail & Related papers (2022-03-04T09:27:08Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping (a toy clipped-SGD sketch appears after this list).
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
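For the noise-contrastive estimation entry above, here is a self-contained toy sketch of the classical NCE logistic loss it describes; the quadratic unnormalized model, Gaussian noise distribution, and grid search are our hypothetical choices, not the paper's compositional method.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 5000)    # samples from the true density N(0, 1)
noise = rng.normal(0.0, 2.0, 5000)   # artificial noise samples from N(0, 4)

def log_noise_pdf(x):
    return -0.5 * (x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))

def nce_loss(w, c):
    """Logistic loss contrasting data vs. noise for the unnormalized model
    log p(x) = -w*x^2 + c, where c absorbs the unknown log-partition."""
    def log_model(x):
        return -w * x ** 2 + c
    logit_d = log_model(data) - log_noise_pdf(data)    # "x is real data" logit
    logit_n = log_model(noise) - log_noise_pdf(noise)
    return np.log1p(np.exp(-logit_d)).mean() + np.log1p(np.exp(logit_n)).mean()

# Crude grid search; NCE recovers both the shape and the normalizer.
ws = np.linspace(0.1, 1.0, 46)
cs = np.linspace(-2.0, 0.0, 41)
w_hat, c_hat = min(((w, c) for w in ws for c in cs), key=lambda t: nce_loss(*t))
print("estimated w:", round(w_hat, 3), "(true 0.5), estimated c:", round(c_hat, 3))
```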
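For the probabilistic state synthesis entry above, a minimal NumPy illustration of the quadratic error reduction, under our own toy assumptions: the available states form a hypothetical finite net on a great circle of the Bloch sphere, and the best two-state mixture is found by brute force rather than by the paper's algorithm.

```python
import numpy as np

def ket(theta):
    """Real qubit pure state at Bloch angle theta on a great circle."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def proj(psi):
    return np.outer(psi, psi.conj())

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * sum |eigenvalues of rho - sigma|."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

# Finite net of available states; target sits midway between two net points.
net = np.linspace(0.0, np.pi, 25)
target = proj(ket(0.5 * (net[3] + net[4])))

# Deterministic synthesis: output the single closest net state.
det_err = min(trace_distance(proj(ket(a)), target) for a in net)

# Probabilistic synthesis: best mixture of the two nearest net states.
s1, s2 = proj(ket(net[3])), proj(ket(net[4]))
mix_err = min(trace_distance(q * s1 + (1 - q) * s2, target)
              for q in np.linspace(0.0, 1.0, 2001))

print(f"deterministic error: {det_err:.3e}")
print(f"probabilistic error: {mix_err:.3e}  (compare det_err**2 = {det_err ** 2:.3e})")
```

On this example the mixture error lands at roughly the square of the deterministic error, which is the quadratic reduction the entry refers to.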
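For the entropy-regularized optimal transport entry above: the paper develops an extragradient method, but as a rough stand-in, the classical Sinkhorn iteration below computes the same entropy-regularized OT objective on a toy 1-D problem; all sizes and the regularization level are arbitrary choices.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=2000):
    """Entropy-regularized OT between histograms mu and nu with cost C.
    Classical Sinkhorn scaling; a stand-in, not the paper's extragradient."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)                # alternate marginal projections
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]       # transport plan
    return (P * C).sum(), P

rng = np.random.default_rng(0)
n = 50
mu = rng.random(n); mu /= mu.sum()
nu = rng.random(n); nu /= nu.sum()
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2        # squared-distance ground cost

cost, _ = sinkhorn(mu, nu, C)
print("entropy-regularized OT cost:", round(float(cost), 6))
```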
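For the proximal gradient entry above, a small sketch (our illustration, not the paper's experiments or bounds) that runs proximal gradient on a synthetic LASSO instance while corrupting every gradient with Gaussian error, mimicking reduced-precision computation; `sigma` and the problem sizes are arbitrary.

```python
import numpy as np

def soft_threshold(z, t):
    """Exact prox of t * ||.||_1, used for the LASSO penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def inexact_prox_grad(A, b, lam, sigma, iters=300, seed=0):
    """Proximal gradient on (1/2)||Ax - b||^2 + lam*||x||_1 where each
    gradient carries additive Gaussian error of scale sigma."""
    rng = np.random.default_rng(seed)
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + sigma * rng.standard_normal(x.shape)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(42)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
for sigma in (0.0, 0.1, 1.0):
    x = inexact_prox_grad(A, b, lam=0.1, sigma=sigma)
    print(f"sigma={sigma}: recovery error {np.linalg.norm(x - x_true):.3f}")
```

Larger `sigma` generally widens the gap to the true solution, the regime the entry's bounds quantify.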
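For the heavy-tailed noise entry above, a toy sketch of SGD with gradient clipping on a quadratic with Student-t (infinite-variance) gradient noise; the constant stepsize and clipping level are placeholders for the paper's stepsize rules.

```python
import numpy as np

def clip(g, c):
    """Rescale g so its norm is at most c (gradient clipping)."""
    n = np.linalg.norm(g)
    return g if n <= c else g * (c / n)

def clipped_sgd(grad_oracle, x0, steps=2000, gamma=0.01, c=1.0, seed=0):
    """SGD with clipped stochastic gradients; the constant stepsize and
    clip level stand in for the paper's tuned rules."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        x -= gamma * clip(grad_oracle(x, rng), c)
    return x

# Toy problem: minimize ||x||^2 / 2 under heavy-tailed gradient noise.
def grad_oracle(x, rng):
    return x + rng.standard_t(df=2.0, size=x.shape)   # infinite variance

x = clipped_sgd(grad_oracle, np.full(10, 5.0))
print("final distance to optimum:", round(float(np.linalg.norm(x)), 4))
```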
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.