Efficient, direct compilation of SU(N) operations into SNAP &
Displacement gates
- URL: http://arxiv.org/abs/2307.11900v1
- Date: Fri, 21 Jul 2023 20:58:17 GMT
- Title: Efficient, direct compilation of SU(N) operations into SNAP &
Displacement gates
- Authors: Joshua Job
- Abstract summary: The map $\Phi$ lets us compile any $d$-dimensional unitary directly into a sequence of SNAP and displacement gates.
We find numerically that the error on compiled circuits can be made arbitrarily small by breaking each rotation into $m$ rotations of angle $\theta/m$.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a function which connects the parameter of a previously published
short sequence of selective number-dependent arbitrary phase (SNAP) and
displacement gates acting on a qudit encoded into the Fock states of a
superconducting cavity,
$V_k(\alpha)=D(\alpha)R_\pi(k)D(-2\alpha)R_\pi(k)D(\alpha)$ to the angle of the
Givens rotation $G(\theta)$ on levels $|k\rangle,|k+1\rangle$ that sequence
approximates, namely $\alpha=\Phi(\theta) = \frac{\theta}{4\sqrt{k+1}}$.
Previous publications left the determination of an appropriate $\alpha$ to
numerical optimization at compile time. The map $\Phi$ gives us the ability to
compile directly any $d$-dimensional unitary into a sequence of SNAP and
displacement gates in $O(d^3)$ complex floating point operations with low
constant prefactor, avoiding the need for numerical optimization. Numerical
studies demonstrate that the infidelity of the generated gate sequence $V_k$
per Givens rotation $G$ scales as approximately $O(\theta^6)$. We find
numerically that the error on compiled circuits can be made arbitrarily small
by breaking each rotation into $m$ $\theta/m$ rotations, with the full $d\times
d$ unitary infidelity scaling as approximately $O(m^{-4})$. This represents a
significant reduction in the computational effort to compile qudit unitaries
either to SNAP and displacement gates or to generate them via direct low-level
pulse optimization via optimal control.
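The closed-form map $\Phi$ can be checked numerically in a truncated Fock space. The sketch below (Python with NumPy/SciPy) builds $V_k(\Phi(\theta))$ and compares it to the Givens rotation $G(\theta)$ on levels $|k\rangle,|k+1\rangle$. It assumes a SNAP convention in which $R_\pi(k)$ imprints a $\pi$ phase on levels $0,\dots,k$; only the sign split between levels $\le k$ and $>k$ matters here, and conventions differ across papers, so treat this as an illustrative reconstruction rather than the paper's exact code:

```python
import numpy as np
from scipy.linalg import expm

def snap_pi(k, dim):
    # SNAP R_pi(k): pi phase on Fock levels 0..k (assumed convention; the
    # sequence only depends on the sign split between levels <=k and >k)
    d = np.ones(dim)
    d[:k + 1] = -1.0
    return np.diag(d)

def displacement(alpha, dim):
    # Truncated-Fock-space displacement D(alpha) = exp(alpha*adag - conj(alpha)*a)
    a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # annihilation operator
    return expm(alpha * a.conj().T - np.conj(alpha) * a)

def givens(theta, k, dim):
    # Givens rotation G(theta) acting on levels |k>, |k+1>
    G = np.eye(dim)
    c, s = np.cos(theta), np.sin(theta)
    G[k, k] = G[k + 1, k + 1] = c
    G[k + 1, k] = s
    G[k, k + 1] = -s
    return G

def V_k(theta, k, dim):
    # Five-gate sequence V_k = D(a) R_pi(k) D(-2a) R_pi(k) D(a),
    # with a = Phi(theta) = theta / (4*sqrt(k+1)), the closed-form map
    alpha = theta / (4.0 * np.sqrt(k + 1))
    R = snap_pi(k, dim)
    return (displacement(alpha, dim) @ R
            @ displacement(-2 * alpha, dim) @ R
            @ displacement(alpha, dim))

dim, k, theta = 40, 2, 0.1
V = V_k(theta, k, dim)
G = givens(theta, k, dim)
infid = 1.0 - abs(np.trace(G.conj().T @ V)) / dim  # small, consistent with O(theta^6)
```

Since each $V_k$ is a constant number of gates, a $d\times d$ unitary factored into Givens rotations by standard QR-style elimination needs $O(d^2)$ rotations of cost $O(d)$ each to compute, which is where the $O(d^3)$ compile cost comes from; splitting each angle into $m$ pieces of $\theta/m$ then trades circuit depth for the reported $O(m^{-4})$ infidelity.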
Related papers
- Corner Gradient Descent [13.794391803767617]
We show that rates up to $O(t^{-2\zeta})$ can be achieved by a generalized stationary SGD with infinite memory.
We show that ideal corner algorithms can be efficiently approximated by finite-memory algorithms.
arXiv Detail & Related papers (2025-04-16T22:39:41Z) - A Family of Controllable Momentum Coefficients for Forward-Backward Accelerated Algorithms [4.404496835736175]
Nesterov's accelerated gradient method (NAG) marks a pivotal advancement in gradient-based optimization. Its algorithmic complexity when applied to strongly convex functions remains unknown. We introduce a family of controllable momentum coefficients for forward-backward accelerated methods.
arXiv Detail & Related papers (2025-01-17T09:15:18Z) - A Proximal Modified Quasi-Newton Method for Nonsmooth Regularized Optimization [0.7373617024876725]
arXiv Detail & Related papers (2024-09-28T18:16:32Z) - Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Control of the von Neumann Entropy for an Open Two-Qubit System Using Coherent and Incoherent Drives [50.24983453990065]
This article is devoted to developing an approach for manipulating the von Neumann entropy $S(\rho(t))$ of an open two-qubit system with coherent control and incoherent control inducing time-dependent decoherence rates.
The following goals are considered: (a) minimizing or maximizing the final entropy $S(\rho(T))$; (b) steering $S(\rho(T))$ to a given target value; (c) steering $S(\rho(T))$ to a target value while satisfying a pointwise state constraint on $S(\rho(t))$.
arXiv Detail & Related papers (2024-05-10T10:01:10Z) - Solving Dense Linear Systems Faster Than via Preconditioning [1.8854491183340518]
We show that our algorithm runs in $\tilde{O}(n^2)$ time when $k=O(n^{0.729})$.
Our main algorithm can be viewed as a randomized block coordinate descent method.
arXiv Detail & Related papers (2023-12-14T12:53:34Z) - Regret-Optimal Federated Transfer Learning for Kernel Regression with Applications in American Option Pricing [8.723136784230906]
We propose an optimal iterative scheme for federated transfer learning, where a central planner has access to datasets.
Our objective is to minimize the cumulative deviation of the generated parameters $\{\theta_i(t)\}_{t=0}^{T}$ across all $T$ iterations.
By leveraging symmetries within the regret-optimal algorithm, we develop a nearly regret-optimal variant that runs with $\mathcal{O}(Np^2)$ fewer elementary operations.
arXiv Detail & Related papers (2023-09-08T19:17:03Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Improved Langevin Monte Carlo for stochastic optimization via landscape
modification [0.0]
Given a target function $H$ to minimize or a target Gibbs distribution $\pi_{\beta_0} \propto e^{-\beta_0 H}$ to sample from in the low-temperature regime, in this paper we propose and analyze Langevin Monte Carlo (LMC) algorithms that run on an alternative landscape.
We show that the energy barrier of the transformed landscape is reduced, which consequently leads to dependence on both $\beta$ and $M$ in the modified log-Sobolev constant associated with $\pi^f_{\beta,c,1}$.
arXiv Detail & Related papers (2023-02-08T10:08:37Z) - Faster Sampling from Log-Concave Distributions over Polytopes via a
Soft-Threshold Dikin Walk [28.431572772564518]
We consider the problem of sampling from a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a polytope $K$ defined by $m$ inequalities.
Our main result is a "soft-warm" variant of the Dikin walk Markov chain that requires at most $O((md + dL^2R^2) \times md^{\omega-1} \log(\frac{w}{\delta}))$ arithmetic operations to sample from $\pi$.
arXiv Detail & Related papers (2022-06-19T11:33:07Z) - Sampling from Log-Concave Distributions with Infinity-Distance
Guarantees and Applications to Differentially Private Optimization [33.38289436686841]
We present an algorithm that outputs a point from a distribution $O(\varepsilon)$-close to the target distribution in infinity-distance.
We also present a "soft-$\pi$" version of the Dikin walk, which may be of independent interest.
arXiv Detail & Related papers (2021-11-07T13:44:50Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - Convergence of Sparse Variational Inference in Gaussian Processes
Regression [29.636483122130027]
We show that a method with an overall computational cost of $\mathcal{O}((\log N)^{2D}(\log\log N)^2)$ can be used to perform inference.
arXiv Detail & Related papers (2020-08-01T19:23:34Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
arXiv Detail & Related papers (2020-06-12T10:21:40Z)
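The positive-features idea in the last entry above can be sketched in a few lines: if the kernel factorizes as $K = \Phi_x \Phi_y^\top$ with $r \ll n$ positive features, each Sinkhorn matrix-vector product drops from $O(n^2)$ to $O(nr)$. A minimal sketch, with random positive features standing in for a real feature map (all data and names here are illustrative, not taken from the paper):

```python
import numpy as np

def sinkhorn_lowrank(Phi_x, Phi_y, a, b, iters=200):
    # Sinkhorn iterations with the kernel K = Phi_x @ Phi_y.T kept in
    # factored form: each product K @ v costs O(n*r) instead of O(n^2).
    # Positive features keep K entrywise positive, so the divisions are
    # well defined (mirroring costs c(x,y) = -log<phi(x), phi(y)>).
    u = np.ones(Phi_x.shape[0])
    v = np.ones(Phi_y.shape[0])
    for _ in range(iters):
        u = a / (Phi_x @ (Phi_y.T @ v))   # K   @ v, factored
        v = b / (Phi_y @ (Phi_x.T @ u))   # K.T @ u, factored
    return u, v

# Toy problem with random positive features (illustrative only)
rng = np.random.default_rng(0)
n, r = 500, 10
Phi_x = rng.random((n, r)) + 0.1
Phi_y = rng.random((n, r)) + 0.1
a = np.full(n, 1.0 / n)   # source marginal
b = np.full(n, 1.0 / n)   # target marginal
u, v = sinkhorn_lowrank(Phi_x, Phi_y, a, b)
# Marginals of the implicit plan P = diag(u) K diag(v), never materialized:
row_marginal = u * (Phi_x @ (Phi_y.T @ v))
col_marginal = v * (Phi_y @ (Phi_x.T @ u))
```

After convergence both marginals match $a$ and $b$, while the $n\times n$ plan is never formed, which is the source of the linear-in-$n$ scaling.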
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.