Solving the $KP$ problem with the Global Cartan Decomposition
- URL: http://arxiv.org/abs/2404.02358v1
- Date: Tue, 2 Apr 2024 23:03:16 GMT
- Title: Solving the $KP$ problem with the Global Cartan Decomposition
- Authors: Elija Perrier, Christopher S. Jackson
- Abstract summary: We propose a new method for generating time-optimal unitaries for targets $-iX \in [\mathfrak{p},\mathfrak{p}] \subset \mathfrak{k}$ with controls $-iH(t) \in \mathfrak{p}$.
We show that the assumption $d\Theta = 0$ equates to the corresponding time-optimal unitary control problem being solvable analytically using variational techniques.
- Score: 0.5524804393257919
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Geometric methods have useful application for solving problems in a range of quantum information disciplines, including the synthesis of time-optimal unitaries in quantum control. In particular, the use of Cartan decompositions to solve problems in optimal control, especially lambda systems, has given rise to a range of techniques for solving the so-called $KP$-problem, where target unitaries belong to a semi-simple Lie group manifold $G$ whose Lie algebra admits a $\mathfrak{g}=\mathfrak{k} \oplus \mathfrak{p}$ decomposition and time-optimal solutions are represented by sub-Riemannian geodesics synthesised via a distribution of generators in $\mathfrak{p}$. In this paper, we propose a new method utilising global Cartan decompositions $G=KAK$ of symmetric spaces $G/K$ for generating time-optimal unitaries for targets $-iX \in [\mathfrak{p},\mathfrak{p}] \subset \mathfrak{k}$ with controls $-iH(t) \in \mathfrak{p}$. Target unitaries are parametrised as $U=kac$ where $k,c \in K$ and $a = e^{i\Theta}$ with $\Theta \in \mathfrak{a}$. We show that the assumption of $d\Theta=0$ equates to the corresponding time-optimal unitary control problem being able to be solved analytically using variational techniques. We identify how such control problems correspond to the holonomies of a compact globally Riemannian symmetric space, where local translations are generated by $\mathfrak{p}$ and local rotations are generated by $[\mathfrak{p},\mathfrak{p}]$.
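To make the $U = kac$ parametrisation concrete, the following is a minimal numerical sketch (illustrative only, not the paper's time-optimal synthesis procedure) for the simplest case $G = SU(2)$: take $\mathfrak{k} = \mathrm{span}\{iZ\}$, $\mathfrak{p} = \mathrm{span}\{iX, iY\}$ and $\mathfrak{a} = \mathrm{span}\{iY\} \subset \mathfrak{p}$, so that the Cartan relations $[\mathfrak{k},\mathfrak{k}] \subseteq \mathfrak{k}$, $[\mathfrak{k},\mathfrak{p}] \subseteq \mathfrak{p}$, $[\mathfrak{p},\mathfrak{p}] \subseteq \mathfrak{k}$ hold and the global $G = KAK$ factorisation reduces to the familiar $Z$-$Y$-$Z$ Euler decomposition.
```python
# Minimal numerical sketch of the KAK (global Cartan) factorisation U = k a c
# for G = SU(2): k, c in K = exp(span{i t Z}), a = exp(-i theta Y / 2) in exp(a).
# This reduces to Z-Y-Z Euler angles and is purely illustrative; it is not the
# time-optimal control construction of the paper.
import numpy as np

def Rz(phi):
    # element of K, generated by iZ
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def Ry(theta):
    # element of A = exp(a), with a = span{iY} a maximal abelian subalgebra of p
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def random_su2(rng):
    # Haar-ish random unitary via QR, then projected onto SU(2)
    g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(g)
    q = q @ np.diag(np.exp(-1j * np.angle(np.diag(r))))
    return q / np.sqrt(np.linalg.det(q))

def kak_su2(U):
    """Return (alpha, theta, beta) such that U = Rz(alpha) @ Ry(theta) @ Rz(beta)."""
    theta = 2 * np.arctan2(abs(U[1, 0]), abs(U[0, 0]))
    alpha = np.angle(U[1, 1]) + np.angle(U[1, 0])
    beta = np.angle(U[1, 1]) - np.angle(U[1, 0])
    return alpha, theta, beta

rng = np.random.default_rng(0)
U = random_su2(rng)
alpha, theta, beta = kak_su2(U)
print(np.allclose(U, Rz(alpha) @ Ry(theta) @ Rz(beta)))  # True for generic U
```
Running the script prints `True`, exhibiting $U = k\,a\,c$ with $k = R_z(\alpha) \in K$, $a = R_y(\theta) \in e^{\mathfrak{a}}$ and $c = R_z(\beta) \in K$ for a generic random $U$.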
Related papers
- On Partially Unitary Learning [0.0]
An optimal mapping between Hilbert spaces $IN$ of $\left|\psi\right\rangle$ and $OUT$ of $\left|\phi\right\rangle$ is presented.
An algorithm finding the global maximum of this optimization problem is developed and its application to a number of problems is demonstrated.
arXiv Detail & Related papers (2024-05-16T17:13:55Z)
- Nearly Optimal Regret for Decentralized Online Convex Optimization [53.433398074919]
Decentralized online convex optimization (D-OCO) aims to minimize a sequence of global loss functions using only local computations and communications.
We develop novel D-OCO algorithms that reduce the regret bounds for convex and strongly convex functions, respectively.
Our algorithms are nearly optimal in terms of $T$, $n$, and $\rho$.
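For reference, the regret notion usually studied in D-OCO (stated here in its standard form; the paper's exact definitions of the network model and of $\rho$ may differ in detail): each of the $n$ local learners is compared against the best fixed decision in the convex set $\mathcal{K}$ for the sum of all nodes' losses,
$$R_T(i) = \sum_{t=1}^{T} \sum_{j=1}^{n} f_{j,t}(\mathbf{x}_{i,t}) - \min_{\mathbf{x} \in \mathcal{K}} \sum_{t=1}^{T} \sum_{j=1}^{n} f_{j,t}(\mathbf{x}),$$
where $f_{j,t}$ is the loss revealed to node $j$ at round $t$, $\mathbf{x}_{i,t}$ is node $i$'s decision, and $\rho$ is typically a spectral quantity of the communication (gossip) matrix.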
arXiv Detail & Related papers (2024-02-14T13:44:16Z)
- Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
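For orientation, a common way to write a single multi-head attention layer (a standard textbook form, given here as an assumed reference rather than the exact parametrisation analysed in the paper):
$$\mathrm{head}_i(X) = \mathrm{softmax}\!\left(\frac{X W_i^{Q} (X W_i^{K})^{\top}}{\sqrt{d_k}}\right) X W_i^{V}, \qquad F(X) = \big[\mathrm{head}_1(X), \dots, \mathrm{head}_m(X)\big]\, W^{O},$$
where $m$ is the number of heads and $W_i^{Q}, W_i^{K}, W_i^{V}, W^{O}$ are the learned projection matrices; the $m$ in the lower bound above refers to the number of heads.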
arXiv Detail & Related papers (2024-02-06T15:39:09Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
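As a pointer to the setting (the standard formulation of this problem class, stated here under the usual conventions rather than quoted from the paper): the target is a linear combination of $k$ ReLU activations, and error is measured in square loss under the standard Gaussian,
$$f(\mathbf{x}) = \sum_{i=1}^{k} w_i\, \mathrm{ReLU}(\langle \mathbf{a}_i, \mathbf{x} \rangle), \qquad \mathrm{err}(h) = \mathbb{E}_{\mathbf{x} \sim \mathcal{N}(0, I_d)}\big[(h(\mathbf{x}) - f(\mathbf{x}))^2\big],$$
with the goal of outputting a hypothesis $h$ satisfying $\mathrm{err}(h) \le \epsilon$.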
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Accelerated Methods for Riemannian Min-Max Optimization Ensuring Bounded Geometric Penalties [21.141544548229774]
We study problems of the form $\min_x \max_y f(x, y)$ where $\mathcal{M}$ and $\mathcal{N}$ are Hadamard manifolds.
We obtain accelerated methods with global guarantees while keeping the geometry-dependent constants in the convergence rates bounded.
arXiv Detail & Related papers (2023-05-25T15:43:07Z)
- Convergence of a Normal Map-based Prox-SGD Method under the KL Inequality [0.0]
We present a novel normal map-based stochastic algorithm ($\mathsf{norM}\text{-}\mathsf{SGD}$) and analyze its convergence under the Kurdyka-Łojasiewicz (KL) inequality.
arXiv Detail & Related papers (2023-05-10T01:12:11Z)
- Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\,\mathrm{opt} + \epsilon$ with high probability.
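A minimal sketch of the vanilla gradient-descent baseline for this setting (purely illustrative: the activation, noise model, step size, and loss below are generic stand-ins and are not claimed to reproduce the paper's algorithm or guarantees):
```python
# Gradient descent on the empirical square loss F(w) = mean((sigma(w.x) - y)^2)
# for a single neuron x -> sigma(w.x); a small fraction of labels is corrupted
# to mimic label noise.  Generic illustration, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
d, n, eta, steps = 10, 5000, 0.1, 500
sigma = lambda z: np.maximum(z, 0.0)          # monotone activation (ReLU)
dsigma = lambda z: (z > 0).astype(float)      # its (sub)gradient

w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = sigma(X @ w_star)
flip = rng.random(n) < 0.05                   # corrupt ~5% of the labels
y[flip] += rng.normal(scale=1.0, size=flip.sum())

w = rng.normal(size=d) * 0.01
for _ in range(steps):
    z = X @ w
    grad = 2.0 / n * X.T @ ((sigma(z) - y) * dsigma(z))  # gradient of the square loss
    w -= eta * grad

print("final empirical loss:", np.mean((sigma(X @ w) - y) ** 2))
```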
arXiv Detail & Related papers (2022-06-17T17:55:43Z)
- Local approximation of operators [0.0]
We study the problem of determining the degree of approximation of a non-linear operator between metric spaces $\mathfrak{X}$ and $\mathfrak{Y}$.
We establish constructive methods to do this efficiently, i.e., with the constants involved in the estimates on the approximation on $\mathbb{S}^d$ being $\mathcal{O}(d^{1/6})$.
arXiv Detail & Related papers (2022-02-13T19:28:34Z)
- Metric Hypertransformers are Universal Adapted Maps [4.83420384410068]
Metric hypertransformers (MHTs) are capable of approximating any adapted map $F:\mathscr{X}^{\mathbb{Z}} \rightarrow \mathscr{Y}^{\mathbb{Z}}$ with approximable complexity.
Our results provide the first (quantitative) universal approximation theorem compatible with any such $\mathscr{X}$ and $\mathscr{Y}$.
arXiv Detail & Related papers (2022-01-31T10:03:46Z)
- Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
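As a reminder, the Massart (bounded) noise model referenced above is standardly defined as follows (the paper's precise parametrisation may differ): for a target halfspace $\mathbf{x} \mapsto \mathrm{sign}(\langle \mathbf{w}^{*}, \mathbf{x} \rangle)$ and noise rate $\eta < 1/2$, an adversary flips each label independently with probability $\eta(\mathbf{x}) \le \eta$,
$$\Pr[\, y \neq \mathrm{sign}(\langle \mathbf{w}^{*}, \mathbf{x} \rangle) \mid \mathbf{x} \,] = \eta(\mathbf{x}) \le \eta < \tfrac{1}{2}.$$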
arXiv Detail & Related papers (2021-08-19T16:16:48Z)
- Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
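For context, the notion of cover used in the statement above, in its generic form (with the distance left abstract, since the relevant metric is specific to the paper's construction): a set $\mathcal{C}$ is an $\epsilon$-cover of $S$ if
$$\forall s \in S \;\; \exists c \in \mathcal{C} \;:\; \mathrm{dist}(s, c) \le \epsilon,$$
and the result above bounds the smallest such cover by $|\mathcal{C}| = M = (k/\epsilon)^{O_d(k^{1/d})}$.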
arXiv Detail & Related papers (2020-12-14T18:14:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.