One other parameterization of SU(4) group
- URL: http://arxiv.org/abs/2408.14888v1
- Date: Tue, 27 Aug 2024 09:03:14 GMT
- Title: One other parameterization of SU(4) group
- Authors: Arsen Khvedelidze, Dimitar Mladenov, Astghik Torosyan
- Abstract summary: A decomposition of the Lie algebra $\mathfrak{su}(4)$ into the direct sum of orthogonal subspaces $\mathfrak{su}(4)=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{a}^\prime\oplus\mathfrak{t}$, with $\mathfrak{k}=\mathfrak{su}(2)\oplus\mathfrak{su}(2)$ and a triplet of 3-dimensional Abelian subalgebras $(\mathfrak{a}, \mathfrak{a}^\prime, \mathfrak{t})$.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a special decomposition of the Lie $\mathfrak{su}(4)$ algebra into the direct sum of orthogonal subspaces, $\mathfrak{su}(4)=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{a}^\prime\oplus\mathfrak{t}\,,$ with $\mathfrak{k}=\mathfrak{su}(2)\oplus\mathfrak{su}(2)$ and a triplet of 3-dimensional Abelian subalgebras $(\mathfrak{a}, \mathfrak{a}^{\prime}, \mathfrak{t})\,,$ such that the exponential mapping of a neighbourhood of the $0\in \mathfrak{su}(4)$ into a neighbourhood of the identity of the Lie group provides the following factorization of an element of $SU(4)$ \[ g = k\,a\,t\,, \] where $k \in \exp{(\mathfrak{k})} = SU(2)\times SU(2) \subset SU(4)\,,$ the diagonal matrix $t$ stands for an element from the maximal torus $T^3=\exp{(\mathfrak{t})},$ and the factor $a=\exp{(\mathfrak{a})}\exp{(\mathfrak{a}^\prime)}$ corresponds to a point in the double coset $SU(2)\times SU(2)\backslash SU(4)/T^3.$ Analyzing the uniqueness of the inverse of the above exponential mappings, we establish a logarithmic coordinate chart of the $SU(4)$ group manifold comprising 6 coordinates on the embedded manifold $ SU(2)\times SU(2) \subset SU(4)$ and 9 coordinates on three copies of the regular octahedron with the edge length $2\pi\sqrt{2}\,$.
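The factorization $g = k\,a\,t$ can be checked numerically. The sketch below (Python with NumPy/SciPy) assembles an $SU(4)$ element from the three exponential factors; note that the embedding of $SU(2)\times SU(2)$ as local two-qubit unitaries and the particular commuting Pauli triplets standing in for $\mathfrak{a}$ and $\mathfrak{a}^\prime$ are illustrative assumptions, not the bases fixed in the paper.

```python
# Numerical sketch of g = k * a * t. The bases chosen for k, a, a', t
# below are illustrative assumptions, NOT the paper's specific choice.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = (sx, sy, sz)

def k_factor(alpha, beta):
    """k in SU(2) x SU(2) inside SU(4), assuming the 'local' two-qubit
    embedding via sigma_i (x) I and I (x) sigma_i."""
    A = sum(a * s for a, s in zip(alpha, paulis))
    B = sum(b * s for b, s in zip(beta, paulis))
    return expm(-0.5j * np.kron(A, I2)) @ expm(-0.5j * np.kron(I2, B))

def a_factor(u, v):
    """a = exp(a) exp(a'): two commuting Pauli triplets stand in for the
    3-dimensional Abelian subalgebras (an illustrative choice)."""
    X = sum(c * np.kron(p, q) for c, (p, q) in
            zip(u, [(sx, sx), (sy, sy), (sz, sz)]))
    Y = sum(c * np.kron(p, q) for c, (p, q) in
            zip(v, [(sx, sy), (sy, sz), (sz, sx)]))
    return expm(-0.5j * X) @ expm(-0.5j * Y)

def t_factor(phi):
    """t in the maximal torus T^3: diagonal with unit determinant."""
    p1, p2, p3 = phi
    return np.diag(np.exp(1j * np.array([p1, p2, p3, -(p1 + p2 + p3)])))

rng = np.random.default_rng(0)
g = (k_factor(rng.normal(size=3), rng.normal(size=3))
     @ a_factor(rng.normal(size=3), rng.normal(size=3))
     @ t_factor(rng.normal(size=3)))
assert np.allclose(g.conj().T @ g, np.eye(4))  # g is unitary
assert np.isclose(np.linalg.det(g), 1.0)       # det g = 1, so g in SU(4)
```

Since every generator above is traceless and anti-Hermitian after multiplication by $-i/2$, each factor lands in $SU(4)$ automatically; the asserts confirm the product does too.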
Related papers
- Towards parameterizing the entanglement body of a qubit pair [0.0]
The method is based on constructing coordinates on a generic section of the 2-qubit entanglement space.
The subset $\mathcal{SE}_{2\times 2} \subset \mathcal{E}_{2\times 2}$ corresponding to rank-4 2-qubit states is described.
arXiv Detail & Related papers (2024-11-26T17:32:43Z)
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z)
- Efficient Continual Finite-Sum Minimization [52.5238287567572]
We propose a key twist on finite-sum minimization, dubbed continual finite-sum minimization.
Our approach significantly improves upon the $\mathcal{O}(n/\epsilon)$ first-order oracle calls (FOs) that $\mathrm{StochasticGradientDescent}$ requires.
We also prove that there is no natural first-order method with $\mathcal{O}\left(n/\epsilon^{\alpha}\right)$ gradient complexity for $\alpha < 1/4$, establishing that the first-order complexity of our method is nearly tight.
arXiv Detail & Related papers (2024-06-07T08:26:31Z)
- Locality Regularized Reconstruction: Structured Sparsity and Delaunay Triangulations [7.148312060227714]
Linear representation learning is widely studied due to its conceptual simplicity and empirical utility in tasks such as compression, classification, and feature extraction.
In this work we seek $\mathbf{w}$ that forms a local reconstruction of $\mathbf{y}$ by solving a regularized least squares regression problem.
We prove that, for all levels of regularization and under a mild condition that the columns of $\mathbf{X}$ have a unique Delaunay triangulation, the number of non-zero entries of the optimal coefficient vector is upper bounded by $d+1$.
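For concreteness, one common shape of such an objective is a locality-weighted ridge problem, which has a closed-form solution. The sketch below is an illustrative guess at the form of the problem, not the paper's exact objective or constraints.

```python
import numpy as np

def locality_regularized_ls(X, y, lam):
    """Solve min_w ||X w - y||^2 + lam * sum_j w_j^2 ||x_j - y||^2,
    an illustrative locality-weighted ridge objective (assumed form,
    not necessarily the paper's). X: (d, n) with columns x_j; y: (d,)."""
    D = np.diag(((X - y[:, None]) ** 2).sum(axis=0))  # squared distances
    return np.linalg.solve(X.T @ X + lam * D, X.T @ y)
```

Penalizing $w_j$ by the distance of column $x_j$ from $\mathbf{y}$ is what pushes the solution toward a *local* reconstruction: far-away columns are expensive to use, so mass concentrates on nearby ones.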
arXiv Detail & Related papers (2024-05-01T19:56:52Z)
- Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture, setting it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
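As a reminder of the object being learned, here is a minimal forward pass of a generic multi-head attention layer with $m$ heads; the paper's exact parameterization of the layer may differ.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Qs, Ks, Vs):
    """X: (seq_len, d); Qs/Ks/Vs: lists of m per-head projection
    matrices of shape (d, d_k). Returns the m heads concatenated."""
    heads = []
    for Q, K, V in zip(Qs, Ks, Vs):
        scores = (X @ Q) @ (X @ K).T / np.sqrt(Q.shape[1])
        heads.append(softmax(scores, axis=-1) @ (X @ V))
    return np.concatenate(heads, axis=-1)
```

The exponential dependence on $m$ in the lower bound refers to learning all $m$ triples $(Q_i, K_i, V_i)$ jointly from random examples.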
arXiv Detail & Related papers (2024-02-06T15:39:09Z)
- Completely Bounded Norms of $k$-positive Maps [41.78224056793453]
Given an operator system $\mathcal{S}$, we define the parameters $r_k(\mathcal{S})$ (resp. $d_k(\mathcal{S})$).
We show that the sequence $(r_k(\mathcal{S}))$ tends to $1$ if and only if $\mathcal{S}$ is exact, and that the sequence $(d_k(\mathcal{S}))$ tends to $1$ if and only if $\mathcal{S}$ has the lifting property.
arXiv Detail & Related papers (2024-01-22T20:37:14Z)
- Increasing subsequences, matrix loci, and Viennot shadows [0.0]
We show that the quotient $\mathbb{F}[\mathbf{x}_{n \times n}]/I_n$ admits a standard monomial basis.
We also calculate the structure of $\mathbb{F}[\mathbf{x}_{n \times n}]/I_n$ as a graded $\mathfrak{S}_n \times \mathfrak{S}_n$-module.
arXiv Detail & Related papers (2023-06-14T19:48:01Z)
- Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\,\mathrm{OPT} + \epsilon$ with high probability.
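For intuition, here is a plain gradient-descent baseline for this setting with $\sigma = \mathrm{ReLU}$ and the empirical square loss; this is an illustrative sketch, not the algorithm or guarantees analyzed in the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def gd_single_neuron(X, y, lr=0.1, steps=500, seed=0):
    """Gradient descent on F(w) = mean((relu(X @ w) - y)^2).
    Illustrative baseline only. X: (n, d); y: (n,)."""
    rng = np.random.default_rng(seed)
    # Small random init: at w = 0 the ReLU (sub)gradient vanishes
    # everywhere, so descent would never leave the origin.
    w = rng.normal(size=X.shape[1]) / np.sqrt(X.shape[1])
    n = len(y)
    for _ in range(steps):
        z = X @ w
        grad = (2.0 / n) * (X.T @ ((relu(z) - y) * (z > 0)))
        w -= lr * grad
    return w
```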
arXiv Detail & Related papers (2022-06-17T17:55:43Z)
- Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
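For context, the standard randomized block Krylov iteration below is the kind of matrix-vector-product method this line of work improves upon; it is shown as a baseline, not as the paper's $\tilde{O}(k/\sqrt{\epsilon})$ algorithm.

```python
import numpy as np

def block_krylov_lowrank(A, k, q, seed=0):
    """Rank-k approximation of A from q rounds of block matrix-vector
    products (classical randomized block Krylov; baseline only)."""
    n = A.shape[1]
    S = np.random.default_rng(seed).normal(size=(n, k))
    blocks = [A @ S]
    for _ in range(q - 1):
        blocks.append(A @ (A.T @ blocks[-1]))  # builds (A A^T)^i A S
    Q, _ = np.linalg.qr(np.hstack(blocks))     # orthonormal Krylov basis
    B = Q.T @ A                                # project A onto span(Q)
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return ((Q @ U[:, :k]) * s[:k]) @ Vt[:k]   # best rank-k in the span
```

Each loop iteration costs two matrix-vector product batches, so the total product count is what such algorithms are measured by.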
arXiv Detail & Related papers (2022-02-10T16:10:41Z)
- Topological entanglement and hyperbolic volume [1.1909611351044664]
Chern-Simons theory provides a setting to visualise the $m$-th moment of the reduced density matrix as a three-manifold invariant $Z(M_{\mathcal{K}_m})$.
For the SU(2) group, we show that $Z(M_{\mathcal{K}_m})$ can grow at most polynomially in $k$.
We conjecture that $\ln Z(M_{\mathcal{K}_m})$ is the hyperbolic volume of the knot complement $S^3\backslash \mathcal{K}_m$.
arXiv Detail & Related papers (2021-06-07T07:51:03Z)
- Linear Bandits on Uniformly Convex Sets [88.3673525964507]
Linear bandit algorithms yield $\tilde{\mathcal{O}}(n\sqrt{T})$ pseudo-regret bounds on compact convex action sets.
Two types of structural assumptions lead to better pseudo-regret bounds.
arXiv Detail & Related papers (2021-03-10T07:33:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.