Symmetry reduction for testing $k$-block-positivity via extendibility
- URL: http://arxiv.org/abs/2505.22100v1
- Date: Wed, 28 May 2025 08:26:34 GMT
- Title: Symmetry reduction for testing $k$-block-positivity via extendibility
- Authors: Qian Chen, Benoît Collins, Omar Fawzi
- Abstract summary: We exploit the unitary symmetry of the maximally entangled state to reduce the size of the corresponding semidefinite programs. For example, for $k=2$, the SDP is reduced from one block of size $2^{N+1} d^{N+1}$ to $\lfloor \frac{N+1}{2} \rfloor$ blocks of size $\approx O( (N-1)^{-1} 2^{N+1} d^{N+1} )$.
- Score: 8.247261926186477
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of testing $k$-block-positivity via symmetric $N$-extendibility by taking the tensor product with a $k$-dimensional maximally entangled state. We exploit the unitary symmetry of the maximally entangled state to reduce the size of the corresponding semidefinite programs (SDP). For example, for $k=2$, the SDP is reduced from one block of size $2^{N+1} d^{N+1}$ to $\lfloor \frac{N+1}{2} \rfloor$ blocks of size $\approx O( (N-1)^{-1} 2^{N+1} d^{N+1} )$.
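To make the claimed reduction concrete, here is a small arithmetic sketch (our own illustration; the values $N=9$, $d=3$ are arbitrary, and the constant hidden in the $O(\cdot)$ is not specified in the abstract):

```python
def sdp_sizes(N: int, d: int):
    """Compare SDP block sizes for k = 2, per the abstract's counts.

    `full` is the single block of size 2^(N+1) d^(N+1); the reduced SDP
    uses floor((N+1)/2) blocks of size ~ (N-1)^(-1) 2^(N+1) d^(N+1)
    (the O(.) constant is unknown, so this is only indicative).
    """
    full = 2 ** (N + 1) * d ** (N + 1)
    num_blocks = (N + 1) // 2
    approx_block = full / (N - 1)  # up to the unknown O(.) constant
    return full, num_blocks, approx_block

# Example with hypothetical parameters N = 9, d = 3.
full, blocks, size = sdp_sizes(9, 3)
print(f"one block of {full}  vs  {blocks} blocks of ~{size:.3g} each")
```

Since interior-point solvers scale superlinearly in the size of the largest block, replacing one block of size $2^{N+1} d^{N+1}$ with $\lfloor \frac{N+1}{2} \rfloor$ blocks that are each roughly a factor $N-1$ smaller is what makes the reduced SDP cheaper to solve.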
Related papers
- On the Intrinsic Dimensions of Data in Kernel Learning [1.675218291152252]
We show that for kernels such as the Laplace kernel, the effective dimension $d_K$ can be significantly smaller than the Minkowski dimension of the data; this provably holds on regular domains.
arXiv Detail & Related papers (2026-01-22T17:32:24Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Lower bounds on entanglement entropy without twin copy [0.0]
We calculate the Shannon entropy $S^X_{AB}$ associated with the bitstrings of adiabatically prepared ground states. We show that for a broad range of lattice spacing and detuning, $I^X_{AB}$ is typically 20 percent below $S^{vN}_A$ in regions where $S^{vN}_A$ is large.
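These classical quantities can be computed directly from measurement samples. Below is a minimal sketch using the standard definitions of Shannon entropy and measured mutual information, $I^X_{AB} = S^X_A + S^X_B - S^X_{AB}$; the random bitstrings and the A/B split are placeholders for the paper's adiabatically prepared ground-state data.

```python
import numpy as np
from collections import Counter

def shannon_entropy(samples):
    """Empirical Shannon entropy (nats) of a list of hashable outcomes."""
    counts = Counter(samples)
    n = len(samples)
    probs = np.array([c / n for c in counts.values()])
    return float(-np.sum(probs * np.log(probs)))

# Toy data: bitstrings over a region split into halves A and B
# (stand-ins for the measured ground-state bitstrings in the paper).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(5000, 8))
A = [tuple(b[:4]) for b in bits]
B = [tuple(b[4:]) for b in bits]
AB = list(zip(A, B))

# Classical mutual information of the measurement outcomes,
# I^X_AB = S^X_A + S^X_B - S^X_AB; the paper uses quantities of this
# kind to lower-bound the entanglement entropy S_A^vN (see the paper
# for the exact statement of the bound).
I_AB = shannon_entropy(A) + shannon_entropy(B) - shannon_entropy(AB)
print(f"I^X_AB = {I_AB:.4f} nats")
```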
arXiv Detail & Related papers (2024-04-15T17:02:17Z) - Estimating the Mixing Coefficients of Geometrically Ergodic Markov
Processes [5.00389879175348]
We estimate the individual $\beta$-mixing coefficients of a real-valued geometrically ergodic Markov process from a single sample path.
Naturally, no density assumptions are required in this setting; the expected error rate is shown to be of order $\mathcal{O}(\log(n)\, n^{-1/2})$.
arXiv Detail & Related papers (2024-02-11T20:17:10Z) - On the $O(\frac{\sqrt{d}}{T^{1/4}})$ Convergence Rate of RMSProp and Its Momentum Extension Measured by $\ell_1$ Norm [54.28350823319057]
This paper considers RMSProp and its momentum extension and establishes the convergence rate $\frac{1}{T}\sum_{k=1}^{T} E\left[\|\nabla f(x^k)\|_1\right] \leq O(\frac{\sqrt{d}}{T^{1/4}})$. Our convergence rate matches the lower bound with respect to all the coefficients except the dimension $d$. Our convergence rate can be considered to be analogous to the $\frac{1}{T}\sum_{k=1}^{T} E\left[\|\nabla f(x^k)\|_2^2\right]^{1/2} \leq O(\frac{1}{T^{1/4}})$ rate of SGD.
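For reference, this is the textbook RMSProp update the analysis concerns (a generic sketch, not code from the paper; the step size, decay, and toy objective are illustrative):

```python
import numpy as np

def rmsprop(grad, x0, lr=1e-2, beta=0.99, eps=1e-8, steps=500):
    """Textbook RMSProp: scale the gradient by a running RMS of its
    coordinates. Returns the final iterate."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)  # running second-moment estimate
    for _ in range(steps):
        g = grad(x)
        v = beta * v + (1 - beta) * g * g
        x = x - lr * g / (np.sqrt(v) + eps)
    return x

# Minimize f(x) = ||x||^2 / 2 from an all-ones start (toy check).
x_final = rmsprop(lambda x: x, np.ones(10))
print(np.linalg.norm(x_final))  # should be close to 0
```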
arXiv Detail & Related papers (2024-02-01T07:21:32Z) - Near Sample-Optimal Reduction-based Policy Learning for Average Reward
MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials in the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - The Approximate Degree of DNF and CNF Formulas [95.94432031144716]
For every $\delta>0$, we construct CNF and DNF formulas of polynomial size with approximate degree $\Omega(n^{1-\delta})$, essentially matching the trivial upper bound of $n$.
We show that for every $\delta>0$, these models require $\Omega(n^{1-\delta})$, $\Omega((n/4^{k}k^{2})^{1-\delta})$, and $\Omega((n/4^{k}k^{2})^{1-\delta})$, respectively.
arXiv Detail & Related papers (2022-09-04T10:01:39Z) - Classical shadows of fermions with particle number symmetry [0.0]
We provide an estimator for any $k$-RDM with $\mathcal{O}(k^{2}\eta)$ classical complexity.
Our method, in the worst case of half-filling, still provides a factor of $4^k$ advantage in sample complexity.
arXiv Detail & Related papers (2022-08-18T17:11:12Z) - Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor
Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
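For orientation, the classical TT-SVD baseline below computes a tensor train by sweeping truncated SVDs over the unfoldings; it is not the paper's bicriteria algorithm and does not achieve the $O(q \cdot \mathrm{nnz}(A))$ running time, and the tensor and rank are illustrative.

```python
import numpy as np

def tt_svd(tensor, rank):
    """Standard TT-SVD (Oseledets): sweep left to right, truncating
    each unfolding's SVD at `rank`."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

# Toy check: a small random tensor is reconstructed exactly at full rank.
T = np.random.rand(4, 5, 6)
cores = tt_svd(T, rank=20)
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=([-1], [0]))
approx = approx.reshape(T.shape)  # squeeze the leading/trailing rank-1 axes
print(np.linalg.norm(T - approx) / np.linalg.norm(T))  # ~1e-15
```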
arXiv Detail & Related papers (2022-07-15T11:55:09Z) - Computational Complexity of Normalizing Constants for the Product of
Determinantal Point Processes [12.640283469603357]
We study the computational complexity of computing the normalizing constant.
We show that computing $\sum_S \det(\mathbf{A}_{S,S})^p$ exactly for every (fixed) positive even integer $p$ is UP-hard and Mod$_3$P-hard.
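The quantity in question is a sum of powered principal minors over all subsets $S$. A brute-force evaluation (exponential in $n$, consistent with the hardness result) makes it concrete; for $p=1$ the sum collapses to $\det(A+I)$, which the snippet uses as a sanity check:

```python
import numpy as np
from itertools import combinations

def dpp_product_normalizer(A, p):
    """Brute-force sum over all subsets S of det(A[S, S])^p, the
    normalizing constant for a product of p DPPs with kernel A."""
    n = A.shape[0]
    total = 0.0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            idx = np.array(S, dtype=int)
            det = 1.0 if size == 0 else np.linalg.det(A[np.ix_(idx, idx)])
            total += det ** p
    return total

# Sanity check: for p = 1, the sum equals det(A + I).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
print(dpp_product_normalizer(A, 1), np.linalg.det(A + np.eye(2)))
```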
arXiv Detail & Related papers (2021-11-28T14:08:25Z) - Breaking The Dimension Dependence in Sparse Distribution Estimation
under Communication Constraints [18.03695167610009]
We show that when the sample size $n$ exceeds a minimum threshold $n^*(s, d, b)$, we can achieve an $\ell_2$ estimation error of $O\left(\frac{s}{n 2^b}\right)$.
For the interactive setting, we propose a novel tree-based estimation scheme and show that the minimum sample size needed to achieve dimension-free convergence can be further reduced to $n^*(s, d, b)$.
arXiv Detail & Related papers (2021-06-16T07:52:14Z) - SDP Achieves Exact Minimax Optimality in Phase Synchronization [19.909352968029584]
We study the phase synchronization problem with noisy measurements $Y = z^* z^{*\mathrm{H}} + \sigma W \in \mathbb{C}^{n \times n}$.
We prove that SDP achieves the error bound $(1+o(1))\frac{\sigma^2}{2np}$ under a squared $\ell_2$ loss.
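A quick way to get a feel for the model is to simulate it. The sketch below generates $Y$ and runs the simple spectral baseline (leading eigenvector), not the SDP analyzed in the paper; $n$ and $\sigma$ are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 200, 0.5

# Ground-truth phases z*_j = e^{i theta_j} and noisy pairwise measurements
# Y = z* z*^H + sigma * W with Hermitian complex Gaussian noise W.
z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
W = (G + G.conj().T) / np.sqrt(2)
Y = np.outer(z, z.conj()) + sigma * W

# Spectral baseline (not the paper's SDP): take the leading eigenvector
# of Y and project its entries onto the unit circle.
vals, vecs = np.linalg.eigh(Y)
v = vecs[:, -1]
z_hat = v / np.abs(v)

# Squared l2 loss modulo the global phase ambiguity.
inner = np.vdot(z_hat, z)
phase = inner / abs(inner)
err = np.sum(np.abs(z - z_hat * phase) ** 2) / n
print(f"normalized squared error: {err:.4f}")
```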
arXiv Detail & Related papers (2021-01-07T03:14:05Z) - Sparse sketches with small inversion bias [79.77110958547695]
Inversion bias arises when averaging estimates of quantities that depend on the inverse covariance.
We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices.
We show that when the sketching matrix $S$ is dense and has i.i.d. sub-gaussian entries, the estimator is $(\epsilon,\delta)$-unbiased for $(A^\top A)^{-1}$ with a sketch of size $m = O(d + \sqrt{d}/\epsilon)$.
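Inversion bias is easy to reproduce numerically: averaging sketched inverses $((SA)^\top SA)^{-1}$ does not concentrate around $(A^\top A)^{-1}$, even though each sketch is unbiased for $A^\top A$ itself. A minimal demo (plain Gaussian sketch and arbitrary sizes of our choosing; the sparse sketches constructed in the paper are designed to shrink exactly this gap):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m, trials = 400, 10, 40, 2000

A = rng.standard_normal((n, d))
target = np.linalg.inv(A.T @ A)

# Average many independent sketched inverses ((S A)^T (S A))^{-1}.
# E[(SA)^T (SA)] = A^T A, but the expectation of the *inverse* is
# inflated -- this systematic gap is the inversion bias.
avg = np.zeros((d, d))
for _ in range(trials):
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sketch
    SA = S @ A
    avg += np.linalg.inv(SA.T @ SA)
avg /= trials

rel_bias = np.linalg.norm(avg - target) / np.linalg.norm(target)
print(f"relative deviation of the averaged inverse: {rel_bias:.3f}")
```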
arXiv Detail & Related papers (2020-11-21T01:33:15Z) - Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
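As context for what is being extended, here is randomized smoothing in its minimal Monte-Carlo form: classify many Gaussian perturbations of the input and take the majority vote. The base classifier and $\sigma$ below are stand-ins of ours, not the paper's setup:

```python
import numpy as np
from collections import Counter

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, rng=None):
    """Monte-Carlo randomized smoothing: classify Gaussian perturbations
    of x and return the majority vote with its empirical frequency."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal((n_samples, x.size)) * sigma
    votes = Counter(base_classifier(x + eta) for eta in noise)
    label, count = votes.most_common(1)[0]
    return label, count / n_samples

# Stand-in base classifier: sign of a fixed linear score (illustrative only).
w = np.ones(32) / np.sqrt(32)
clf = lambda v: int(v @ w > 0)

x = np.full(32, 0.1)
label, freq = smoothed_predict(clf, x)
print(label, freq)  # majority class and how stable it is under noise
```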
arXiv Detail & Related papers (2020-02-08T22:02:14Z)