Efficiently stable presentations from error-correcting codes
- URL: http://arxiv.org/abs/2311.04681v1
- Date: Wed, 8 Nov 2023 13:40:13 GMT
- Title: Efficiently stable presentations from error-correcting codes
- Authors: Michael Chapman, Thomas Vidick, Henry Yuen
- Abstract summary: We provide a general method for constructing presentations of $\mathbb{Z}_2^k$ from linear error-correcting codes.
We observe that the resulting presentation has a weak form of stability exactly when the code is \emph{testable}.
While we cannot show in general that testable codes give rise to genuinely stable presentations, we leverage recent results in the study of non-local games in quantum information theory.
- Score: 5.69361786082969
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We introduce a notion of \emph{efficient stability} for finite presentations
of groups. Informally, a finite presentation using generators $S$ and relations
$R$ is \emph{stable} if any map from $S$ to unitaries that approximately
satisfies the relations (in the tracial norm) is close to the restriction of a
representation of $G$ to the subset $S$. This notion and variants thereof have
been extensively studied in recent years, in part motivated by connections to
property testing in computer science. The novelty in our work is the focus on
\emph{efficiency}, which, informally, places an onus on small presentations --
in the sense of encoding length. The goal in this setup is to achieve
non-trivial tradeoffs between the presentation length and its modulus of
stability.
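To fix ideas, here is a minimal, hedged formalization of the stability notion just described; the notation is ours, not necessarily the paper's, and details (such as the dimension on which the representation $\rho$ acts) may differ in the actual definition. Writing $G = \langle S \mid R \rangle$ and letting $\phi$ map each generator in $S$ to a unitary:
\[
  % hedged sketch of the definition; notation ours, not the paper's verbatim statement
  \big(\forall r \in R:\ \|\phi(r) - \mathbb{1}\|_{\tau} \le \varepsilon\big)
  \;\Longrightarrow\;
  \exists\ \text{a representation}\ \rho\ \text{of}\ G\ \text{with}\
  \max_{s \in S} \|\phi(s) - \rho(s)\|_{\tau} \le \delta(\varepsilon),
\]
where $\|A\|_\tau = \sqrt{\tau(A^*A)}$ is the tracial norm (with $\tau$ the normalized trace), $\phi(r)$ denotes the word $r$ evaluated on the unitaries $\phi(s)$, and $\delta(\varepsilon) \to 0$ as $\varepsilon \to 0$ is the modulus of stability; the efficiency question is how $\delta$ trades off against the encoding length of $(S, R)$.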
With this goal in mind we analyze various natural examples of presentations.
We provide a general method for constructing presentations of $\mathbb{Z}_2^k$
from linear error-correcting codes. We observe that the resulting presentation
has a weak form of stability exactly when the code is \emph{testable}. This
raises the question of whether testable codes give rise to genuinely stable
presentations using this method. While we cannot show that this is the case in
general, we leverage recent results in the study of non-local games in quantum
information theory (Ji et al., Discrete Analysis 2021) to show that a specific
instantiation of our construction, based on the Reed-Muller family of codes,
leads to a stable presentation of $\mathbb{Z}_2^k$ of size polylog$(k)$ only.
As an application, we combine this result with recent work of de la Salle
(arXiv:2204.07084) to re-derive the quantum low-degree test of Natarajan and
Vidick (IEEE FOCS'18), which is a key building block in the recent refutation
of Connes' Embedding Problem via complexity theory (Ji et al.,
arXiv:2001.04383).
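As a concrete, heavily hedged illustration of the kind of code-to-presentation construction the abstract describes, the following Python sketch builds relations for a presentation of $\mathbb{Z}_2^k$ from a toy binary linear code; the generator matrix, the choice of parity relations from the dual code, and all names are our own assumptions for illustration, not the paper's actual construction.

import itertools
import numpy as np

# Hedged illustration (NOT the paper's verbatim method): one natural way to turn a
# binary linear code C <= F_2^n of dimension k into a presentation of Z_2^k.
# Generators x_1, ..., x_n, one per codeword coordinate; relations x_i^2 = 1,
# [x_i, x_j] = 1, and, for every dual codeword w in C^perp, the parity relation
# prod_{i : w_i = 1} x_i = 1.  The quotient is F_2^n / C^perp ~ F_2^k = Z_2^k.

# Toy generator matrix of a hypothetical [n = 4, k = 2] code (illustrative only).
G = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]], dtype=int)
k, n = G.shape

def dual_codewords(G):
    """Brute-force the dual code C^perp = {w in F_2^n : G w = 0 (mod 2)}."""
    n = G.shape[1]
    for bits in itertools.product([0, 1], repeat=n):
        w = np.array(bits, dtype=int)
        if not ((G @ w) % 2).any():
            yield w

# One parity relation per nonzero dual codeword.
relations = [tuple(np.flatnonzero(w) + 1) for w in dual_codewords(G) if w.any()]
print(f"presentation of Z_2^{k}: {n} generators, {len(relations)} parity relations")
for rel in relations:
    print("  " + " * ".join(f"x{i}" for i in rel) + " = 1")

# Sanity check on the group order: |F_2^n / C^perp| = 2^n / 2^(n-k) = 2^k = |Z_2^k|.
assert 2 ** n // 2 ** (n - k) == 2 ** k

In this toy version the presentation length is governed by how many (and how long) parity relations one writes down, which is plausibly where testability and the Reed-Muller instantiation enter; again, this is a sketch under our assumptions rather than the paper's method.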
Related papers
- Monge-Kantorovich Fitting With Sobolev Budgets [6.748324975906262]
We quantify the performance of the approximation with the Monge-Kantorovich $p$-cost.
We may then reformulate the problem as minimizing a functional $\mathscr{J}_p(f)$ under a constraint on the Sobolev budget.
arXiv Detail & Related papers (2024-09-25T01:30:16Z) - Fast Rates for Bandit PAC Multiclass Classification [73.17969992976501]
We study multiclass PAC learning with bandit feedback, where inputs are classified into one of $K$ possible labels and feedback is limited to whether or not the predicted labels are correct.
Our main contribution is in designing a novel learning algorithm for the agnostic $(\varepsilon,\delta)$-PAC version of the problem.
arXiv Detail & Related papers (2024-06-18T08:54:04Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Tighter PAC-Bayes Bounds Through Coin-Betting [31.148069991567215]
We show how to refine the proof strategy of the PAC-Bayes bounds and achieve \emph{even} tighter guarantees.
We derive the first PAC-Bayes concentration inequality based on the coin-betting approach that holds simultaneously for all sample sizes.
arXiv Detail & Related papers (2023-02-12T01:16:27Z) - Near-optimal fitting of ellipsoids to random points [68.12685213894112]
A basic problem of fitting an ellipsoid to random points has connections to low-rank matrix decompositions, independent component analysis, and principal component analysis.
We resolve this conjecture up to logarithmic factors by constructing a fitting ellipsoid for some $n = \Omega(d^2/\mathrm{polylog}(d))$.
Our proof demonstrates feasibility of the least squares construction of Saunderson et al. using a convenient decomposition of a certain non-standard random matrix.
arXiv Detail & Related papers (2022-08-19T18:00:34Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations $\sigma$.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\,\epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - There is no Accuracy-Interpretability Tradeoff in Reinforcement Learning
for Mazes [64.05903267230467]
Interpretability is an essential building block for trustworthiness in reinforcement learning systems.
We show that in certain cases, one can achieve policy interpretability while maintaining its optimality.
arXiv Detail & Related papers (2022-06-09T04:23:26Z) - Learning (Very) Simple Generative Models Is Hard [45.13248517769758]
We show that no polynomial-time algorithm can solve this problem even when the output coordinates of the map $\mathbb{R}^d\to\mathbb{R}^{d'}$ are one-hidden-layer ReLU networks with $\mathrm{poly}(d)$ neurons.
A key ingredient in our proof is an ODE-based construction of a compactly supported, piecewise-linear function $f$ with polynomially bounded slopes such that the pushforward of $\mathcal{N}(0,1)$ under $f$ matches all low-degree moments of $\mathcal{N}(0,1)$.
arXiv Detail & Related papers (2022-05-31T17:59:09Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound of $\Omega(\sqrt{n/p})$ for interpolating neural networks with $p$ parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Classical Dynamics from Self-Consistency Equations in Quantum Mechanics
-- Extended Version [0.0]
We propose a new mathematical approach to Bona's non-linear generalization of quantum mechanics.
It highlights the central role of self-consistency.
Some new mathematical concepts are introduced, which are possibly interesting by themselves.
arXiv Detail & Related papers (2020-09-10T16:20:25Z) - A New Minimax Theorem for Randomized Algorithms [1.2284934135116514]
We introduce a new type of minimax theorem which can provide a hard distribution $\mu$ that works for all bias levels at once.
We show that this works for randomized query complexity, randomized communication complexity, approximate degree, and approximate logrank.
We also prove an improved version of Impagliazzo's hardcore lemma.
arXiv Detail & Related papers (2020-02-25T11:46:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.