Mildly-Interacting Fermionic Unitaries are Efficiently Learnable
- URL: http://arxiv.org/abs/2504.11318v1
- Date: Tue, 15 Apr 2025 15:59:32 GMT
- Title: Mildly-Interacting Fermionic Unitaries are Efficiently Learnable
- Authors: Vishnu Iyer
- Abstract summary: We show that an algorithm can learn near-Gaussian fermionic unitaries of Gaussian dimension at least $2n - O(t)$ in time $\textrm{poly}(n,2^t,1/\varepsilon)$. We also prove structural results about near-Gaussian fermionic unitaries that are likely to be of independent interest.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has shown that one can efficiently learn fermionic Gaussian unitaries, also commonly known as nearest-neighbor matchcircuits or non-interacting fermionic unitaries. However, one could ask a similar question about unitaries that are near Gaussian: for example, unitaries prepared with a small number of non-Gaussian circuit elements. These operators find significance in quantum chemistry and many-body physics, yet no algorithm exists to learn them. We give the first such result by devising an algorithm which makes queries to an $n$-mode fermionic unitary $U$ prepared by at most $O(t)$ non-Gaussian gates and returns a circuit approximating $U$ to diamond distance $\varepsilon$ in time $\textrm{poly}(n,2^t,1/\varepsilon)$. This resolves a central open question of Mele and Herasymenko under the strongest distance metric. In fact, our algorithm is much more general: we define a property of unitary Gaussianity known as unitary Gaussian dimension and show that our algorithm can learn $n$-mode unitaries of Gaussian dimension at least $2n - O(t)$ in time $\textrm{poly}(n,2^t,1/\varepsilon)$. Indeed, this class subsumes unitaries prepared by at most $O(t)$ non-Gaussian gates but also includes several unitaries that require up to $2^{O(t)}$ non-Gaussian gates to construct. In addition, we give a $\textrm{poly}(n,1/\varepsilon)$-time algorithm to distinguish whether an $n$-mode unitary is of Gaussian dimension at least $k$ or $\varepsilon$-far from all such unitaries in Frobenius distance, promised that one is the case. Along the way, we prove structural results about near-Gaussian fermionic unitaries that are likely to be of independent interest.
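The following minimal sketch (our illustration, not the paper's algorithm) makes the notions above concrete on a tiny system: it builds the $2n$ Jordan-Wigner Majorana operators, forms a unitary from one Gaussian (quadratic) gate and one non-Gaussian (quartic) gate, and counts how many independent Majorana combinations $\sum_i v_i \gamma_i$ are mapped back into $\mathrm{span}\{\gamma_j\}$ under conjugation. Reading that count as the Gaussian dimension is our interpretation of the abstract, and the dense $2^n \times 2^n$ matrices are for illustration only, not an efficient procedure.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def majoranas(n):
    """Jordan-Wigner Majorana operators gamma_0, ..., gamma_{2n-1} on n modes."""
    gs = []
    for k in range(n):
        pre, post = [Z] * k, [I2] * (n - k - 1)
        gs.append(reduce(np.kron, pre + [X] + post))
        gs.append(reduce(np.kron, pre + [Y] + post))
    return gs

n = 3
d = 2 ** n
g = majoranas(n)

# Gaussian gate: exp(theta * gamma_a gamma_b)  (quadratic, anti-Hermitian exponent).
# Non-Gaussian gate: exp(i theta * gamma_0 gamma_1 gamma_2 gamma_3)  (quartic, t = 1).
U = expm(1j * 0.3 * (g[0] @ g[1] @ g[2] @ g[3])) @ expm(0.2 * (g[2] @ g[5]))
Udag = U.conj().T

# For each gamma_i, split U gamma_i U^dag into its component in span{gamma_j}
# plus a residual; the kernel of the linear map v -> sum_i v_i * residual_i is
# exactly the set of Majorana combinations that U keeps linear.
residuals = []
for gi in g:
    A = U @ gi @ Udag
    coeffs = [np.trace(gj.conj().T @ A) / d for gj in g]
    A_lin = sum(c * gj for c, gj in zip(coeffs, g))
    residuals.append((A - A_lin).ravel())

rank = np.linalg.matrix_rank(np.array(residuals), tol=1e-9)
print("Gaussian dimension (this sketch's reading):", 2 * n - rank)
# The single quartic gate touches gamma_0..gamma_3, so this reports 2n - 4 = 2,
# consistent with the abstract's "Gaussian dimension at least 2n - O(t)".
```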
Related papers
- Efficiently learning fermionic unitaries with few non-Gaussian gates [0.0]
We present an algorithm that learns an $n$-mode circuit containing parity-preserving non-Gaussian gates.
Our approach relies on learning approximate Gaussian unitaries that transform the circuit into one that acts non-trivially only on a constant number of Majorana operators.
arXiv Detail & Related papers (2025-04-21T18:02:39Z)
- Implicit High-Order Moment Tensor Estimation and Learning Latent Variable Models [39.33814194788341]
We develop a general efficient algorithm for implicit moment tensor estimation. By leveraging this implicit moment estimation algorithm, we obtain the first $\mathrm{poly}(d, k, 1/\epsilon)$-time learning algorithms for several latent variable models.
arXiv Detail & Related papers (2024-11-23T23:13:24Z)
- Efficient Statistics With Unknown Truncation, Polynomial Time Algorithms, Beyond Gaussians [7.04316974339151]
We study the estimation of distributional parameters when samples are shown only if they fall in some unknown set.
We develop tools that may be of independent interest, including a reduction from PAC learning with positive and unlabeled samples to PAC learning with positive and negative samples.
arXiv Detail & Related papers (2024-10-02T15:21:07Z)
- A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z)
- Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z)
- Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ with $F(\mathbf{w}) = C \cdot \mathrm{OPT} + \epsilon$ with high probability (a minimal sketch of this setup follows this entry).
arXiv Detail & Related papers (2022-06-17T17:55:43Z)
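Since this entry is about plain gradient descent on the square loss, here is a self-contained sketch of that setup under assumed specifics (ReLU activation, standard Gaussian inputs, 2% adversarially shifted labels, and a moment-based warm start); it illustrates the learning problem, not the paper's analysis or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 5000
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)

# Assumed specifics for illustration: sigma = ReLU, x ~ N(0, I_d),
# and an adversary shifting 2% of the labels (so OPT > 0).
X = rng.normal(size=(m, d))
y = np.maximum(X @ w_star, 0.0)
bad = rng.choice(m, size=m // 50, replace=False)
y[bad] += rng.normal(scale=2.0, size=bad.size)

# Plain (sub)gradient descent on the empirical square loss
# F_hat(w) = mean_i (sigma(w . x_i) - y_i)^2.
w = X.T @ y / m          # moment-based warm start, correlated with w_star
eta = 0.1
for _ in range(500):
    margins = X @ w
    grad = (2 / m) * X.T @ ((np.maximum(margins, 0.0) - y) * (margins > 0))
    w -= eta * grad

loss = np.mean((np.maximum(X @ w, 0.0) - y) ** 2)
print(f"empirical loss {loss:.4f}; alignment with w*: {w @ w_star / np.linalg.norm(w):.4f}")
```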
- Low-degree learning and the metric entropy of polynomials [44.99833362998488]
We prove that any (deterministic or randomized) algorithm which learns $\mathscr{F}_{n,d}$ with $L_2$-accuracy $\varepsilon$ requires at least $\Omega((1-\sqrt{\varepsilon})\,2^d \log n)$ queries, and that the metric entropy satisfies the two-sided estimate $c(1-\varepsilon)\,2^d \log n \leq \log \mathsf{M}(\mathscr{F}_{n,d}, \|\cdot\|_{L_2}, \varepsilon) \leq \frac{2^{Cd} \log n}{\varepsilon^4}$.
arXiv Detail & Related papers (2022-03-17T23:52:08Z)
- Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z)
- $k$-Forrelation Optimally Separates Quantum and Classical Query Complexity [3.4984289152418753]
We show that any partial function on $N$ bits that can be computed with advantage $\delta$ over a random guess by making $q$ quantum queries can also be computed classically with advantage $\delta/2$ by a randomized decision tree making $O_q(N^{1-1/2q}\,\delta^{-2})$ queries.
We also show that the $k$-Forrelation problem -- a partial function that can be computed with $q = \lceil k/2 \rceil$ quantum queries -- matches this bound, as conjectured by Aaronson and Ambainis.
arXiv Detail & Related papers (2020-08-16T21:26:46Z)
- Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
- Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy (a minimal tabular sketch follows this entry).
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
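For reference, the sketch below runs tabular Q-learning, the textbook model-free baseline, on a synthetic MDP with $S$ states, $A$ actions, and discount $\gamma$; the random MDP, exploration rate, and step-size schedule are invented for illustration, and this is not the paper's algorithm, which achieves stronger sample-complexity bounds.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma = 10, 4, 0.9

# Synthetic MDP (illustration only): random transitions and rewards in [0, 1].
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over next states
R = rng.uniform(size=(S, A))

# Tabular Q-learning is "model-free": it learns from a single stream of
# transitions without ever estimating P.
Q = np.zeros((S, A))
s = 0
for t in range(1, 200_001):
    a = rng.integers(A) if rng.random() < 0.1 else int(Q[s].argmax())  # eps-greedy
    s_next = rng.choice(S, p=P[s, a])
    alpha = 1.0 / (1.0 + 1e-3 * t)           # decaying learning rate
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

# Value iteration on the true model, just to gauge the epsilon achieved.
V = np.zeros(S)
for _ in range(2000):
    V = (R + gamma * P @ V).max(axis=1)
print("sup-norm gap to V*:", np.abs(V - Q.max(axis=1)).max())
```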