PAC global optimization for VQE in low-curvature geometric regimes
- URL: http://arxiv.org/abs/2511.14628v1
- Date: Tue, 18 Nov 2025 16:18:26 GMT
- Title: PAC global optimization for VQE in low-curvature geometric regimes
- Authors: Benjamin Asch,
- Abstract summary: Noise-robust guarantees of global $\varepsilon$-optimality for the Variational Quantum Eigensolver. The low-energy region is modeled as a Morse--Bott submanifold satisfying fiber regularity with respect to coordinate-aligned, embedded flats.
- Score: 0.3553493344868413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We give noise-robust, Probably Approximately Correct (PAC) guarantees of global $\varepsilon$-optimality for the Variational Quantum Eigensolver under explicit geometric conditions. For periodic ansatzes with bounded generators -- yielding a globally Lipschitz cost landscape on a toroidal parameter space -- we assume that the low-energy region containing the global minimum is a Morse--Bott submanifold whose normal Hessian has rank $r = O(\log p)$ for $p$ parameters, and which satisfies polynomial fiber regularity with respect to coordinate-aligned, embedded flats. This low-curvature-dimensional structure serves as a model for regimes in which only a small number of directions control energy variation, and is consistent with mechanisms such as strong parameter tying together with locality in specific multiscale and tied shallow architectures. Under this assumption, the sample complexity required to find an $\varepsilon$-optimal region with confidence $1-\delta$ scales with the curvature dimension $r$ rather than the ambient dimension $p$. With probability at least $1-\delta$, the algorithm outputs a region in which all points are $\varepsilon$-optimal, and at least one lies within a bounded neighborhood of the global minimum. The resulting complexity is quasi-polynomial in $p$ and $\varepsilon^{-1}$ and logarithmic in $\delta^{-1}$. This identifies a geometric regime in which high-probability global optimization remains feasible despite shot noise.
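The claimed scaling can be written schematically as follows. The abstract does not state exact constants or exponents, so this display is an illustrative reading of the stated quasi-polynomial bound, not the paper's precise theorem:

```latex
% Schematic sample-complexity scaling (illustrative; constants/exponents assumed):
% with curvature dimension r = O(\log p),
\[
  N(\varepsilon, \delta)
  \;=\; \left(\frac{p}{\varepsilon}\right)^{O(r)} \log\frac{1}{\delta}
  \;=\; \left(\frac{p}{\varepsilon}\right)^{O(\log p)} \log\frac{1}{\delta},
\]
% quasi-polynomial in p and 1/\varepsilon and logarithmic in 1/\delta,
% since the exponent grows only logarithmically with the ambient dimension p.
```

The point of the display is that the exponent depends on the curvature dimension $r$, not the ambient dimension $p$, which is what keeps the bound below the exponential-in-$p$ cost of naive global search.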
Related papers
- An Information-Minimal Geometry for Qubit-Efficient Optimization [0.0]
We recast qubit-efficient optimization as a geometry problem. The local-consistency problem coincides exactly with the Sherali-Adams level-2 polytope $\mathrm{SA}(2)$.
arXiv Detail & Related papers (2025-11-11T15:38:57Z) - Can SGD Handle Heavy-Tailed Noise? [6.111519084375339]
Stochastic Gradient Descent (SGD) is a workhorse of large-scale machine-learning optimization, yet its theoretical behavior under heavy-tailed noise is poorly understood. We rigorously investigate whether SGD can provably succeed under such adverse conditions.
arXiv Detail & Related papers (2025-08-06T20:09:41Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - ReSQueing Parallel and Private Stochastic Convex Optimization [59.53297063174519]
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density.
We develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings.
arXiv Detail & Related papers (2023-01-01T18:51:29Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov decision processes in the fixed-confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Randomized Coordinate Subgradient Method for Nonsmooth Composite Optimization [11.017632675093628]
Coordinate-type subgradient methods for addressing nonsmooth problems are relatively underexplored due to the restrictiveness of the Lipschitz-type assumptions involved.
arXiv Detail & Related papers (2022-06-30T02:17:11Z) - Stochastic Zeroth order Descent with Structured Directions [10.604744518360464]
We introduce and analyze Structured Zeroth order Descent (SSZD), a finite-difference approach that approximates a gradient on a set of $l \leq d$ directions, where $d$ is the dimension of the ambient space.
For convex functions we prove almost sure convergence with rate $O((d/l)\,k^{-c})$ for every $c < 1/2$, which is arbitrarily close to that of Stochastic Gradient Descent (SGD) in terms of the number of iterations.
arXiv Detail & Related papers (2022-06-10T14:00:06Z) - Non-Euclidean Differentially Private Stochastic Convex Optimization [15.302167005107135]
We show that noisy stochastic gradient descent (SGD) algorithms attain the optimal excess risk in low-dimensional regimes.
Our work draws upon concepts from the geometry of normed spaces, such as the notions of regularity, uniform convexity, and uniform smoothness.
arXiv Detail & Related papers (2021-03-01T19:48:44Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Randomized Bregman Coordinate Descent Methods for Non-Lipschitz Optimization [31.474280642125734]
A new randomized Bregman (block) coordinate descent (CD) method is proposed.
We show that the complexity of the proposed method is $O(n\epsilon^{-2})$ to achieve an $\epsilon$-stationary point, where $n$ is the number of blocks of coordinates.
arXiv Detail & Related papers (2020-01-15T09:57:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.