Finite-Depth, Finite-Shot Guarantees for Constrained Quantum Optimization via Fejér Filtering
- URL: http://arxiv.org/abs/2603.01809v1
- Date: Mon, 02 Mar 2026 12:44:19 GMT
- Title: Finite-Depth, Finite-Shot Guarantees for Constrained Quantum Optimization via Fejér Filtering
- Authors: Chinonso Onah, Kristel Michielsen
- Abstract summary: We show that restricting cost angles to a harmonic lattice exposes a positive Fejér filter acting on the cost-phase unitary $U_C(\gamma)=e^{-i\gamma H_C}$ in a cost-dephased reference model. Under a wrapped phase-separation condition, this yields dimension-free finite-depth and finite-shot lower bounds on the success probability of sampling an optimal solution.
- Score: 0.2578242050187029
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study finite-layer alternations of the \emph{Constraint--Enhanced Quantum Approximate Optimization Algorithm} (CE--QAOA), a constraint-aware ansatz that operates natively on block one-hot manifolds. Our focus is on feasibility and optimality guarantees. We show that restricting cost angles to a harmonic lattice exposes a positive Fejér filter acting on the cost-phase unitary $U_C(\gamma)=e^{-i\gamma H_C}$ \emph{in a cost-dephased reference model (used only for analysis)}. Under a wrapped phase-separation condition, this yields \emph{dimension-free} finite-depth and finite-shot lower bounds on the success probability of sampling an optimal solution. In particular, we obtain a ratio-form guarantee \[ q_0 \;\ge\; \frac{x}{1+x}, \qquad x \;=\; (p{+}1)^2 \sin^2(\delta/2)\,C_\beta, \] where $q_0$ is the single-shot success probability, $C_\beta$ is the mixer-envelope mass on the optimal set, $\delta$ is a phase-gap proxy, and $p$ is the number of layers. Riemann--Lebesgue averaging extends the discussion beyond exact lattice normalization. We conclude by outlining coherent realizations of hardware-efficient positive spectral filters as a main open direction.
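The ratio-form guarantee above is straightforward to evaluate numerically. A minimal sketch follows; the parameter values are illustrative only and are not taken from the paper:

```python
import math

def qaoa_success_lower_bound(p: int, delta: float, c_beta: float) -> float:
    """Ratio-form lower bound q0 >= x / (1 + x) from the abstract,
    with x = (p + 1)^2 * sin^2(delta / 2) * C_beta.

    p      -- number of layers
    delta  -- phase-gap proxy (radians)
    c_beta -- mixer-envelope mass on the optimal set
    """
    x = (p + 1) ** 2 * math.sin(delta / 2) ** 2 * c_beta
    return x / (1 + x)

# Illustrative values: 4 layers, phase gap 0.3, envelope mass 0.05.
bound = qaoa_success_lower_bound(p=4, delta=0.3, c_beta=0.05)
print(f"single-shot success probability >= {bound:.4f}")
```

Because $x \ge 0$, the bound always lies in $[0, 1)$ and increases with the layer count $p$, matching the finite-depth flavor of the guarantee.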
Related papers
- Power Homotopy for Zeroth-Order Non-Convex Optimizations [5.737648067191245]
GS-Power is a novel zeroth-order method for non-convex optimization problems. It consistently ranks among the top three across a suite of competing algorithms. It achieved first place on least-likely targeted black-box attacks against images from ImageNet.
arXiv Detail & Related papers (2025-11-17T16:54:30Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Revisiting Subgradient Method: Complexity and Convergence Beyond Lipschitz Continuity [24.45688490844496]
The subgradient method is one of the most fundamental algorithmic schemes for nonsmooth optimization.
In this work, we first extend the typical iteration complexity results for the subgradient method to cover non-Lipschitz convex and weakly convex minimization.
arXiv Detail & Related papers (2023-05-23T15:26:36Z) - Breaking the Lower Bound with (Little) Structure: Acceleration in
Non-Convex Stochastic Optimization with Heavy-Tailed Noise [28.780192812703948]
We consider the optimization problem with smooth but not necessarily convex objectives in the heavy-tailed noise regime.
We show that one can achieve a faster rate than that dictated by the lower bound $\Omega(T^{\frac{1-p}{3p-2}})$ with only a tiny bit of structure.
We also establish that it guarantees a high-probability convergence rate of $O(\log(T/\delta)\,T^{\frac{1-p}{3p-2}})$ under a mild condition.
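The exponent $\frac{1-p}{3p-2}$ in the rates above can be sanity-checked for boundary values of the heavy-tail moment parameter; a small sketch (the interpretation comment is a standard fact, not quoted from the paper):

```python
def heavy_tail_rate_exponent(p: float) -> float:
    """Exponent (1 - p) / (3p - 2) appearing in the T^{(1-p)/(3p-2)}
    rates above; p in (1, 2] is the heavy-tail moment parameter."""
    return (1.0 - p) / (3.0 * p - 2.0)

# For p = 2 (bounded variance) the exponent is -1/4, i.e. the familiar
# T^{-1/4} rate for nonconvex stochastic optimization.
print(heavy_tail_rate_exponent(2.0))
```

As $p$ decreases toward 1 (heavier tails), the exponent approaches 0, so convergence becomes arbitrarily slow.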
arXiv Detail & Related papers (2023-02-14T00:23:42Z) - CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence [9.11726703830074]
In this paper, we consider solving the distributed optimization problem under the communication restricted setting.
We propose a method over compressed exact diffusion, termed CEDAS, which achieves improved convergence under communication compression.
arXiv Detail & Related papers (2023-01-14T09:49:15Z) - ReSQueing Parallel and Private Stochastic Convex Optimization [59.53297063174519]
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density.
We develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings.
arXiv Detail & Related papers (2023-01-01T18:51:29Z) - Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization [116.89941263390769]
We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}} F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$.
We present an \emph{accelerated gradient-extragradient} (AG-EG) descent-ascent algorithm that combines extragradient steps with Nesterov acceleration.
arXiv Detail & Related papers (2022-06-17T06:10:20Z) - Settling the Sample Complexity of Model-Based Offline Reinforcement
Learning [50.5790774201146]
Offline reinforcement learning (RL) learns using pre-collected data without further exploration.
Prior algorithms or analyses either suffer from suboptimal sample complexities or incur high burn-in cost to reach sample optimality.
We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost.
arXiv Detail & Related papers (2022-04-11T17:26:19Z) - Optimal Control for Closed and Open System Quantum Optimization [0.0]
We provide a rigorous analysis of the quantum optimal control problem in the setting of a linear combination $s(t)B+ (1-s(t))C$ of two noncommuting Hamiltonians.
The target is to minimize the energy of the final ``problem'' Hamiltonian $C$, for a time-dependent and bounded control schedule.
arXiv Detail & Related papers (2021-07-07T22:57:57Z) - Last iterate convergence of SGD for Least-Squares in the Interpolation
regime [19.05750582096579]
We study the noiseless model in the fundamental least-squares setup.
We assume that an optimum predictor fits perfectly inputs and outputs $\langle \theta_*, \phi(X) \rangle = Y$, where $\phi(X)$ stands for a possibly infinite-dimensional non-linear feature map.
arXiv Detail & Related papers (2021-02-05T14:02:20Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
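The two-term sample-complexity bound quoted above can be evaluated for concrete parameter values; a quick sketch (constants and logarithmic factors omitted, and the values below are illustrative, not from the paper):

```python
def async_q_learning_samples(mu_min: float, gamma: float,
                             eps: float, t_mix: float) -> float:
    """Order of the sample-complexity bound quoted above:
        1 / (mu_min * (1 - gamma)^5 * eps^2)  +  t_mix / (mu_min * (1 - gamma)).

    mu_min -- minimum state-action occupancy probability
    gamma  -- discount factor
    eps    -- target entrywise accuracy of the Q-function
    t_mix  -- mixing time of the sample trajectory
    """
    term_accuracy = 1.0 / (mu_min * (1.0 - gamma) ** 5 * eps ** 2)
    term_mixing = t_mix / (mu_min * (1.0 - gamma))
    return term_accuracy + term_mixing

# Illustrative values: mu_min = 0.01, gamma = 0.9, eps = 0.1, t_mix = 100.
n = async_q_learning_samples(mu_min=0.01, gamma=0.9, eps=0.1, t_mix=100.0)
print(f"sample complexity on the order of {n:.3e}")
```

Note how the first term dominates for small $\varepsilon$ or $\gamma$ close to 1, reflecting the $(1-\gamma)^{-5}$ horizon dependence.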
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.