Learning the closest product state
- URL: http://arxiv.org/abs/2411.04283v3
- Date: Mon, 31 Mar 2025 23:34:13 GMT
- Title: Learning the closest product state
- Authors: Ainesh Bakshi, John Bostanci, William Kretschmer, Zeph Landau, Jerry Li, Allen Liu, Ryan O'Donnell, Ewin Tang
- Abstract summary: We study the problem of finding a (pure) product state with optimal fidelity to an unknown $n$-qubit quantum state $\rho$, given copies of $\rho$. We give an algorithm which finds a product state with fidelity $\varepsilon$-close to optimal, using $N = n^{\text{poly}(1/\varepsilon)}$ copies of $\rho$ and $\text{poly}(N)$ classical overhead.
- Score: 12.421740476704759
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of finding a (pure) product state with optimal fidelity to an unknown $n$-qubit quantum state $\rho$, given copies of $\rho$. This is a basic instance of a fundamental question in quantum learning: is it possible to efficiently learn a simple approximation to an arbitrary state? We give an algorithm which finds a product state with fidelity $\varepsilon$-close to optimal, using $N = n^{\text{poly}(1/\varepsilon)}$ copies of $\rho$ and $\text{poly}(N)$ classical overhead. We further show that estimating the optimal fidelity is NP-hard for error $\varepsilon = 1/\text{poly}(n)$, showing that the error dependence cannot be significantly improved. For our algorithm, we build a carefully-defined cover over candidate product states, qubit by qubit, and then demonstrate that extending the cover can be reduced to approximate constrained polynomial optimization. For our proof of hardness, we give a formal reduction from polynomial optimization to finding the closest product state. Together, these results demonstrate a fundamental connection between these two seemingly unrelated questions. Building on our general approach, we also develop more efficient algorithms in three simpler settings: when the optimal fidelity exceeds $5/6$; when we restrict ourselves to a discrete class of product states; and when we are allowed to output a matrix product state.
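Concretely, restating the abstract in display form (notation mine), the quantity being optimized is the best fidelity of $\rho$ with a pure product state:

```latex
% Closest-product-state objective, as described in the abstract.
\[
  \mathrm{OPT}(\rho)
  \;=\; \max_{|\psi_1\rangle, \dots, |\psi_n\rangle}
  \langle \psi_1 \otimes \cdots \otimes \psi_n |\, \rho \,| \psi_1 \otimes \cdots \otimes \psi_n \rangle ,
\]
% and the algorithm outputs a product state |phi> = |phi_1> ... |phi_n> with
\[
  \langle \phi |\, \rho \,| \phi \rangle \;\ge\; \mathrm{OPT}(\rho) - \varepsilon,
  \qquad \text{using } N = n^{\text{poly}(1/\varepsilon)} \text{ copies of } \rho .
\]
```

For the "discrete class of product states" setting mentioned at the end of the abstract, here is a minimal brute-force sketch of the objective. It assumes full classical knowledge of a small $\rho$ (the paper works only from copies of $\rho$ and scales to $n$ qubits), so it illustrates the fidelity objective, not the paper's algorithm; the particular discrete net below is my own choice.

```python
import itertools
import numpy as np

# A tiny discrete net of single-qubit pure states: |0>, |1>, |+>, |->, |i>, |-i>.
s = 1 / np.sqrt(2)
NET = [
    np.array([1, 0], dtype=complex),        # |0>
    np.array([0, 1], dtype=complex),        # |1>
    np.array([s, s], dtype=complex),        # |+>
    np.array([s, -s], dtype=complex),       # |->
    np.array([s, 1j * s], dtype=complex),   # |i>
    np.array([s, -1j * s], dtype=complex),  # |-i>
]

def closest_product_state(rho: np.ndarray, n_qubits: int):
    """Brute-force maximize <psi|rho|psi> over product states from NET^n."""
    best_fid, best_choice = -1.0, None
    for choice in itertools.product(NET, repeat=n_qubits):
        psi = choice[0]
        for q in choice[1:]:
            psi = np.kron(psi, q)                  # build the product state
        fid = np.real(np.conj(psi) @ rho @ psi)    # fidelity <psi|rho|psi>
        if fid > best_fid:
            best_fid, best_choice = fid, choice
    return best_fid, best_choice

# Example: rho = mixture of a Bell state and the maximally mixed state.
bell = np.array([s, 0, 0, s], dtype=complex)
rho = 0.5 * np.outer(bell, bell.conj()) + 0.5 * np.eye(4) / 4
fid, _ = closest_product_state(rho, n_qubits=2)
print(f"best product fidelity over the net: {fid:.4f}")
```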
Related papers
- Near-Optimal Quantum Algorithm for Minimizing the Maximal Loss [16.91814406135565]
We conduct a systematic study for quantum algorithms and lower bounds for minimizing the maximum of $N$ convex, Lipschitz functions.
We prove that quantum algorithms must take $\tilde{\Omega}(\sqrt{N}\epsilon^{-2/3})$ queries to a first-order quantum oracle.
arXiv Detail & Related papers (2024-02-20T06:23:36Z) - Certified Multi-Fidelity Zeroth-Order Optimization [4.450536872346658]
We consider the problem of multi-fidelity zeroth-order optimization, where one can evaluate a function $f$ at various approximation levels.
We propose a certified variant of the MFDOO algorithm and derive a bound on its cost complexity for any Lipschitz function $f$.
We also prove an $f$-dependent lower bound showing that this algorithm has a near-optimal cost complexity.
arXiv Detail & Related papers (2023-08-02T07:20:37Z) - Tight Bounds for Quantum Phase Estimation and Related Problems [0.90238471756546]
We show that a phase estimation algorithm with precision $\delta$ and error probability $\epsilon$ has cost $\Omega\left(\frac{1}{\delta}\log\frac{1}{\epsilon}\right)$.
We also show that having lots of advice (applications of the advice-preparing unitary) does not significantly reduce cost, and neither does knowledge of the eigenbasis of $U$.
arXiv Detail & Related papers (2023-05-08T17:46:01Z) - Reaching Goals is Hard: Settling the Sample Complexity of the Stochastic
Shortest Path [106.37656068276902]
We study the sample complexity of learning an $\epsilon$-optimal policy in the Stochastic Shortest Path (SSP) problem.
We derive complexity bounds when the learner has access to a generative model.
We show that there exists a worst-case SSP instance with $S$ states, $A$ actions, minimum cost $c_{\min}$, and maximum expected cost of the optimal policy over all states $B_\star$.
arXiv Detail & Related papers (2022-10-10T18:34:32Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Learning to Order for Inventory Systems with Lost Sales and Uncertain
Supplies [21.690446677016247]
We consider a lost-sales inventory control system with a lead time $L$ over a planning horizon $T$. Supply is uncertain, and is a function of the order quantity.
We show that our algorithm achieves a regret (i.e., the performance gap between the cost of our algorithm and that of an optimal policy over $T$ periods) of $O(L+\sqrt{T})$ when $L \geq \log(T)$.
arXiv Detail & Related papers (2022-07-10T22:11:32Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ polynomial.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Empirical Risk Minimization in the Non-interactive Local Model of
Differential Privacy [26.69391745812235]
We study the Empirical Risk Minimization (ERM) problem in the noninteractive Local Differential Privacy (LDP) model.
Previous research indicates that the sample complexity, to achieve error $\alpha$, needs to depend exponentially on the dimensionality $p$ for general loss functions.
arXiv Detail & Related papers (2020-11-11T17:48:00Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
arXiv Detail & Related papers (2020-08-12T15:40:21Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of a given matroid $M$ such that $\det\left(\sum_{i \in S} v_i v_i^\top\right)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z)
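As a toy illustration of the determinant objective in the last entry, the sketch below brute-forces over the bases of a uniform matroid (all size-$k$ subsets). The vectors, dimensions, and matroid choice are hypothetical placeholders, and the paper's algorithms are far more efficient than this exhaustive search.

```python
import itertools
import numpy as np

# Brute force for max det(sum_{i in S} v_i v_i^T), with S ranging over the
# bases of a uniform matroid: all subsets of size k.
rng = np.random.default_rng(0)
n, d, k = 6, 3, 3                  # 6 hypothetical vectors in R^3, bases of size 3
V = rng.normal(size=(n, d))

best_det, best_S = -np.inf, None
for S in itertools.combinations(range(n), k):
    A = sum(np.outer(V[i], V[i]) for i in S)   # sum of rank-one terms
    det = np.linalg.det(A)
    if det > best_det:
        best_det, best_S = det, S

print(f"best basis {best_S} with det {best_det:.4f}")
```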