Improved Lower Bounds for Learning Quantum Channels in Diamond Distance
- URL: http://arxiv.org/abs/2601.04180v3
- Date: Tue, 13 Jan 2026 18:58:46 GMT
- Title: Improved Lower Bounds for Learning Quantum Channels in Diamond Distance
- Authors: Aadil Oufkir, Filippo Girardi
- Abstract summary: We prove that learning an unknown quantum channel with input dimension $d_A$, output dimension $d_B$, and Choi rank $r$ to diamond distance $\varepsilon$ requires $\Omega\!\left( \frac{d_A d_B r}{\varepsilon \log(d_B r / \varepsilon)} \right)$ channel queries when $d_A = r d_B$, and $\Omega\!\left( \frac{d_A d_B r}{\varepsilon^2 \log(d_B r / \varepsilon)} \right)$ channel queries when $d_A \le r d_B/2$.
- Score: 2.1198879079315573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We prove that learning an unknown quantum channel with input dimension $d_A$, output dimension $d_B$, and Choi rank $r$ to diamond distance $\varepsilon$ requires $\Omega\!\left( \frac{d_A d_B r}{\varepsilon \log(d_B r / \varepsilon)} \right)$ channel queries when $d_A = r d_B$, and $\Omega\!\left( \frac{d_A d_B r}{\varepsilon^2 \log(d_B r / \varepsilon)} \right)$ channel queries when $d_A \le r d_B/2$. These lower bounds improve upon the best previous $\Omega(d_A d_B r)$ bound by introducing explicit $\varepsilon$-dependence, and they are optimal up to logarithmic factors. The proof constructs ensembles of channels that are well separated in diamond norm yet admit Stinespring isometries that are close in operator norm.
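The parameter $r$ above is the Choi rank: the rank of the channel's Choi matrix $J(\Phi) = (\Phi \otimes \mathrm{id})(|\Omega\rangle\langle\Omega|)$, equivalently the minimal number of Kraus operators. A minimal numpy sketch (not from the paper; the depolarizing-channel example and helper name are illustrative choices) showing how this quantity is computed:

```python
import numpy as np

def choi_matrix(kraus_ops, d_in):
    """Choi matrix J = sum_k (K_k x I)|Omega><Omega|(K_k x I)^dagger,
    with |Omega> = sum_i |i>|i> the unnormalized maximally entangled state."""
    d_out = kraus_ops[0].shape[0]
    omega = np.zeros(d_in * d_in, dtype=complex)
    for i in range(d_in):
        omega[i * d_in + i] = 1.0  # |i>|i>
    J = np.zeros((d_out * d_in, d_out * d_in), dtype=complex)
    for K in kraus_ops:
        v = np.kron(K, np.eye(d_in)) @ omega  # (K x I)|Omega>
        J += np.outer(v, v.conj())
    return J

# A single-qubit depolarizing channel has 4 independent Kraus operators,
# hence Choi rank r = 4; a unitary channel would have r = 1.
p = 0.5
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]

J = choi_matrix(kraus, d_in=2)
r = np.linalg.matrix_rank(J)
print(r)  # 4
```

Trace preservation shows up as $\mathrm{Tr}\,J(\Phi) = d_A$, which is a quick sanity check on the construction.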
Related papers
- Optimal lower bound for quantum channel tomography in away-from-boundary regime [32.904052887092284]
Heisenberg scaling $d^2/\varepsilon$ is achievable. In particular, this lower bound fully settles the query complexity for the commonly studied case of equal input and output dimensions.
arXiv Detail & Related papers (2026-01-15T18:45:59Z) - Quantum channel tomography and estimation by local test [32.904052887092284]
Heisenberg scaling $O(1/\varepsilon)$ can be achieved, even if $\mathcal{E}$ is not a unitary channel. We show that for parallel (possibly coherent) testers, access to dilations does not help.
arXiv Detail & Related papers (2025-12-15T18:07:42Z) - Actively Learning Halfspaces without Synthetic Data [34.777547976926456]
We design efficient algorithms for learning halfspaces without point synthesis. As a corollary, we obtain an optimal $O(d + \log n)$ query deterministic learner for axis-aligned halfspaces. Our algorithm solves the more general problem of learning a Boolean function $f$ over $n$ elements which is monotone under at least one of $D$ provided orderings.
arXiv Detail & Related papers (2025-09-25T07:39:25Z) - Near-Optimal Convergence of Accelerated Gradient Methods under Generalized and $(L_0, L_1)$-Smoothness [57.93371273485736]
We study first-order methods for convex optimization problems with functions $f$ satisfying the recently proposed $\ell$-smoothness condition $\|\nabla^2 f(x)\| \le \ell\left(\|\nabla f(x)\|\right)$, which generalizes $L$-smoothness and $(L_0,L_1)$-smoothness.
arXiv Detail & Related papers (2025-08-09T08:28:06Z) - Learning junta distributions, quantum junta states, and QAC$^0$ circuits [0.0]
We consider the problems of learning junta distributions, their quantum counterparts (quantum junta states) and $\mathsf{QAC}^0$ circuits. We show that $n$-qubit $\mathsf{QAC}^0$ circuits with size $s$, depth $d$ and $a$ auxiliary qubits can be learned from $2^{O(\log(s^2 2^a)^d)}\log(n)$ copies of the Choi state.
arXiv Detail & Related papers (2024-10-21T09:39:20Z) - Almost Minimax Optimal Best Arm Identification in Piecewise Stationary Linear Bandits [55.957560311008926]
We propose a piecewise stationary linear bandit (PSLB) model where the quality of an arm is measured by its return averaged over all contexts.
PS$\varepsilon$BAI$^+$ is guaranteed to identify an $\varepsilon$-optimal arm with probability $\ge 1-\delta$ and with a minimal number of samples.
arXiv Detail & Related papers (2024-10-10T06:15:42Z) - Almost-idempotent quantum channels and approximate $C^*$-algebras [0.03922370499388702]
We prove that any finite-dimensional $\varepsilon$-$C^*$-algebra $A$ is $O(\varepsilon)$-isomorphic to some genuine $C^*$-algebra $B$. When $A$ comes from a finite-dimensional $\eta$-idempotent UCP map $\Phi$, the $O(\eta)$-isomorphism and its inverse can be realized by UCP maps.
arXiv Detail & Related papers (2024-05-03T18:59:50Z) - Does Sparsity Help in Learning Misspecified Linear Bandits? [32.920577630673804]
We show that algorithms can obtain $O(\varepsilon)$-optimal actions by querying $O(\varepsilon^{-s} d^s)$ actions.
We also show that our upper bound on sample complexity is nearly tight if one demands an error $O(s^{\delta}\varepsilon)$ for $0<\delta<1$.
arXiv Detail & Related papers (2023-03-29T19:58:39Z) - Query-optimal estimation of unitary channels in diamond distance [3.087385668501741]
We consider process tomography for unitary quantum channels.
We output a classical description of a unitary that is $\varepsilon$-close to the unknown unitary in diamond norm.
arXiv Detail & Related papers (2023-02-27T19:00:00Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - The Approximate Degree of DNF and CNF Formulas [95.94432031144716]
For every $\delta>0$, we construct CNF and DNF formulas of polynomial size with approximate degree $\Omega(n^{1-\delta})$, essentially matching the trivial upper bound of $n$.
We show that for every $\delta>0$, these models require $\Omega(n^{1-\delta})$, $\Omega\big((n/4^k k^2)^{1-\delta}\big)$, and $\Omega\big((n/4^k k^2)^{1-\delta}\big)$, respectively.
arXiv Detail & Related papers (2022-09-04T10:01:39Z) - Low-degree learning and the metric entropy of polynomials [44.99833362998488]
We prove that any (deterministic or randomized) algorithm which learns $\mathscr{F}_{n,d}$ with $L_2$-accuracy $\varepsilon$ requires at least $\Omega\big((1-\sqrt{\varepsilon})\,2^d\log n\big)$ queries, and that $\log \mathsf{M}(\mathscr{F}_{n,d},\|\cdot\|_{L_2},\varepsilon)$ satisfies a two-sided estimate whose lower side is $c(1-\varepsilon)2^d\log n$.
arXiv Detail & Related papers (2022-03-17T23:52:08Z) - Learning low-degree functions from a logarithmic number of random queries [77.34726150561087]
We prove that for any integer $n\in\mathbb{N}$, any $d\in\{1,\ldots,n\}$ and any $\varepsilon,\delta\in(0,1)$, a bounded function $f:\{-1,1\}^n\to[-1,1]$ of degree at most $d$ can be learned.
arXiv Detail & Related papers (2021-09-21T13:19:04Z) - Nearly Horizon-Free Offline Reinforcement Learning [97.36751930393245]
We revisit offline reinforcement learning on episodic time-homogeneous Markov Decision Processes with $S$ states, $A$ actions and planning horizon $H$.
We obtain the first set of nearly $H$-free sample complexity bounds for evaluation and planning using the empirical MDPs.
arXiv Detail & Related papers (2021-03-25T18:52:17Z) - An Optimal Separation of Randomized and Quantum Query Complexity [67.19751155411075]
We prove that for every decision tree, the absolute values of the Fourier coefficients of a given order $\ell$ sum to at most $c^{\ell}\sqrt{\binom{d}{\ell}(1+\log n)^{\ell-1}}$, where $n$ is the number of variables, $d$ is the tree depth, and $c>0$ is an absolute constant.
arXiv Detail & Related papers (2020-08-24T06:50:57Z) - $Q$-learning with Logarithmic Regret [60.24952657636464]
We prove that an optimistic $Q$-learning algorithm enjoys a $\mathcal{O}\!\left(\frac{SA\cdot\mathrm{poly}(H)}{\Delta_{\min}}\log(SAT)\right)$ cumulative regret bound, where $S$ is the number of states, $A$ is the number of actions, $H$ is the planning horizon, $T$ is the total number of steps, and $\Delta_{\min}$ is the minimum sub-optimality gap.
arXiv Detail & Related papers (2020-06-16T13:01:33Z) - Tight Quantum Lower Bound for Approximate Counting with Quantum States [49.6558487240078]
We prove tight lower bounds for the following variant of the counting problem considered by Aaronson, Kothari, Kretschmer, and Thaler (2020).
The task is to distinguish whether an input set $x\subseteq[n]$ has size either $k$ or $k'=(1+\varepsilon)k$.
arXiv Detail & Related papers (2020-02-17T10:53:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.