Rethinking quantum smooth entropies: Tight one-shot analysis of quantum privacy amplification
- URL: http://arxiv.org/abs/2603.04493v1
- Date: Wed, 04 Mar 2026 19:00:01 GMT
- Title: Rethinking quantum smooth entropies: Tight one-shot analysis of quantum privacy amplification
- Authors: Bartosz Regula, Marco Tomamichel
- Abstract summary: We introduce an improved one-shot characterisation of randomness extraction against quantum side information (privacy amplification). Our main tool is a new class of smooth conditional entropies defined by lifting classical smooth divergences through measurements. We show an approximate optimality of our results by giving a matching one-shot converse bound up to additive logarithmic terms.
- Score: 12.891210250935146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an improved one-shot characterisation of randomness extraction against quantum side information (privacy amplification), strengthening known one-shot bounds and providing a unified derivation of the tightest known asymptotic constraints. Our main tool is a new class of smooth conditional entropies defined by lifting classical smooth divergences through measurements. For the key case of measured smooth Rényi divergence of order 2, we show that this can be alternatively understood as allowing for smoothing over not only states, but also non-positive Hermitian operators. Building on this, we establish a tightened leftover hash lemma, significantly improving over all known smooth min-entropy bounds on quantum privacy amplification and recovering the sharpest classical achievability results. We extend these methods to decoupling, the coherent analogue of randomness extraction, obtaining a corresponding improved one-shot bound. Relaxing our smooth entropy bounds leads to one-shot achievability results in terms of measured Rényi divergences, which in the asymptotic i.i.d. limit recover the state-of-the-art error exponent of [Dupuis, arXiv:2105.05342]. We show an approximate optimality of our results by giving a matching one-shot converse bound up to additive logarithmic terms. This yields an optimal second-order asymptotic expansion of privacy amplification under trace distance, establishing a significantly tighter one-shot achievability result than previously shown in [Shen et al., arXiv:2202.11590] and proving its optimality for all hash functions.
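For context, the following is a standard (Renner-style) form of the leftover hash lemma against quantum side information, stated in terms of the smooth min-entropy. This is the baseline bound that the abstract's tightened leftover hash lemma improves upon, not the paper's new inequality; the notation follows the usual conventions (two-universal hash family F mapping X to ℓ-bit strings, classical-quantum state ρ_XE, maximally mixed state π_ℓ).

```latex
% Standard leftover hash lemma with quantum side information (baseline form,
% NOT the tightened bound of this paper): for a two-universal family F of
% hash functions f : X -> {0,1}^ell applied to the classical part of a
% cq-state rho_XE, the hashed key is close to uniform and independent of E:
\[
  \frac{1}{2}\,\bigl\| \rho_{F(X)FE} - \pi_{\ell} \otimes \rho_{FE} \bigr\|_1
  \;\le\; 2\varepsilon + \frac{1}{2}\,\sqrt{2^{\,\ell - H_{\min}^{\varepsilon}(X|E)_{\rho}}} .
\]
% The measured Renyi divergence underlying the paper's new entropies is
% obtained by optimising a classical Renyi divergence over all measurements:
\[
  D_{\alpha}^{\mathbb{M}}(\rho \,\|\, \sigma)
  \;=\; \sup_{\mathcal{M}} D_{\alpha}\bigl( \mathcal{M}(\rho) \,\big\|\, \mathcal{M}(\sigma) \bigr).
\]
```

Per the abstract, the paper's contribution is to replace the smooth min-entropy in such bounds with smooth conditional entropies built from these measured divergences (notably at order α = 2), yielding sharper one-shot results.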
Related papers
- From Gradient Clipping to Normalization for Heavy Tailed SGD [19.369399536643773]
Recent empirical evidence indicates that machine learning applications involve heavy-tailed noise, which challenges the standard bounded-variance assumptions in practice. In this paper, we show that tight convergence guarantees can be achieved under heavy-tailed noise.
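To make the contrast between clipping and normalization concrete, here is a minimal NumPy sketch of the two update rules for SGD under potentially heavy-tailed gradient noise. This is a generic illustration with assumed hyperparameter names (lr, clip), not the algorithm or analysis of the cited paper.

```python
import numpy as np

def clipped_sgd_step(w, grad, lr=0.1, clip=1.0):
    """Clipped SGD: rescale the stochastic gradient only when its norm
    exceeds the threshold `clip`, limiting the impact of rare huge samples."""
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    return w - lr * grad

def normalized_sgd_step(w, grad, lr=0.1, eps=1e-12):
    """Normalized SGD: always step a fixed distance along the gradient
    direction, so a single heavy-tailed sample cannot blow up the update."""
    return w - lr * grad / (np.linalg.norm(grad) + eps)
```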
arXiv Detail & Related papers (2024-10-17T17:59:01Z)
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm is only accessible to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
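Since the algorithm only sees noisy function values, gradients must be estimated from queries. A standard building block in this setting is the two-point estimator sketched below; this is a generic zeroth-order primitive, not the specific minimax-optimal procedure of the cited paper.

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta=1e-3, rng=None):
    """Estimate the gradient of f at x from two (possibly noisy) function
    evaluations along a random unit direction u; the factor d = dim(x)
    makes the estimator unbiased for a smoothed surrogate of f."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d * (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u
```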
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Quantum speedups for stochastic optimization [18.32349609443295]
We consider the problem of minimizing a continuous function given quantum access to a stochastic gradient oracle.
We provide two new methods for the special case of minimizing a Lipschitz function.
arXiv Detail & Related papers (2023-08-03T07:39:10Z)
- Improved Quantum Algorithms for Fidelity Estimation [77.34726150561087]
We develop new and efficient quantum algorithms for fidelity estimation with provable performance guarantees.
Our algorithms use advanced quantum linear algebra techniques, such as the quantum singular value transformation.
We prove that fidelity estimation to any non-trivial constant additive accuracy is hard in general.
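For reference, the quantity being estimated is the fidelity between two density operators; one common convention is shown below (some authors define fidelity as the square root of this expression).

```latex
% Fidelity between density operators rho and sigma (squared convention):
\[
  F(\rho, \sigma)
  \;=\; \Bigl( \operatorname{Tr} \sqrt{ \sqrt{\rho}\, \sigma\, \sqrt{\rho} } \Bigr)^{2}
  \;=\; \bigl\| \sqrt{\rho}\, \sqrt{\sigma} \bigr\|_{1}^{2} .
\]
```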
arXiv Detail & Related papers (2022-03-30T02:02:16Z)
- Optimal Second-Order Rates for Quantum Soft Covering and Privacy Amplification [19.624719072006936]
We study quantum soft covering and privacy amplification against quantum side information.
For both tasks, we use trace distance to measure the closeness between the processed state and the ideal target state.
Our results extend to the moderate deviation regime, which are the optimal rates when the trace distances vanish at sub-exponential speed.
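The closeness criterion referred to here is the trace distance, which has the operational meaning of bounding the bias achievable by any measurement that tries to distinguish the two states:

```latex
% Trace distance and its variational characterisation over measurement
% operators 0 <= Lambda <= identity:
\[
  T(\rho, \sigma) \;=\; \tfrac{1}{2}\, \| \rho - \sigma \|_{1}
  \;=\; \max_{0 \,\le\, \Lambda \,\le\, \mathbb{1}} \operatorname{Tr}\bigl[ \Lambda\, (\rho - \sigma) \bigr].
\]
```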
arXiv Detail & Related papers (2022-02-23T16:02:31Z)
- Tight Exponential Analysis for Smoothing the Max-Relative Entropy and for Quantum Privacy Amplification [56.61325554836984]
The max-relative entropy together with its smoothed version is a basic tool in quantum information theory.
We derive the exact exponent for the decay of the small modification of the quantum state in smoothing the max-relative entropy based on purified distance.
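For reference, the max-relative entropy and its smoothed version (with smoothing measured in purified distance, as in the cited result) take the following standard forms; conventions for the fidelity entering the purified distance vary across the literature.

```latex
% Max-relative entropy and its purified-distance smoothing; here
% P(rho, sigma) = sqrt(1 - F(rho, sigma)) with F the (squared) fidelity:
\[
  D_{\max}(\rho \,\|\, \sigma) \;=\; \log \min \{ \lambda : \rho \le \lambda\, \sigma \},
  \qquad
  D_{\max}^{\varepsilon}(\rho \,\|\, \sigma)
  \;=\; \min_{P(\tilde{\rho},\, \rho) \,\le\, \varepsilon} D_{\max}(\tilde{\rho} \,\|\, \sigma).
\]
```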
arXiv Detail & Related papers (2021-11-01T16:35:41Z)
- High-probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails [55.561406656549686]
We consider non-convex stochastic optimization using first-order algorithms for which the gradient estimates may have heavy tails.
We show that a combination of gradient clipping, momentum, and normalized gradient descent converges to critical points in high probability with the best-known iteration complexity for smooth losses.
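The combination described above can be written as a single update rule; the sketch below is a generic illustration of the three ingredients (clipping, momentum averaging, normalized step), not the exact method or constants analysed in the cited paper.

```python
import numpy as np

def clipped_normalized_momentum_step(w, grad, m, lr=0.01, beta=0.9,
                                     clip=1.0, eps=1e-12):
    """One optimizer step: clip the stochastic gradient, fold it into a
    momentum buffer m, then move a fixed distance along the buffer."""
    norm = np.linalg.norm(grad)
    if norm > clip:                              # gradient clipping
        grad = grad * (clip / norm)
    m = beta * m + (1.0 - beta) * grad           # momentum averaging
    w = w - lr * m / (np.linalg.norm(m) + eps)   # normalized descent step
    return w, m
```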
arXiv Detail & Related papers (2021-06-28T00:17:01Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping [69.9674326582747]
We propose a new accelerated first-order method called clipped-SSTM for smooth convex optimization with heavy-tailed distributed noise in gradients.
We prove new complexity bounds that outperform state-of-the-art results in this case.
We derive the first non-trivial high-probability complexity bounds for SGD with clipping without a light-tails assumption on the noise.
arXiv Detail & Related papers (2020-05-21T17:05:27Z)