Privacy amplification and decoupling without smoothing
- URL: http://arxiv.org/abs/2105.05342v2
- Date: Sun, 9 Jan 2022 23:27:59 GMT
- Title: Privacy amplification and decoupling without smoothing
- Authors: Frédéric Dupuis
- Abstract summary: We prove an achievability result for privacy amplification and decoupling in terms of the sandwiched Rényi entropy of order $\alpha \in (1,2]$.
The fact that this proof works for $\alpha$ close to 1 means that we can bypass the smooth min-entropy in the many applications where the bound comes from the fully quantum AEP or entropy accumulation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We prove an achievability result for privacy amplification and decoupling in
terms of the sandwiched Rényi entropy of order $\alpha \in (1,2]$; this
extends previous results which worked for $\alpha=2$. The fact that this proof
works for $\alpha$ close to 1 means that we can bypass the smooth min-entropy
in the many applications where the bound comes from the fully quantum AEP or
entropy accumulation, and carry out the whole proof using the Rényi entropy,
thereby easily obtaining an error exponent for the final task. This effectively
replaces smoothing, which is a difficult high-dimensional optimization problem,
by an optimization problem over a single real parameter $\alpha$.
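To make that last point concrete, here is a minimal Python sketch of optimizing such a bound over the single parameter $\alpha \in (1,2]$. The exponent form, the i.i.d. classical source, and all constants are illustrative assumptions for intuition, not the paper's actual theorem.

```python
# A hedged sketch: trade the high-dimensional smoothing optimization for a
# 1-D search over alpha in (1, 2], as the abstract describes. The bound form
# (alpha-1)/alpha * (n*H_alpha(X) - out_bits) is schematic, and the i.i.d.
# classical source is a toy stand-in for the quantum side information.
import numpy as np
from scipy.optimize import minimize_scalar

def renyi_entropy(p, alpha):
    """Renyi entropy (in bits) of order alpha > 1 of a distribution p."""
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def error_exponent(p, n, out_bits):
    """Optimize the schematic exponent over the single parameter alpha."""
    def neg_exponent(alpha):
        return -((alpha - 1.0) / alpha) * (n * renyi_entropy(p, alpha) - out_bits)
    res = minimize_scalar(neg_exponent, bounds=(1.0 + 1e-6, 2.0), method="bounded")
    return -res.fun, res.x

p = np.array([0.9, 0.1])                       # toy i.i.d. source distribution
exponent, best_alpha = error_exponent(p, n=1000, out_bits=300)
print(f"exponent ~ {exponent:.1f} bits at alpha = {best_alpha:.3f}")
```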
Related papers
- Stochastic Momentum Methods for Non-smooth Non-Convex Finite-Sum Coupled Compositional Optimization [64.99236464953032]
We propose stochastic momentum methods that achieve a new state-of-the-art complexity of $O(1/\epsilon)$ for finding a (nearly) $\epsilon$-level KKT solution.
arXiv Detail & Related papers (2025-06-03T06:31:59Z) - Sign Operator for Coping with Heavy-Tailed Noise: High Probability Convergence Bounds with Extensions to Distributed Optimization and Comparison Oracle [77.3806516979843]
We show that SignSGD achieves the optimal sample complexity $\tilde{O}\left(\varepsilon^{-\frac{3\kappa-2}{\kappa-1}}\right)$ with high probability.
We also explore the application of the sign operator in zeroth-order optimization with an oracle that can only compare function values at two different points.
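As an illustration of the sign operator the summary refers to, here is a minimal SignSGD sketch on a toy heavy-tailed problem; the objective, step size, and noise model are assumptions for demonstration, not the paper's setup.

```python
# Minimal SignSGD sketch: the update uses only the coordinate-wise sign of a
# stochastic gradient, which bounds the step size and copes with heavy tails.
import numpy as np

def sign_sgd(grad_fn, x0, lr=0.01, steps=500):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * np.sign(grad_fn(x))          # gradient magnitude is discarded
    return x

rng = np.random.default_rng(0)
# Stochastic gradient of f(x) = ||x||^2 / 2 with heavy-tailed (Student-t) noise.
grad = lambda x: x + rng.standard_t(df=2, size=x.shape)
print(np.linalg.norm(sign_sgd(grad, np.ones(10))))    # should be small
```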
arXiv Detail & Related papers (2025-02-11T19:54:11Z) - A stochastic first-order method with multi-extrapolated momentum for highly smooth unconstrained optimization [3.8919212824749296]
We show that the proposed SFOM can accelerate optimization by exploiting the high-order smoothness of the objective function $f$.
To the best of our knowledge, this is the first SFOM to leverage arbitrary-order smoothness of the objective function for acceleration.
arXiv Detail & Related papers (2024-12-19T03:22:47Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space for modeling functions computed by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization [37.300102993926046]
We study the complexity of producing approximately stationary points of Lipschitz objectives that are neither smooth nor convex, using only zero-order evaluations.
Our analysis is based on the simple yet powerful Goldstein subdifferential, which underlies recent advancements in nonsmooth nonconvex optimization; a sketch of a typical zero-order primitive follows.
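Here is a hedged sketch of such a primitive: a two-point estimator for the gradient of the uniformly smoothed surrogate $f_\delta(x) = \mathbb{E}_u[f(x + \delta u)]$, the kind of oracle that Goldstein-subdifferential-based methods consume. Function names and parameters are illustrative.

```python
# Two-point zero-order gradient estimator for the smoothed surrogate
# f_delta(x) = E_u[f(x + delta*u)], with u uniform on the unit sphere.
import numpy as np

def zo_gradient(f, x, delta=0.1, rng=np.random.default_rng(0)):
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                     # uniform direction on the sphere
    # Unbiased for grad f_delta(x); needs only two function evaluations.
    return (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

f = lambda x: np.abs(x).sum()                  # nonsmooth test objective
print(zo_gradient(f, np.ones(5)))
```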
arXiv Detail & Related papers (2023-07-10T11:56:04Z) - Achieving the Asymptotically Optimal Sample Complexity of Offline Reinforcement Learning: A DRO-Based Approach [36.88301225561535]
Offline reinforcement learning aims to learn from pre-collected datasets without active exploration.
Existing approaches adopt a pessimistic stance towards uncertainty by penalizing rewards of under-explored state-action pairs to estimate value functions conservatively.
We show that the distributionally robust optimization (DRO) based approach can also address these challenges and is asymptotically minimax optimal.
arXiv Detail & Related papers (2023-05-22T17:50:18Z) - Deterministic Nonsmooth Nonconvex Optimization [94.01526844386977]
We show that randomization is necessary to obtain a dimension-free algorithm.
Our algorithm yields the first deterministic dimension-free algorithm for optimizing ReLU networks.
arXiv Detail & Related papers (2023-02-16T13:57:19Z) - Near-Optimal Non-Convex Stochastic Optimization under Generalized
Smoothness [21.865728815935665]
Two recent works established the $O(\epsilon^{-3})$ sample complexity to obtain an $\epsilon$-stationary point.
However, both require a large batch size on the order of $\mathrm{poly}(\epsilon^{-1})$, which is not only computationally burdensome but also unsuitable for streaming applications.
In this work, we solve the prior two problems simultaneously by revisiting a simple variant of the STORM algorithm.
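For reference, a minimal sketch of the STORM recursive momentum estimator the summary revisits; the toy objective, constants, and single-sample noise model are illustrative assumptions.

```python
# STORM-style recursion: d_t = g(x_t; xi_t) + (1-a) * (d_{t-1} - g(x_{t-1}; xi_t)).
# One fresh sample per step is evaluated at both iterates (no large batches).
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x, xi):
    """Toy stochastic gradient of f(x) = ||x||^2 / 2 with noise sample xi."""
    return x + xi

def storm(x0, steps=1000, a=0.1, lr=0.05):
    x_prev = x0.copy()
    d = stochastic_grad(x0, rng.standard_normal(x0.shape))
    x = x0 - lr * d
    for _ in range(steps):
        xi = rng.standard_normal(x.shape)      # shared sample for both iterates
        d = stochastic_grad(x, xi) + (1 - a) * (d - stochastic_grad(x_prev, xi))
        x_prev, x = x, x - lr * d
    return x

print(np.linalg.norm(storm(np.ones(10))))      # should shrink toward 0
```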
arXiv Detail & Related papers (2023-02-13T00:22:28Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, i.e., minimizing a twice continuously differentiable function subject to conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Private optimization in the interpolation regime: faster rates and
hardness results [9.405458160620533]
In non-private convex optimization, gradient methods converge much faster on interpolation problems -- problems where there exists a solution that simultaneously minimizes all of the sample losses -- than on non-interpolating ones.
We show (near) exponential improvements in the private sample complexity.
arXiv Detail & Related papers (2022-10-31T05:18:27Z) - Nonsmooth Nonconvex-Nonconcave Minimax Optimization: Primal-Dual
Balancing and Iteration Complexity Analysis [28.575516056239575]
We introduce a novel analysis for PLDA; the key ingredients are our newly developed nonsmooth primal and dual error bounds.
When $\theta \in [0, 1/2]$, PLDA achieves the optimal $\mathcal{O}(\cdot)$ iteration complexity under mild assumptions.
arXiv Detail & Related papers (2022-09-22T07:12:48Z) - Oracle Complexity in Nonsmooth Nonconvex Optimization [49.088972349825085]
It is well known that, given a smooth, bounded-from-below function, oracle-based methods can find approximate stationary points.
In this paper, we prove an inherent trade-off between optimization guarantees and dimension dependence when nonsmooth objectives are handled via smoothing.
arXiv Detail & Related papers (2021-04-14T10:42:45Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy
Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
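A minimal sketch of the shuffle model the result concerns: each user applies local randomized response, then a shuffler discards the link between users and reports. Parameters here are toy choices; the amplification analysis itself is the paper's contribution.

```python
# Shuffle-model pipeline sketch: local randomized response, then a shuffle.
import numpy as np

def randomized_response(bit, eps, rng):
    """Local DP: keep the true bit with probability e^eps / (e^eps + 1)."""
    keep = rng.random() < np.exp(eps) / (np.exp(eps) + 1)
    return bit if keep else 1 - bit

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=1000)           # one private bit per user
reports = np.array([randomized_response(b, eps=1.0, rng=rng) for b in data])
rng.shuffle(reports)                           # shuffler hides who sent what

# Debiased frequency estimate from the shuffled reports.
q = np.exp(1.0) / (np.exp(1.0) + 1)
print(round((reports.mean() - (1 - q)) / (2 * q - 1), 3))
```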
arXiv Detail & Related papers (2020-12-23T17:07:26Z) - Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness [151.67113334248464]
We show that extending the smoothing technique to defend against other attack models can be challenging.
We present experimental results on CIFAR to validate our theory.
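For context, a schematic of the randomized-smoothing prediction rule the paper studies: classify many Gaussian perturbations of the input and take a majority vote. The base classifier below is a toy stand-in, and no certification radius is computed.

```python
# Randomized-smoothing prediction: majority vote over Gaussian perturbations.
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000,
                     rng=np.random.default_rng(0)):
    noisy = x + sigma * rng.standard_normal((n, x.size))
    votes = np.array([base_classifier(z) for z in noisy])
    return np.bincount(votes).argmax()         # most frequent class wins

base = lambda z: int(z.sum() > 0)              # toy two-class base classifier
print(smoothed_predict(base, np.ones(4)))      # -> 1 for this input
```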
arXiv Detail & Related papers (2020-02-08T22:02:14Z)