Optimal Query Complexity of Secure Stochastic Convex Optimization
- URL: http://arxiv.org/abs/2104.01926v1
- Date: Mon, 5 Apr 2021 14:10:26 GMT
- Title: Optimal Query Complexity of Secure Stochastic Convex Optimization
- Authors: Wei Tang, Chien-Ju Ho, Yang Liu
- Abstract summary: We study the secure stochastic convex optimization problem.
A learner aims to learn the optimal point of a convex function by sequentially querying a (stochastic) gradient oracle.
Meanwhile, an adversary aims to free-ride, inferring the learner's learning outcome by observing the learner's queries.
- Score: 17.35823773611766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the secure stochastic convex optimization problem. A learner aims to
learn the optimal point of a convex function through sequentially querying a
(stochastic) gradient oracle. In the meantime, there exists an adversary who
aims to free-ride and infer the learning outcome of the learner from observing
the learner's queries. The adversary observes only the points of the queries
but not the feedback from the oracle. The goal of the learner is to optimize
the accuracy, i.e., obtaining an accurate estimate of the optimal point, while
securing her privacy, i.e., making it difficult for the adversary to infer the
optimal point. We formally quantify this tradeoff between the learner's
accuracy and privacy and characterize the lower and upper bounds on the
learner's query complexity as a function of the desired levels of accuracy and
privacy. For the analysis of lower bounds, we provide a general template based
on information-theoretic analysis and then tailor the template to several
families of problems, including stochastic convex optimization and (noisy)
binary search.
We also present a generic secure learning protocol that achieves the matching
upper bound up to logarithmic factors.
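To make the query-obfuscation setup concrete, here is a minimal Python sketch of one natural toy scheme, not the protocol analyzed in the paper: the learner runs SGD on her real iterate while also submitting decoy queries that imitate SGD trajectories toward fake optima, so an adversary who observes only the query points cannot single out the real run. All names, constants, and decoy dynamics below are illustrative assumptions.

```python
import numpy as np

def obfuscated_sgd(grad_oracle, dim, rounds, n_decoys=3, lr=0.5, seed=0):
    """Toy sketch of learner-side obfuscation (illustrative only).

    The adversary observes every point in each round's query batch but
    not the oracle feedback, so decoys that imitate SGD runs toward
    random fake optima hide the real trajectory.
    """
    rng = np.random.default_rng(seed)
    real = rng.uniform(-1.0, 1.0, dim)                   # real SGD iterate
    decoys = rng.uniform(-1.0, 1.0, (n_decoys, dim))     # decoy iterates
    fake_opts = rng.uniform(-1.0, 1.0, (n_decoys, dim))  # decoy targets
    for t in range(1, rounds + 1):
        batch = np.vstack([real, decoys])
        rng.shuffle(batch)        # the adversary sees this unordered batch
        step = lr / np.sqrt(t)
        real = real - step * grad_oracle(real)  # feedback used only here
        # decoys take noisy gradient-like steps toward their fake optima
        decoys -= step * ((decoys - fake_opts)
                          + rng.normal(0.0, 0.1, decoys.shape))
    return real

# usage: noisy gradient oracle for f(x) = ||x - x_star||^2 / 2
x_star = np.array([0.3, -0.6])
noise = np.random.default_rng(1)
oracle = lambda x: (x - x_star) + noise.normal(0.0, 0.1, x.shape)
x_hat = obfuscated_sgd(oracle, dim=2, rounds=500)
```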
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
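For intuition, the two-point (Gaussian-smoothing) gradient estimator that gradient-free methods of this kind typically build on can be sketched as follows; this is the generic textbook estimator, not the paper's accelerated method:

```python
import numpy as np

def two_point_grad(f, x, mu=1e-3, rng=None):
    """Estimate grad f(x) from two function evaluations along a random
    Gaussian direction; the basic building block of zeroth-order SGD."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
```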
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm has access only to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds [1.6385815610837167]
In this work, we introduce innovative learning-rate-free algorithms for stochastic optimization over Riemannian manifolds.
We establish high probability convergence guarantees that are optimal, up to logarithmic factors, compared to the best-known optimally tuned rate in the deterministic setting.
Our approach is validated through numerical experiments, demonstrating competitive performance against learning-rate-dependent algorithms.
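As a rough illustration of how a step size can be set without tuning, a distance-over-gradients (DoG-style) rule divides the largest distance traveled so far by the root of the accumulated squared gradient norms. The Riemannian algorithms in the paper work with geodesic quantities; the sketch below is only the assumed Euclidean analogue:

```python
import numpy as np

def dog_sgd(grad, x0, rounds, r_eps=1e-4):
    """Euclidean sketch of a learning-rate-free (DoG-style) method:
    eta_t = (max distance from x0 so far) / sqrt(sum of ||g_i||^2)."""
    x, g2_sum, r_bar = x0.astype(float).copy(), 0.0, r_eps
    for _ in range(rounds):
        g = grad(x)
        g2_sum += float(np.dot(g, g))
        x = x - (r_bar / np.sqrt(g2_sum)) * g
        r_bar = max(r_bar, float(np.linalg.norm(x - x0)))
    return x
```

Here `grad` is assumed to take and return a NumPy array; `r_eps` plays the role of the small initial movement parameter such schemes need before any distance has been observed.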
arXiv Detail & Related papers (2024-06-04T13:17:24Z)
- Smoothed Analysis of Sequential Probability Assignment [16.090378928208885]
We study information-theoretically optimal minmax rates as well as a framework for algorithmic reduction involving the maximum likelihood estimator oracle.
Our approach establishes a general-purpose reduction from minimax rates for sequential probability assignment for smoothed adversaries to minimax rates for transductive learning.
arXiv Detail & Related papers (2023-03-08T19:25:57Z)
- Smoothed Online Learning is as Easy as Statistical Learning [77.00766067963195]
We provide the first oracle-efficient, no-regret algorithms for this smoothed online learning setting.
We show that if a function class is learnable in the classical setting, then there is an oracle-efficient, no-regret algorithm for contextual bandits.
arXiv Detail & Related papers (2022-02-09T19:22:34Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
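Gradient clipping here means rescaling any stochastic gradient whose norm exceeds a threshold before taking the step, which tames heavy-tailed noise. A minimal sketch of one clipped step follows; the paper's specific stepsize rules are not reproduced:

```python
import numpy as np

def clipped_sgd_step(x, g, lr, clip_level):
    """One clipped-SGD step: shrink the gradient to norm <= clip_level
    (leaving small gradients untouched), then take a standard step."""
    norm = float(np.linalg.norm(g))
    if norm > clip_level:
        g = g * (clip_level / norm)
    return x - lr * g
```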
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries [33.53470955144013]
In this paper, we determine optimal lower bounds on the cross-entropy loss in the presence of test-time adversaries, along with the corresponding optimal classification outputs.
We also propose and provide a proof of correctness for a bespoke algorithm to compute this lower bound efficiently.
We use our lower bounds as a diagnostic tool to determine the effectiveness of current robust training methods and find a gap from optimality at larger budgets.
arXiv Detail & Related papers (2021-04-16T21:41:28Z)
- Learner-Private Online Convex Optimization [24.204781914017758]
We study how to optimally obfuscate the learner's queries in first-order online convex optimization.
Our results apply to the significantly richer family of general convex functions with full-gradient feedback.
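To see why obfuscation is needed at all: against an un-obfuscated first-order method, the tail of the query trajectory concentrates near the minimizer, so even a passive adversary who simply averages the learner's last few observed query points recovers a good estimate of the optimum. A toy sketch of this free-riding attack (illustrative, not from the paper):

```python
import numpy as np

def adversary_estimate(observed_queries, tail_frac=0.2):
    """Naive free-riding attack: average the last fraction of the
    learner's observed query points."""
    q = np.asarray(observed_queries, dtype=float)
    k = max(1, int(len(q) * tail_frac))
    return q[-k:].mean(axis=0)
```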
arXiv Detail & Related papers (2021-02-23T23:00:44Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)