Learner-Private Online Convex Optimization
- URL: http://arxiv.org/abs/2102.11976v1
- Date: Tue, 23 Feb 2021 23:00:44 GMT
- Title: Learner-Private Online Convex Optimization
- Authors: Jiaming Xu, Kuang Xu and Dana Yang
- Abstract summary: We study how to optimally obfuscate the learner's queries in first-order online convex optimization.
Our results apply to the significantly richer family of general convex functions with full-gradient feedback.
- Score: 24.204781914017758
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online convex optimization is a framework where a learner sequentially
queries an external data source in order to arrive at the optimal solution of a
convex function. The paradigm has gained significant popularity recently thanks
to its scalability in large-scale optimization and machine learning. The
repeated interactions, however, expose the learner to privacy risks from
an eavesdropping adversary that observes the submitted queries. In this paper, we
study how to optimally obfuscate the learner's queries in first-order online
convex optimization so that the learned optimal value is provably difficult for
the eavesdropping adversary to estimate. We consider two formulations of
learner privacy: a Bayesian formulation in which the convex function is drawn
randomly, and a minimax formulation in which the function is fixed and the
adversary's probability of error is measured with respect to a minimax
criterion. We show that if the learner wants to keep the adversary's probability
of an accurate prediction below $1/L$, then the overhead in
query complexity is additive in $L$ in the minimax formulation, but
multiplicative in $L$ in the Bayesian formulation. Compared to existing
learner-private sequential learning models with binary feedback, our results
apply to the significantly richer family of general convex functions with
full-gradient feedback. Our proofs are largely enabled by tools from the theory
of Dirichlet processes, as well as more sophisticated lines of analysis aimed
at measuring the amount of information leakage under a full-gradient oracle.
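To make the query-obfuscation setting concrete, the following is a minimal, hypothetical Python sketch, not the scheme analyzed in the paper: the learner runs one genuine gradient-descent trajectory and $L-1$ decoy trajectories that descend toward randomly drawn fake optima, so an eavesdropper observing only the interleaved query stream must guess which trajectory is real. This naive strategy multiplies the number of queries by $L$, i.e., the multiplicative overhead the abstract attributes to the Bayesian formulation; the minimax result states that an additive overhead in $L$ suffices there. All function and parameter names below are illustrative.

```python
# Hypothetical illustration of query obfuscation in first-order OCO -- NOT the
# paper's scheme. One real gradient-descent trajectory is hidden among L - 1
# decoy trajectories that descend toward randomly drawn fake optima.

import random


def obfuscated_gradient_descent(grad_oracle, step_size=0.1, num_rounds=50, L=5,
                                domain=(-10.0, 10.0)):
    """1-D gradient descent whose true queries are hidden among L - 1 decoy trajectories.

    grad_oracle: returns the gradient of the convex objective at the queried point.
    L: privacy level; the adversary's chance of picking the real trajectory is roughly 1/L.
    Returns the final true iterate and the query log visible to an eavesdropper.
    """
    # Index 0 is the genuine trajectory; the others descend toward fake, random optima.
    points = [random.uniform(*domain) for _ in range(L)]
    fake_optima = [random.uniform(*domain) for _ in range(L)]
    query_log = []

    for _ in range(num_rounds):
        order = list(range(L))
        random.shuffle(order)                 # hide which query within a round is real
        for i in order:
            query_log.append(points[i])       # every query is visible to the adversary
            if i == 0:
                g = grad_oracle(points[i])    # only the real oracle response is used
            else:
                # The oracle's response at a decoy point is discarded; the decoy
                # trajectory follows a simulated gradient toward its fake optimum.
                g = 2.0 * (points[i] - fake_optima[i])
            points[i] -= step_size * g

    return points[0], query_log


if __name__ == "__main__":
    # Example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
    x_hat, log = obfuscated_gradient_descent(lambda x: 2.0 * (x - 3.0))
    print(f"estimated minimizer: {x_hat:.3f} (true minimizer: 3.0)")
    print(f"queries issued: {len(log)} vs. 50 without obfuscation")  # L-fold overhead
```

In practice the decoy trajectories would have to be statistically indistinguishable from the genuine one under the adversary's observation model; how to obfuscate optimally under full-gradient feedback, and at what query cost, is what the paper studies.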
Related papers
- Forecasting Outside the Box: Application-Driven Optimal Pointwise Forecasts for Stochastic Optimization [0.0]
We present an integrated learning and optimization procedure that yields the best approximation of an unknown situation.
Numerical results on inventory problems from the literature as well as a bike-sharing problem with real data demonstrate that the proposed approach performs well.
arXiv Detail & Related papers (2024-11-05T21:54:50Z) - Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm only has access to noisy evaluations of the objective function it queries (a generic two-point estimator for this query model is sketched after this list).
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z) - LP++: A Surprisingly Strong Linear Probe for Few-Shot CLIP [20.86307407685542]
Linear Probe (LP) has often been reported as a weak baseline for few-shot CLIP adaptation.
In this work, we examine from convex-optimization perspectives a generalization of the standard LP baseline.
Our image-language objective function, along with these non-trivial optimization insights and ingredients, yields, surprisingly, highly competitive few-shot CLIP performances.
arXiv Detail & Related papers (2024-04-02T20:23:10Z) - Non-Convex Robust Hypothesis Testing using Sinkhorn Uncertainty Sets [18.46110328123008]
We present a new framework to address the non-convex robust hypothesis testing problem.
The goal is to seek the optimal detector that minimizes the worst-case risk over Sinkhorn-based uncertainty sets.
arXiv Detail & Related papers (2024-03-21T20:29:43Z) - Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
arXiv Detail & Related papers (2023-12-08T16:06:29Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Differential Privacy for Pairwise Learning: Non-convex Analysis [16.815470316398326]
Pairwise learning focuses on learning relationships with pairwise loss functions.
We analyze the privacy of pairwise learning and propose a new differential privacy paradigm for perturbations.
arXiv Detail & Related papers (2021-05-07T02:20:23Z) - Optimal Query Complexity of Secure Stochastic Convex Optimization [17.35823773611766]
We study the secure stochastic convex optimization problem.
A learner aims to learn the optimal point of a convex function by sequentially querying a (stochastic) gradient oracle.
In the meantime, there exists an adversary who aims to free-ride and infer the learner's learning outcome from observing the learner's queries.
arXiv Detail & Related papers (2021-04-05T14:10:26Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Provably Efficient Reward-Agnostic Navigation with Linear Value
Iteration [143.43658264904863]
We show how value iteration under a more standard notion of low inherent Bellman error, typically employed in least-squares value iteration-style algorithms, can provide strong PAC guarantees on learning a near-optimal value function.
We present a computationally tractable algorithm for the reward-free setting and show how it can be used to learn a near optimal policy for any (linear) reward function.
arXiv Detail & Related papers (2020-08-18T04:34:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.