No-Regret Algorithms for Private Gaussian Process Bandit Optimization
- URL: http://arxiv.org/abs/2102.12467v1
- Date: Wed, 24 Feb 2021 18:52:24 GMT
- Title: No-Regret Algorithms for Private Gaussian Process Bandit Optimization
- Authors: Abhimanyu Dubey
- Abstract summary: We consider the ubiquitous problem of Gaussian process (GP) bandit optimization through the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
- Score: 13.660643701487002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The widespread proliferation of data-driven decision-making has ushered in a
recent interest in the design of privacy-preserving algorithms. In this paper,
we consider the ubiquitous problem of Gaussian process (GP) bandit optimization
through the lens of privacy-preserving statistics. We propose a solution for
differentially private GP bandit optimization that combines a uniform kernel
approximator with random perturbations, providing a generic framework to create
differentially-private (DP) Gaussian process bandit algorithms. For two
specific DP settings - joint and local differential privacy, we provide
algorithms based on efficient quadrature Fourier feature approximators, that
are computationally efficient and provably no-regret for popular stationary
kernel functions. Our algorithms maintain differential privacy throughout the
optimization procedure and critically do not rely explicitly on the sample path
for prediction, making the parameters straightforward to release as well.
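The abstract's key mechanism is to replace the GP with a uniform (Fourier-feature) kernel approximator so that prediction depends only on finite-dimensional statistics, which are then randomly perturbed. The sketch below illustrates that idea but is not the paper's algorithm: it uses plain random Fourier features rather than quadrature features, a GP-UCB-style acquisition, and an uncalibrated Gaussian noise scale (dp_sigma); all function and parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, omegas, biases):
    # Random Fourier features approximating an RBF kernel:
    # k(x, x') ~= phi(x) . phi(x'), with phi_i(x) = sqrt(2/m) * cos(omega_i . x + b_i).
    m = len(biases)
    return np.sqrt(2.0 / m) * np.cos(x @ omegas.T + biases)

def dp_gp_ucb(objective, domain, T=100, m=64, obs_noise=0.1,
              dp_sigma=0.1, beta=2.0, lam=1.0):
    """Illustrative jointly-DP GP-UCB sketch on a Fourier-feature surrogate.

    The GP posterior is replaced by Bayesian linear regression on m features,
    so prediction only touches the sufficient statistics (V, u), never the raw
    sample path. Gaussian perturbation of those statistics stands in for the
    paper's privacy noise; dp_sigma is a placeholder, not a calibrated scale.
    """
    d = domain.shape[1]
    omegas = rng.normal(size=(m, d))           # spectral samples for an RBF kernel
    biases = rng.uniform(0.0, 2.0 * np.pi, size=m)

    V = lam * np.eye(m)                        # Phi^T Phi + lam * I
    u = np.zeros(m)                            # Phi^T y
    history = []

    for _ in range(T):
        # Perturb the statistics before they are used for prediction.
        N = rng.normal(scale=dp_sigma, size=(m, m))
        V_priv = V + (N + N.T) / np.sqrt(2.0)  # symmetric Gaussian noise on V
        u_priv = u + rng.normal(scale=dp_sigma, size=m)

        V_inv = np.linalg.inv(V_priv + lam * np.eye(m))
        theta = V_inv @ u_priv                 # perturbed posterior mean weights

        Phi = fourier_features(domain, omegas, biases)
        mean = Phi @ theta
        var = np.einsum("ij,jk,ik->i", Phi, V_inv, Phi)
        ucb = mean + beta * np.sqrt(np.maximum(var, 0.0))

        x_t = domain[np.argmax(ucb)]           # query the most optimistic point
        y_t = objective(x_t) + rng.normal(scale=obs_noise)

        phi_t = fourier_features(x_t[None, :], omegas, biases)[0]
        V += np.outer(phi_t, phi_t)
        u += y_t * phi_t
        history.append((x_t, y_t))

    return history

if __name__ == "__main__":
    grid = np.linspace(-3.0, 3.0, 200)[:, None]          # discretized 1-D domain
    f = lambda x: float(np.exp(-x[0] ** 2) + 0.3 * np.cos(3.0 * x[0]))
    best = max(y for _, y in dp_gp_ucb(f, grid))
    print("best observed (noisy) value:", best)
```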
Related papers
- Individualized Privacy Accounting via Subsampling with Applications in Combinatorial Optimization [55.81991984375959]
In this work, we give a new technique for analyzing individualized privacy accounting via the following simple observation.
We obtain several improved algorithms for private optimization problems, including decomposable submodular maximization and set cover.
arXiv Detail & Related papers (2024-05-28T19:02:30Z)
- Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency [18.17090625880964]
We study the methodology of exploring the domain using random samples drawn from a distribution.
We show that this random exploration approach achieves the optimal error rates.
arXiv Detail & Related papers (2023-10-23T20:30:44Z)
- Dynamic Privacy Allocation for Locally Differentially Private Federated Learning with Composite Objectives [10.528569272279999]
This paper proposes a differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems.
The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error.
Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
arXiv Detail & Related papers (2023-08-02T13:30:33Z)
- Private Networked Federated Learning for Nonsmooth Objectives [7.278228169713637]
This paper develops a networked federated learning algorithm to solve nonsmooth objective functions.
We use the zero-concentrated differential privacy notion (zCDP) to guarantee the confidentiality of the participants.
We provide complete theoretical proof for the privacy guarantees and the algorithm's convergence to the exact solution.
arXiv Detail & Related papers (2023-06-24T16:13:28Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the involved data are usually sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
- Local Differential Privacy for Bayesian Optimization [12.05395706770007]
We consider a black-box optimization in the nonparametric Gaussian process setting with local differential privacy (LDP) guarantee.
Specifically, the rewards from each user are further corrupted to protect privacy and the learner only has access to the corrupted rewards to minimize the regret.
We present three almost optimal algorithms based on the GP-UCB framework and the Laplace DP mechanism; a minimal sketch of this reward-corruption setup appears after this list.
arXiv Detail & Related papers (2020-10-13T21:50:09Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
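For the locally differentially private setting surveyed in "Local Differential Privacy for Bayesian Optimization" above, the feedback model (each reward is corrupted before the learner sees it, and optimization proceeds on the corrupted rewards only) can be sketched as follows. This is a minimal illustration under stated assumptions, not the cited paper's algorithms: the clipping rule, the Laplace scale 2*reward_bound/eps, and the variance correction on the GP noise term are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(A, B, ls=0.5):
    # Squared-exponential kernel with unit signal variance.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def ldp_gp_ucb(objective, grid, T=60, eps=1.0, reward_bound=1.0,
               obs_noise=0.1, beta=2.0):
    """GP-UCB on locally privatized rewards (illustrative sketch).

    Each reward is clipped to [-reward_bound, reward_bound] and perturbed with
    Laplace noise of scale 2*reward_bound/eps, giving eps-LDP per observation;
    the learner only ever sees the corrupted rewards.
    """
    lap_scale = 2.0 * reward_bound / eps
    X = np.empty((0, grid.shape[1]))
    y = np.empty(0)
    for _ in range(T):
        if len(y) == 0:
            x_t = grid[rng.integers(len(grid))]        # no data yet: pick at random
        else:
            # Fold the Laplace noise variance (2 * lap_scale**2) into the GP noise term.
            K = rbf_kernel(X, X) + (obs_noise ** 2 + 2.0 * lap_scale ** 2) * np.eye(len(y))
            K_inv = np.linalg.inv(K)
            k_star = rbf_kernel(grid, X)
            mean = k_star @ K_inv @ y
            var = 1.0 - np.einsum("ij,jk,ik->i", k_star, K_inv, k_star)
            x_t = grid[np.argmax(mean + beta * np.sqrt(np.maximum(var, 0.0)))]
        raw = objective(x_t) + rng.normal(scale=obs_noise)
        private = np.clip(raw, -reward_bound, reward_bound) + rng.laplace(scale=lap_scale)
        X = np.vstack([X, x_t[None, :]])
        y = np.append(y, private)
    return X, y
```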
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.