Differential Privacy in Personalized Pricing with Nonparametric Demand Models
- URL: http://arxiv.org/abs/2109.04615v1
- Date: Fri, 10 Sep 2021 01:56:55 GMT
- Title: Differential Privacy in Personalized Pricing with Nonparametric Demand Models
- Authors: Xi Chen, Sentao Miao, Yining Wang
- Abstract summary: This paper studies a dynamic personalized pricing problem with \textit{unknown} nonparametric demand models under data privacy protection.
Two concepts of data privacy, which have been widely applied in practice, are introduced.
We develop two algorithms which make pricing decisions and learn the unknown demand on the fly, while satisfying the CDP and LDP guarantees respectively.
- Score: 15.036147440342338
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent decades, advances in information technology and the abundance of
personal data have facilitated the application of algorithmic personalized pricing.
However, this raises growing concerns about potential privacy violations due to
adversarial attacks. To address the privacy issue, this paper studies a
dynamic personalized pricing problem with \textit{unknown} nonparametric demand
models under data privacy protection. Two concepts of data privacy, which have
been widely applied in practice, are introduced: \textit{central differential
privacy (CDP)} and \textit{local differential privacy (LDP)}, the latter of which
is provably stronger than CDP in many cases. We develop two algorithms which make
pricing decisions and learn the unknown demand on the fly, while satisfying the
CDP and LDP guarantees respectively. In particular, for the algorithm with the CDP
guarantee, the regret is proved to be at most
$\tilde{O}(T^{(d+2)/(d+4)}+\varepsilon^{-1}T^{d/(d+4)})$. Here, the parameter $T$
denotes the length of the time horizon, $d$ is the dimension of the
personalized information vector, and the key parameter $\varepsilon>0$ measures
the strength of privacy (smaller $\varepsilon$ indicates stronger privacy
protection). On the other hand, for the algorithm with the LDP guarantee, its
regret is proved to be at most
$\tilde{O}(\varepsilon^{-2/(d+2)}T^{(d+1)/(d+2)})$, which is near-optimal as we prove a
lower bound of $\Omega(\varepsilon^{-2/(d+2)}T^{(d+1)/(d+2)})$ for any
algorithm with the LDP guarantee.
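The abstract does not spell out the authors' CDP/LDP pricing algorithms, so the sketch below only illustrates the local-privatization primitive an LDP pricing loop must build on: each customer's purchase outcome is perturbed with Laplace noise on the customer's side before the seller ever sees it. The function `ldp_purchase_signal`, the fixed placeholder price, and the 0.3 purchase probability are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ldp_purchase_signal(purchased: bool, epsilon: float) -> float:
    """Perturb one customer's 0/1 purchase outcome with Laplace noise
    on the customer's side; the 0/1 signal has sensitivity 1, so
    scale = 1/epsilon gives epsilon-LDP for this single report."""
    return float(purchased) + rng.laplace(scale=1.0 / epsilon)

# Toy loop: the seller only ever observes privatized signals.
signals = []
for t in range(1000):
    price = 10.0                    # placeholder pricing rule
    purchased = rng.random() < 0.3  # unknown true demand at this price
    signals.append(ldp_purchase_signal(purchased, epsilon=1.0))

# Laplace noise is zero-mean, so the average still estimates demand.
print(f"estimated purchase probability: {np.mean(signals):.3f}")
```

Because the noise is zero-mean, averages of the privatized signals remain unbiased demand estimates, which is what lets an LDP algorithm still learn the demand on the fly, at the regret cost quoted above.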
Related papers
- Differentially Private Best-Arm Identification [14.916947598339988]
Best Arm Identification (BAI) problems are increasingly used in data-sensitive applications.
Motivated by the data privacy concerns invoked by these applications, we study the problem of BAI with fixed confidence in both the local and central models.
arXiv Detail & Related papers (2024-06-10T16:02:48Z)
- Private Mean Estimation with Person-Level Differential Privacy [6.621676316292624]
We study person-level differentially private mean estimation in the case where each person holds multiple samples.
We give computationally efficient algorithms under approximate-DP and computationally inefficient algorithms under pure DP, and our nearly matching lower bounds hold for the most permissive case of approximate DP.
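The paper's estimators are not reproduced here; below is a minimal sketch of the standard clip-and-noise recipe for person-level pure-DP mean estimation, assuming replace-one-user adjacency and per-user means clipped to [0, clip]. The name `person_level_dp_mean` and all parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def person_level_dp_mean(users, clip=1.0, epsilon=1.0):
    """Pure-DP mean under person-level privacy (replace-one-user
    adjacency): each user's mean is clipped to [0, clip], so swapping
    one user shifts the average of the n clipped means by at most
    clip/n; Laplace noise with scale clip/(n*epsilon) then gives
    epsilon-DP."""
    user_means = np.clip([np.mean(u) for u in users], 0.0, clip)
    n = len(user_means)
    return float(np.mean(user_means) + rng.laplace(scale=clip / (n * epsilon)))

# 200 users, each holding 50 samples from the same distribution.
users = [rng.normal(0.4, 0.2, size=50) for _ in range(200)]
print(f"private mean estimate: {person_level_dp_mean(users):.3f}")
```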
arXiv Detail & Related papers (2024-05-30T18:20:35Z)
- Smooth Sensitivity for Geo-Privacy [17.835910182900985]
The local model of differential privacy (LDP) is the predominant model under which the problem has been studied.
Geo-Privacy (GP) stipulates that the level of distinguishability be proportional to $\mathrm{dist}(x_i, x_i')$.
We generalize the smooth sensitivity framework from Differential Privacy to Geo-Privacy, which allows us to add noise tailored to the hardness of the given instance.
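The smooth-sensitivity construction itself is not shown here; as a baseline-only sketch, the 1-D Laplace mechanism below satisfies the GP definition just quoted, since its output densities at x and x' differ by at most a factor of exp(epsilon * |x - x'|). The function name and inputs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def geo_private_value(x: float, epsilon: float) -> float:
    """1-D Geo-Privacy via the Laplace mechanism: for any inputs
    x, x', the output densities differ by a factor of at most
    exp(epsilon * |x - x'|), i.e. distinguishability grows with
    dist(x, x'), as GP stipulates."""
    return x + rng.laplace(scale=1.0 / epsilon)

print(geo_private_value(37.77, epsilon=0.5))  # e.g., a latitude-like value
```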
arXiv Detail & Related papers (2024-05-10T08:32:07Z)
- A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to shuffle the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian Differential Privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^n \frac{1-\delta_i}{1+e^{\epsilon_i}} - \max_i \frac{1-\delta_i}{1+e^{\epsilon_i}}}}$.
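As a numeric sanity check of this bound (treating the reconstruction of the truncated formula above as an assumption), a short sketch with hypothetical per-user budgets:

```python
import numpy as np

def gdp_mu(epsilons, deltas):
    """mu = sqrt(2 / (sum_i w_i - max_i w_i)), where
    w_i = (1 - delta_i) / (1 + e^{epsilon_i}), as quoted above."""
    w = (1.0 - np.asarray(deltas)) / (1.0 + np.exp(np.asarray(epsilons)))
    return float(np.sqrt(2.0 / (w.sum() - w.max())))

# 1000 users with heterogeneous local budgets and delta_i = 0.
eps = np.random.default_rng(3).uniform(0.5, 2.0, size=1000)
print(f"amplified GDP parameter: mu = {gdp_mu(eps, np.zeros_like(eps)):.4f}")
```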
arXiv Detail & Related papers (2023-12-22T02:31:46Z)
- Near-Optimal Differentially Private Reinforcement Learning [16.871660060209674]
We study online exploration in reinforcement learning with differential privacy constraints.
Existing work established that no-regret learning is possible under joint differential privacy (JDP) and local differential privacy (LDP).
We design an $\epsilon$-JDP algorithm with a regret of $\widetilde{O}(\sqrt{SAH^2T}+S^2AH^3/\epsilon)$ which matches the information-theoretic lower bound of non-private learning.
arXiv Detail & Related papers (2022-12-09T06:03:02Z)
- Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze them under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
arXiv Detail & Related papers (2022-10-24T23:50:12Z)
- Private Online Prediction from Experts: Separations and Faster Rates [74.52487417350221]
Online prediction from experts is a fundamental problem in machine learning and several works have studied this problem under privacy constraints.
We propose and analyze new algorithms for this problem that improve over the regret bounds of the best existing algorithms for non-adaptive adversaries.
arXiv Detail & Related papers (2022-10-24T18:40:19Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
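Neither algorithm's exact update is reproduced here; the sketch below shows the per-example clip-and-noise step that DP-SGD-style methods share (DP-NSGD normalizes rather than clips), with hypothetical names and parameters.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0,
                noise_mult=1.0, rng=None):
    """One DP-SGD-style step: clip each per-example gradient to
    L2 norm <= clip, average, and add Gaussian noise whose standard
    deviation is noise_mult * clip / batch_size (noise calibrated to
    the clipped sum, then divided by the batch size)."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

rng = np.random.default_rng(4)
params = np.zeros(5)
batch = [rng.normal(size=5) for _ in range(32)]  # toy per-example gradients
params = dp_sgd_step(params, batch, rng=rng)
print(params)
```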
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Locally Differentially Private Reinforcement Learning for Linear Mixture Markov Decision Processes [78.27542864367821]
Reinforcement learning (RL) algorithms can be used to provide personalized services, which rely on users' private and sensitive data.
To protect the users' privacy, privacy-preserving RL algorithms are in demand.
We propose a novel $(\varepsilon, \delta)$-LDP algorithm for learning a class of Markov decision processes (MDPs) dubbed linear mixture MDPs.
arXiv Detail & Related papers (2021-10-19T17:44:09Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)