Privacy Risk for anisotropic Langevin dynamics using relative entropy bounds
- URL: http://arxiv.org/abs/2302.00766v2
- Date: Tue, 11 Jul 2023 22:13:52 GMT
- Title: Privacy Risk for anisotropic Langevin dynamics using relative entropy bounds
- Authors: Anastasia Borovykh, Nikolas Kantas, Panos Parpas, Greg Pavliotis
- Abstract summary: We show how anisotropic noise can lead to better privacy-accuracy trade-offs.
- Score: 1.911678487931003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The privacy preserving properties of Langevin dynamics with additive
isotropic noise have been extensively studied. However, the isotropic noise
assumption is very restrictive: (a) when adding noise to existing learning
algorithms to preserve privacy while maintaining the best possible accuracy,
one should take into account the relative magnitude of the outputs and their
correlations; (b) popular algorithms such as stochastic gradient descent (and
their continuous-time limits) appear to possess anisotropic covariance
properties. Studying the privacy risks of the anisotropic noise case requires
general results on the relative entropy between the laws of two Stochastic
Differential Equations with different drifts and diffusion coefficients. Our
main contribution is to establish such a bound using stability estimates for
solutions to the Fokker-Planck equations via functional inequalities. Under
additional assumptions, the relative entropy bound implies an
$(\epsilon,\delta)$-differential privacy bound, or translates into bounds on
membership inference attack success, and we show how anisotropic noise can
lead to better privacy-accuracy trade-offs. Finally, the benefits of
anisotropic noise are illustrated using numerical results in quadratic loss and
neural network setups.
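To make the claim concrete, the following is a minimal numerical sketch (not the authors' code) of the trade-off on a quadratic loss: for diagonal drift and diffusion matrices, two noise allocations with equal trace give the same stationary loss, while a Girsanov-type relative entropy proxy depends on the allocation. The matrices and the direction of the data perturbation below are illustrative assumptions.

```python
import numpy as np

# Langevin dynamics d(theta) = -(A theta + g) dt + sqrt(2) Sigma^{1/2} dW_t
# on f(theta) = 0.5 theta^T A theta + g^T theta, where g encodes the data and
# differs by delta_g between neighbouring datasets. For diagonal A and Sigma,
# the stationary excess loss is tr(Sigma) / 2, while a Girsanov-type relative
# entropy bound scales like delta_g^T Sigma^{-1} delta_g.

A = np.diag([10.0, 0.1])        # illustrative ill-conditioned quadratic loss
delta_g = np.array([0.0, 1.0])  # datasets differ along the flat direction

for name, sigma_diag in [("isotropic", [0.5, 0.5]), ("anisotropic", [0.1, 0.9])]:
    Sigma = np.diag(sigma_diag)                 # equal trace in both cases
    accuracy_proxy = np.trace(Sigma) / 2        # stationary excess loss
    privacy_proxy = delta_g @ np.linalg.inv(Sigma) @ delta_g
    print(f"{name}: loss proxy {accuracy_proxy:.2f}, KL-rate proxy {privacy_proxy:.2f}")

# Both allocations give loss proxy 0.50, but placing more noise along the
# direction in which the data enters lowers the KL-rate proxy (1.11 vs 2.00),
# i.e. less privacy risk at the same accuracy.
```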
Related papers
- Federated Nonparametric Hypothesis Testing with Differential Privacy Constraints: Optimal Rates and Adaptive Tests [5.3595271893779906]
Federated learning has attracted significant recent attention due to its applicability across a wide range of settings where data is collected and analyzed across disparate locations.
We study federated nonparametric goodness-of-fit testing in the white-noise-with-drift model under distributed differential privacy (DP) constraints.
arXiv Detail & Related papers (2024-06-10T19:25:19Z)
- Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of their applicability to problems similar to real-world ones and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z)
- Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning [86.08285033925597]
This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of federated learning.
We derive an online refinement of the noise amplitude series to prevent FL from converging prematurely because of excessive perturbation noise.
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated in comparison with the state-of-the-art Gaussian noise mechanism, which uses a persistent noise amplitude; a minimal sketch of such a schedule follows this entry.
arXiv Detail & Related papers (2023-03-07T22:52:40Z)
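For the amplitude-varying entry above, here is a minimal sketch of a time-varying Gaussian DP perturbation; the exponential decay schedule below is an illustrative assumption, not the series derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(update, round_idx, sigma0=1.0, decay=0.05, clip_norm=1.0):
    # Clip the update to bound its sensitivity, then add Gaussian noise
    # whose amplitude decays over rounds (assumed schedule, for illustration).
    scale = min(1.0, clip_norm / max(np.linalg.norm(update), 1e-12))
    sigma = sigma0 * np.exp(-decay * round_idx)
    return update * scale + sigma * rng.standard_normal(update.shape)

update = np.ones(4)
print(perturb(update, round_idx=0))    # heavy perturbation early in training
print(perturb(update, round_idx=100))  # much lighter perturbation late on
```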
- Optimal scheduling of entropy regulariser for continuous-time linear-quadratic reinforcement learning [9.779769486156631]
Here the agent interacts with the environment by generating noisy controls distributed according to the optimal relaxed policy.
This exploration-exploitation trade-off is determined by the strength of entropy regularisation.
We prove that the regret, for both learning algorithms, is of order $\mathcal{O}(\sqrt{N})$ (up to a logarithmic factor) over $N$ episodes, matching the best known result from the literature.
arXiv Detail & Related papers (2022-08-08T23:36:40Z)
- High-Order Qubit Dephasing at Sweet Spots by Non-Gaussian Fluctuators: Symmetry Breaking and Floquet Protection [55.41644538483948]
We study the qubit dephasing caused by the non-Gaussian fluctuators.
We predict a symmetry-breaking effect that is unique to the non-Gaussian noise.
arXiv Detail & Related papers (2022-06-06T18:02:38Z)
- Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise [64.85879194013407]
We prove the first high-probability results with logarithmic dependence on the confidence level for methods that solve monotone and structured non-monotone variational inequality problems (VIPs).
Our results match the best-known ones in the light-tails case and are novel for structured non-monotone problems.
In addition, we numerically validate that the gradient noise of many practical formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.
arXiv Detail & Related papers (2022-06-02T15:21:55Z)
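For the clipped-methods entry above, a minimal sketch of the clipping operator and of clipped stochastic extragradient (SEG) on a bilinear saddle problem; the problem, step size, and clipping level are illustrative assumptions.

```python
import numpy as np

# Clipped SEG on the monotone VIP induced by f(x, y) = x * y, with
# heavy-tailed (Student-t, df=2, infinite-variance) gradient noise.

rng = np.random.default_rng(4)

def clip(g, lam):
    # clip(g, lam) = min(1, lam / ||g||) * g
    norm = np.linalg.norm(g)
    return g if norm <= lam else g * (lam / norm)

def field(z):
    # Operator F(x, y) = (grad_x f, -grad_y f) = (y, -x); solution is (0, 0).
    return np.array([z[1], -z[0]])

z, step, lam = np.array([1.0, 1.0]), 0.05, 1.0
for _ in range(5000):
    z_half = z - step * clip(field(z) + rng.standard_t(2, size=2), lam)   # extrapolation
    z = z - step * clip(field(z_half) + rng.standard_t(2, size=2), lam)   # SEG update
print(z)  # hovers near the solution (0, 0); without clipping, rare huge
          # noise draws can throw the iterates far away
```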
- Frequency estimation under non-Markovian spatially correlated quantum noise: Restoring superclassical precision scaling [0.0]
We study the Ramsey estimation precision attainable by entanglement-enhanced interferometry in the presence of correlated non-classical noise.
In a paradigmatic case of spin-boson dephasing noise from a thermal environment, we find that it is possible to suppress, on average, the effect of correlations by randomizing the location of probes.
arXiv Detail & Related papers (2022-04-22T16:25:16Z)
- Analyzing and Improving the Optimization Landscape of Noise-Contrastive Estimation [50.85788484752612]
Noise-contrastive estimation (NCE) is a statistically consistent method for learning unnormalized probabilistic models.
It has been empirically observed that the choice of the noise distribution is crucial for NCE's performance.
In this work, we formally pinpoint reasons for NCE's poor performance when an inappropriate noise distribution is used.
arXiv Detail & Related papers (2021-10-21T16:57:45Z)
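For the NCE entry above, a minimal sketch of the binary NCE objective with 1-D Gaussians; a full NCE treatment would also learn the model's normalising constant, which this toy normalised model omits.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def nce_loss(mu, sigma, data, noise, noise_mu=0.0, noise_sigma=4.0):
    # Logits of "this sample came from the data" via the model/noise log-ratio.
    logit_d = log_gauss(data, mu, sigma) - log_gauss(data, noise_mu, noise_sigma)
    logit_n = log_gauss(noise, mu, sigma) - log_gauss(noise, noise_mu, noise_sigma)
    # Logistic loss: data classified as 1, noise as 0.
    return np.logaddexp(0.0, -logit_d).mean() + np.logaddexp(0.0, logit_n).mean()

data = rng.normal(2.0, 1.0, size=1000)
noise = rng.normal(0.0, 4.0, size=1000)   # broad noise overlapping the data
print(nce_loss(2.0, 1.0, data, noise))
# A badly matched noise distribution (e.g. noise_sigma=0.05) saturates the
# logits and flattens the objective, the failure mode the paper analyzes.
```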
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- On the Role of Entropy-based Loss for Learning Causal Structures with Continuous Optimization [27.613220411996025]
A method with a non-combinatorial directed acyclic constraint, called NOTEARS, formulates the causal structure learning problem as a continuous optimization problem using a least-squares loss.
We show that the violation of the Gaussian noise assumption will hinder the causal direction identification.
We propose a more general entropy-based loss that is theoretically consistent with the likelihood score under any noise distribution.
arXiv Detail & Related papers (2021-06-05T08:29:51Z)
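For the causal-structure entry above, a minimal sketch of the NOTEARS-style continuous objective: a least-squares score plus the smooth acyclicity penalty h(W) = tr(exp(W ∘ W)) − d; the l1 and penalty weights are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def notears_objective(W, X, lam=0.1, rho=1.0):
    # Least-squares score of the linear SEM X ≈ X W, plus l1 sparsity and a
    # quadratic penalty on the acyclicity measure h(W) = tr(exp(W ∘ W)) - d,
    # which vanishes exactly when W encodes a DAG.
    n, d = X.shape
    loss = 0.5 / n * np.sum((X - X @ W) ** 2)
    h = np.trace(expm(W * W)) - d          # W * W is the elementwise square
    return loss + lam * np.abs(W).sum() + 0.5 * rho * h ** 2

# Under non-Gaussian noise this least-squares score can misidentify edge
# directions, which is what motivates the paper's entropy-based loss.
```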
- Efficient choice of coloured noises in stochastic dynamics of open quantum systems [0.0]
The Liouville-von Neumann equation describes the dynamics of a reduced density matrix coupled to a non-Markovian harmonic environment.
We present a number of schemes capable of generating coloured noises of this kind, built on a noise amplitude reduction procedure.
We identify the scheme which performs best for the parameters used, improving convergence by orders of magnitude and increasing the time accessible by simulation.
arXiv Detail & Related papers (2020-06-02T18:23:34Z)
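For the coloured-noise entry above, a minimal sketch of generating stationary coloured Gaussian noise with an exponential correlation function via the exact Ornstein-Uhlenbeck update; the schemes in the paper target more general correlation functions and are built on a noise amplitude reduction procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def coloured_noise(n_steps, dt, tau):
    # Stationary unit-variance Gaussian noise with <z_j z_k> = exp(-|j-k| dt / tau).
    a = np.exp(-dt / tau)
    z = np.empty(n_steps)
    z[0] = rng.standard_normal()
    for k in range(1, n_steps):
        z[k] = a * z[k - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()
    return z

z = coloured_noise(10_000, dt=0.01, tau=0.5)
print(z.var())                             # close to 1.0
print(np.corrcoef(z[:-50], z[50:])[0, 1])  # close to exp(-50*0.01/0.5) ≈ 0.37
```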