Differential Privacy for Pairwise Learning: Non-convex Analysis
- URL: http://arxiv.org/abs/2105.03033v1
- Date: Fri, 7 May 2021 02:20:23 GMT
- Title: Differential Privacy for Pairwise Learning: Non-convex Analysis
- Authors: Yilin Kang, Yong Liu, Jian Li, Weiping Wang
- Abstract summary: Pairwise learning focuses on learning tasks with pairwise loss functions, which model relationships between pairs of samples.
We analyze the privacy of pairwise learning and propose a new differential privacy paradigm based on gradient perturbation.
- Score: 16.815470316398326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pairwise learning focuses on learning tasks with pairwise loss functions,
which depend on pairs of training instances and are naturally suited to modeling
relationships between pairs of samples. In this paper, we focus on the privacy
of pairwise learning and propose a new differential privacy paradigm for
pairwise learning, based on gradient perturbation. We analyze the privacy
guarantees from two points of view: the $\ell_2$-sensitivity and the moments
accountant method. We further analyze the generalization error, the excess
empirical risk, and the excess population risk of our proposed method and give
corresponding bounds. By introducing algorithmic stability theory to pairwise
differential privacy, our theoretical analysis does not require convex pairwise
loss functions, so our method applies to both convex and non-convex settings.
Under these conditions, our utility bounds improve on previous bounds obtained
under convexity or strong convexity assumptions, which is an attractive result.
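To make the abstract's recipe concrete, here is a minimal, self-contained sketch of gradient perturbation for a pairwise objective. It is an illustration under stated assumptions, not the authors' algorithm: the pairwise logistic loss, the function names (`pairwise_logistic_grad`, `dp_pairwise_gd`), and the crude Gaussian-mechanism calibration with naive $\sqrt{T}$ composition are all illustrative choices. The sketch also treats each pair as the privacy unit, whereas the paper protects individual training instances, each of which appears in many pairs; handling that is exactly what makes the $\ell_2$-sensitivity and moments-accountant analysis non-trivial.

```python
import numpy as np

def pairwise_logistic_grad(w, xi, xj, yij):
    """Gradient of the pairwise logistic loss log(1 + exp(-yij * w.(xi - xj)))
    for one pair; yij = +1 if xi should score above xj, else -1."""
    d = xi - xj
    margin = yij * np.dot(w, d)
    return -yij * d / (1.0 + np.exp(margin))

def dp_pairwise_gd(X, pairs, y_pair, epsilon, delta,
                   steps=20, lr=0.5, clip=1.0, seed=0):
    """Gradient-perturbation sketch: clip each per-pair gradient to norm
    `clip` (bounding the l2-sensitivity of the averaged gradient), then add
    Gaussian noise calibrated to that bound before every update."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    # Naive sqrt(steps) composition of the Gaussian mechanism; the paper
    # instead tracks cumulative privacy loss with the moments accountant,
    # which is substantially tighter.
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) * np.sqrt(steps) / epsilon
    for _ in range(steps):
        grads = []
        for (i, j), yij in zip(pairs, y_pair):
            g = pairwise_logistic_grad(w, X[i], X[j], yij)
            g = g / max(1.0, np.linalg.norm(g) / clip)  # bound each pair's contribution
            grads.append(g)
        noisy = np.mean(grads, axis=0) + rng.normal(0.0, sigma / len(pairs), size=w.shape)
        w -= lr * noisy
    return w

# Toy usage: privately learn to rank 40 random points by their first coordinate.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
pairs = [(i, j) for i in range(40) for j in range(i + 1, 40)]
y_pair = [1.0 if X[i, 0] > X[j, 0] else -1.0 for (i, j) in pairs]
w_priv = dp_pairwise_gd(X, pairs, y_pair, epsilon=1.0, delta=1e-5)
```

The two moving parts mirror the abstract: clipping controls the $\ell_2$-sensitivity of the released gradient, and Gaussian noise scaled to that sensitivity is what turns each update into a differentially private release.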
Related papers
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Fine-grained analysis of non-parametric estimation for pairwise learning [9.676007573960383]
We are concerned with the generalization performance of non-parametric estimation for pairwise learning.
Our results can be used to handle a wide range of pairwise learning problems including ranking, AUC, pairwise regression and metric and similarity learning.
arXiv Detail & Related papers (2023-05-31T08:13:14Z) - Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z) - Efficient Performance Bounds for Primal-Dual Reinforcement Learning from
Demonstrations [1.0609815608017066]
We consider large-scale Markov decision processes with an unknown cost function and address the problem of learning a policy from a finite set of expert demonstrations.
Existing inverse reinforcement learning methods come with strong theoretical guarantees, but are computationally expensive.
We introduce a novel bilinear saddle-point framework using Lagrangian duality to bridge the gap between theory and practice.
arXiv Detail & Related papers (2021-12-28T05:47:24Z) - Learning Rates for Nonconvex Pairwise Learning [7.33244617309908]
Pairwise learning is receiving increasing attention since it covers many important learning tasks.
The paper establishes improved learning rates for nonconvex pairwise learning.
arXiv Detail & Related papers (2021-11-09T16:12:20Z) - Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
The vulnerability of deep learning to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between state-of-the-art performance and robustness.
arXiv Detail & Related papers (2021-10-29T13:30:42Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with an undesirable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Learner-Private Online Convex Optimization [24.204781914017758]
We study how to optimally obfuscate the learner's queries in first-order online convex optimization.
Our results apply to the significantly richer family of general convex functions with full-gradient feedback.
arXiv Detail & Related papers (2021-02-23T23:00:44Z) - Outlier-Robust Learning of Ising Models Under Dobrushin's Condition [57.89518300699042]
We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.
Our main result is to provide the first computationally efficient robust learning algorithm for this problem with near-optimal error guarantees.
arXiv Detail & Related papers (2021-02-03T18:00:57Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)