Stability and Generalization of Differentially Private Minimax Problems
- URL: http://arxiv.org/abs/2204.04858v1
- Date: Mon, 11 Apr 2022 03:59:16 GMT
- Title: Stability and Generalization of Differentially Private Minimax Problems
- Authors: Yilin Kang, Yong Liu, Jian Li, Weiping Wang
- Abstract summary: We study the differentially private general minimax setting, combining differential privacy with the minimax optimization paradigm.
To the best of our knowledge, this is the first analysis of the generalization performance of the general minimax paradigm under differential privacy.
- Score: 20.246768861331276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of machine learning, many problems can be formulated as
minimax problems, including reinforcement learning and generative adversarial
networks, to name just a few. The minimax problem has therefore attracted a
great deal of attention from researchers in recent decades. However, there is
relatively little work studying the privacy of the general minimax paradigm.
In this paper, we focus on the privacy of the general minimax setting,
combining differential privacy with the minimax optimization paradigm.
Moreover, via algorithmic stability theory, we theoretically analyze the
high-probability generalization performance of the differentially private
minimax algorithm under the strongly-convex-strongly-concave condition. To the
best of our knowledge, this is the first analysis of the generalization
performance of the general minimax paradigm that takes differential privacy
into account.
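
The recipe the abstract describes, pairing the Gaussian mechanism with a stochastic gradient descent-ascent loop on a strongly-convex-strongly-concave objective, can be sketched as follows. This is a minimal illustration under assumed step sizes, clipping threshold, and noise calibration, not the authors' exact algorithm; the privacy accounting needed to state an (eps, delta) guarantee is omitted.

```python
import numpy as np

def dp_sgda(grad_x, grad_y, data, x0, y0, steps=200, eta=0.05,
            clip=1.0, noise_mult=1.0, rng=None):
    """Differentially private stochastic gradient descent-ascent (sketch).

    Each step samples one record, clips its per-sample gradients to norm
    `clip` to bound sensitivity, and perturbs them with Gaussian noise of
    scale `noise_mult * clip` (the Gaussian mechanism). A formal
    (eps, delta) guarantee would follow from a composition or moments
    accountant, which is omitted here.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(steps):
        z = data[rng.integers(len(data))]           # sample one record
        gx, gy = grad_x(x, y, z), grad_y(x, y, z)   # per-sample gradients
        for g in (gx, gy):                          # clip to bound sensitivity
            norm = np.linalg.norm(g)
            if norm > clip:
                g *= clip / norm
        sigma = noise_mult * clip
        x -= eta * (gx + sigma * rng.standard_normal(x.shape))  # descend in x
        y += eta * (gy + sigma * rng.standard_normal(y.shape))  # ascend in y
    return x, y

# Toy strongly-convex-strongly-concave objective (an assumption for the demo):
#   f(x, y; z) = 0.5*||x - z||^2 - 0.5*||y - z||^2 + x @ y
data = np.random.default_rng(1).standard_normal((100, 5))
grad_x = lambda x, y, z: (x - z) + y
grad_y = lambda x, y, z: -(y - z) + x
x_priv, y_priv = dp_sgda(grad_x, grad_y, data, np.zeros(5), np.zeros(5))
```

Per-sample clipping is what fixes the sensitivity of each update; without it the Gaussian noise scale could not be calibrated to any finite gradient bound.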
Related papers
- Differentially Private Sliced Inverse Regression: Minimax Optimality and
Algorithm [16.14032140601778]
We propose optimal differentially private algorithms designed to address privacy concerns in the context of sufficient dimension reduction.
We develop differentially private algorithms that achieve the minimax lower bounds up to logarithmic factors.
As a natural extension, we can readily offer analogous lower and upper bounds for differentially private sparse principal component analysis.
arXiv Detail & Related papers (2024-01-16T06:47:43Z) - Score Attack: A Lower Bound Technique for Optimal Differentially Private
Learning [8.760651633031342]
We propose a novel approach called the score attack, which provides a lower bound on the differential-privacy-constrained minimax risk of parameter estimation.
It can optimally lower bound the minimax risk of estimating unknown model parameters, up to a logarithmic factor, while ensuring differential privacy for a range of statistical problems.
arXiv Detail & Related papers (2023-03-13T14:26:27Z) - Instance-Optimal Differentially Private Estimation [2.320417845168326]
We study local minimax convergence estimation rates subject to $\epsilon$-differential privacy.
We show that optimal algorithms for simple hypothesis testing, namely the recent optimal private testers of Canonne et al., directly inform the design of locally minimax estimation algorithms.
arXiv Detail & Related papers (2022-10-28T01:08:01Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing machine learning algorithms that achieve good predictive performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - What is a Good Metric to Study Generalization of Minimax Learners? [24.577243536475233]
Minimax optimization has served as the backbone of many machine learning (ML) problems.
How the solution trained on training data performs on unseen test data has been relatively underexplored.
We propose a new metric for the generalization of minimax learners, the primal gap, to answer these questions.
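One common way such metrics are formalized in this literature contrasts the population and empirical primal risks; the definitions below are a standard formalization given for illustration, not necessarily the paper's exact ones.

```latex
% Population and empirical primal risks for a minimax loss f(x, y; z),
% with F(x, y) = E_z[f(x, y; z)] and F_S its empirical average over the
% sample S (a standard formalization; the paper's definitions may differ).
\[
  R(x) \;=\; \max_{y} F(x, y),
  \qquad
  R_S(x) \;=\; \max_{y} F_S(x, y).
\]
% Generalization of a learned x is then assessed via the gap R(x) - R_S(x).
```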
arXiv Detail & Related papers (2022-06-09T13:39:06Z) - Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization, as in the generic sketch below.
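A generic sketch of this alternation pattern follows. It is illustrative only: the pairing of a greedy discrete step with a finite-difference gradient step, the toy objective, and all names below are assumptions, not the paper's algorithm.

```python
import numpy as np

def finite_diff_grad(f, x, h=1e-5):
    """Central finite-difference gradient (for the sketch only)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def greedy_submodular_max(f, ground_set, k):
    """Greedy maximization of a monotone submodular set function f,
    the classic (1 - 1/e)-approximate discrete inner step."""
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S

def alternating_minimax(obj, x0, ground_set, k, steps=100, eta=0.1):
    """min over x of max over S: alternate a greedy (approximate) discrete
    maximization with a gradient descent step on the continuous variable."""
    x = x0.astype(float)
    S = set()
    for _ in range(steps):
        S = greedy_submodular_max(lambda T: obj(x, T), ground_set, k)
        x = x - eta * finite_diff_grad(lambda u: obj(u, S), x)
    return x, S

# Toy objective: convex (quadratic) in x, modular (hence submodular) in S.
w = np.random.default_rng(2).standard_normal((6, 4))
obj = lambda x, S: 0.5 * x @ x + sum(w[i] @ x for i in S)
x_star, S_star = alternating_minimax(obj, np.ones(4), set(range(6)), k=2)
```

Note that the inner greedy step only approximately solves the discrete maximization, so any convergence statement for such a scheme is with respect to an approximate max, not the exact one.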
arXiv Detail & Related papers (2021-11-01T21:06:35Z) - Instance-optimality in optimal value estimation: Adaptivity via
variance-reduced Q-learning [99.34907092347733]
We analyze the problem of estimating optimal $Q$-value functions for a discounted Markov decision process with discrete states and actions.
Using a local minimax framework, we show that an instance-dependent functional arises in lower bounds on the accuracy of any estimation procedure.
In the other direction, we establish the sharpness of our lower bounds, up to factors logarithmic in the state and action spaces, by analyzing a variance-reduced version of $Q$-learning.
arXiv Detail & Related papers (2021-06-28T00:38:54Z) - Decentralized Personalized Federated Learning for Min-Max Problems [79.61785798152529]
This paper is the first to study personalized federated learning (PFL) for saddle point problems, which encompass a broader range of optimization problems.
We propose new algorithms to address this problem and provide a theoretical analysis for smooth (strongly) convex-(strongly) concave saddle point problems.
Numerical experiments for bilinear problems and neural networks with adversarial noise demonstrate the effectiveness of the proposed methods.
arXiv Detail & Related papers (2021-06-14T10:36:25Z) - Stability and Generalization of Stochastic Gradient Methods for Minimax
Problems [71.60601421935844]
Many machine learning problems can be formulated as minimax problems, Generative Adversarial Networks (GANs) being a prominent example.
We provide a comprehensive generalization analysis of stochastic gradient methods for minimax problems.
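A standard template from this line of work, stated generically here rather than as the paper's exact theorem, connects algorithmic stability to generalization as follows.

```latex
% Generic stability-to-generalization schema for minimax learners
% (a standard template in this literature, not the paper's exact result).
% A randomized algorithm A is \epsilon-argument-stable if, for any two
% datasets S, S' differing in a single record,
\[
  \sup_{S \simeq S'} \; \mathbb{E}\,\big\| A(S) - A(S') \big\| \;\le\; \epsilon .
\]
% For an L-Lipschitz loss, stability transfers to generalization at rate
\[
  \big| \, \mathbb{E}\big[ R(A(S)) - R_S(A(S)) \big] \, \big| \;\le\; c \, L \, \epsilon ,
\]
% for a constant c, with high-probability versions following from
% concentration arguments.
```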
arXiv Detail & Related papers (2021-05-08T22:38:00Z) - Train simultaneously, generalize better: Stability of gradient-based
minimax learners [12.691047660244331]
We show that the optimization algorithm plays a key role in the generalization performance of the trained minimax model, under both convex-concave and non-convex-non-concave minimax settings.
We discuss several numerical results indicating the role of optimization algorithms in the generalization of learned minimax models.
arXiv Detail & Related papers (2020-10-23T17:44:43Z) - Non-convex Min-Max Optimization: Applications, Challenges, and Recent
Theoretical Advances [58.54078318403909]
The min-max problem, also known as the saddle point problem, is a class of adversarial problems that is also studied in the context of zero-sum games.
arXiv Detail & Related papers (2020-06-15T05:33:42Z)