Machine Unlearning via Algorithmic Stability
- URL: http://arxiv.org/abs/2102.13179v1
- Date: Thu, 25 Feb 2021 21:16:56 GMT
- Title: Machine Unlearning via Algorithmic Stability
- Authors: Enayat Ullah, Tung Mai, Anup Rao, Ryan Rossi, Raman Arora
- Abstract summary: We study the problem of machine unlearning and identify a notion of algorithmic stability, Total Variation (TV) stability, suited to exact unlearning.
For convex risk minimization problems, we design TV-stable algorithms based on noisy Stochastic Gradient Descent (SGD).
Our techniques generalize to arbitrary non-convex functions, and our algorithms are differentially private as well.
- Score: 31.809738694273623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of machine unlearning and identify a notion of
algorithmic stability, Total Variation (TV) stability, which we argue, is
suitable for the goal of exact unlearning. For convex risk minimization
problems, we design TV-stable algorithms based on noisy Stochastic Gradient
Descent (SGD). Our key contribution is the design of corresponding efficient
unlearning algorithms, which are based on constructing a (maximal) coupling of
Markov chains for the noisy SGD procedure. To understand the trade-offs between
accuracy and unlearning efficiency, we give upper and lower bounds on excess
empirical and population risk of TV-stable algorithms for convex risk
minimization. Our techniques generalize to arbitrary non-convex functions, and
our algorithms are differentially private as well.
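As a hedged illustration of the training side of this setup, here is a minimal noisy-SGD sketch on a toy least-squares objective (the function name, constants, and noise level are illustrative assumptions; the paper's actual noise calibration and maximal-coupling-based unlearning procedure are not reproduced here):

```python
import numpy as np

def noisy_sgd(X, y, T=200, eta=0.01, sigma=0.05, seed=0):
    """Noisy SGD on least squares: w <- w - eta * (grad + Gaussian noise).

    Also returns the per-step sample indices, which an unlearning
    procedure would need to locate the first iteration that touched a
    deleted point.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    used = []
    for _ in range(T):
        i = rng.integers(n)
        used.append(i)
        grad = (X[i] @ w - y[i]) * X[i]              # per-sample gradient
        w = w - eta * (grad + sigma * rng.normal(size=d))
    return w, used
```

Recording the sampled indices is what lets an unlearning procedure find the first iteration affected by a deleted point; the paper's coupling construction then allows most subsequent iterates to be reused rather than recomputed.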
Related papers
- Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm [80.94861441583275]
We investigate the generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm.
Our results analyze the impact of different factors on the generalization of D-SGDA.
We also balance the optimization error against the generalization error to obtain optimal bounds in the convex-concave setting.
arXiv Detail & Related papers (2023-10-31T11:27:01Z)
- Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms [61.59448949684493]
We provide a stability and generalization analysis of stochastic compositional gradient descent algorithms.
We establish the uniform stability results for two popular compositional gradient descent algorithms, namely SCGD and SCSC.
We derive dimension-independent excess risk bounds for SCGD and SCSC by trading off their stability results and optimization errors.
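For context, the basic compositional-gradient template these analyses cover tracks the inner function with a moving average; a deterministic toy sketch (names and constants are illustrative, and in true SCGD the inner function and its Jacobian would be sampled rather than exact):

```python
import numpy as np

def scgd(grad_f, g, jac_g, x0, T=500, eta=0.05, beta=0.2):
    """Compositional gradient descent on f(g(x)): keep a running
    estimate y of the inner value g(x), then step along
    jac_g(x)^T grad_f(y)."""
    x, y = x0.copy(), g(x0)
    for _ in range(T):
        y = (1 - beta) * y + beta * g(x)      # inner-function tracker
        x = x - eta * jac_g(x).T @ grad_f(y)  # quasi-gradient step
    return x
```

The moving-average tracker is exactly what a stability analysis has to handle: early on, y is a biased estimate of g(x), so the update is only a quasi-gradient.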
arXiv Detail & Related papers (2023-07-07T02:40:09Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical challenge of developing privacy-preserving machine learning algorithms that maintain good predictive performance.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in safe reinforcement learning tasks.
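The log-barrier idea can be sketched on a one-dimensional toy problem (the function names, constants, and crude backtracking rule below are illustrative assumptions, not the paper's step-size schedule):

```python
import numpy as np

def log_barrier_descent(f, grad_f, c, grad_c, x0, mu=0.1, eta=0.002, T=2000):
    """Descend the barrier surrogate B(x) = f(x) - mu * log(-c(x)) while
    keeping the iterate strictly feasible (c(x) < 0). The step is halved
    until it both stays feasible and does not increase B -- a crude
    stand-in for LBSGD's carefully chosen step size."""
    B = lambda x: f(x) - mu * np.log(-c(x))
    x = float(x0)
    for _ in range(T):
        g = grad_f(x) + mu / (-c(x)) * grad_c(x)   # gradient of B
        step = eta
        for _ in range(50):                        # backtracking guard
            x_new = x - step * g
            if c(x_new) < 0 and B(x_new) <= B(x):
                break
            step *= 0.5
        else:
            break                                  # cannot make progress
        x = x_new
    return x
```

Because the barrier blows up at the constraint boundary, every accepted iterate stays strictly feasible, which is the safety property the summary refers to.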
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Projection-free Constrained Stochastic Nonconvex Optimization with State-dependent Markov Data [0.0]
We present a projection-free stochastic gradient-type algorithm for constrained nonconvex optimization problems with state-dependent Markovian data.
We also empirically demonstrate the performance of our algorithm on the problem of strategic classification with neural networks.
arXiv Detail & Related papers (2022-06-22T19:43:15Z)
- Online Robust Reinforcement Learning with Model Uncertainty [24.892994430374912]
We develop a sample-based approach to estimate the unknown uncertainty set and design robust Q-learning and robust TDC algorithms.
For the robust Q-learning algorithm, we prove that it converges to the optimal robust Q function, and for the robust TDC algorithm, we prove that it converges to some stationary points.
Our approach can be readily extended to robustify many other algorithms, e.g., TD, SARSA, and other GTD algorithms.
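As a hedged illustration, here is tabular Q-learning with an R-contamination-style robust target, a common uncertainty model in this line of work (the toy MDP, constants, and uniform exploration are assumptions, not the paper's exact construction):

```python
import numpy as np

def robust_q_learning(P, rew, gamma=0.9, R=0.1, alpha=0.1, T=5000, seed=0):
    """Tabular Q-learning with a robust Bellman target under an
    R-contamination model: with prob. (1-R) the nominal transition is
    trusted; with prob. R nature may send us to the worst state.
        target = r + gamma*((1-R)*max_a Q(s',a) + R*min_s max_a Q(s,a))
    """
    rng = np.random.default_rng(seed)
    nS, nA = rew.shape
    Q = np.zeros((nS, nA))
    s = 0
    for _ in range(T):
        a = rng.integers(nA)                      # uniform exploration
        s2 = rng.choice(nS, p=P[s, a])
        worst = np.min(np.max(Q, axis=1))         # pessimistic fallback
        target = rew[s, a] + gamma * ((1 - R) * np.max(Q[s2]) + R * worst)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
    return Q
```

With R=0 this reduces to ordinary Q-learning, and with nonnegative rewards the robust value estimates never exceed the nominal ones.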
arXiv Detail & Related papers (2021-09-29T16:17:47Z)
- A proximal-proximal majorization-minimization algorithm for nonconvex tuning-free robust regression problems [4.261680642170457]
We introduce a proximal-proximal majorization-minimization (PPMM) algorithm for nonconvex tuning-free robust regression problems.
Our proposed algorithm outperforms the existing state-of-the-art algorithms.
arXiv Detail & Related papers (2021-06-25T15:07:13Z)
- Stochastic Hard Thresholding Algorithms for AUC Maximization [49.00683387735522]
We develop a stochastic hard thresholding algorithm for AUC maximization.
We conduct experiments to show the efficiency and effectiveness of the proposed algorithms.
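The core projection step of such methods is the hard-thresholding operator; a minimal sketch, applied here to a toy sparse least-squares problem rather than the AUC objective (function names and constants are illustrative):

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w and zero the rest."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]
    out[keep] = w[keep]
    return out

def iht(X, y, k, eta=0.5, T=100):
    """Iterative hard thresholding: gradient step on the least-squares
    loss, then project back onto the set of k-sparse vectors."""
    w = np.zeros(X.shape[1])
    for _ in range(T):
        w = hard_threshold(w - eta * X.T @ (X @ w - y) / len(y), k)
    return w
```

The thresholding step is what makes the problem nonconvex: the set of k-sparse vectors is not convex, so each projection can change the active support.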
arXiv Detail & Related papers (2020-11-04T16:49:29Z)
- A Stochastic Subgradient Method for Distributionally Robust Non-Convex Learning [2.007262412327553]
Robustness is with respect to uncertainty in the underlying data distribution.
We show that our technique converges to points satisfying approximate optimality conditions.
We also illustrate the performance of our algorithm on real datasets.
arXiv Detail & Related papers (2020-06-08T18:52:40Z)
- Robust Reinforcement Learning with Wasserstein Constraint [49.86490922809473]
We show the existence of optimal robust policies, provide a sensitivity analysis for the perturbations, and then design a novel robust learning algorithm.
The effectiveness of the proposed algorithm is verified in the Cart-Pole environment.
arXiv Detail & Related papers (2020-06-01T13:48:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.