Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning
- URL: http://arxiv.org/abs/2401.10371v5
- Date: Fri, 11 Oct 2024 22:13:41 GMT
- Title: Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning
- Authors: Eli Chien, Haoyu Wang, Ziang Chen, Pan Li
- Abstract summary: Privacy is defined as statistical indistinguishability from retraining from scratch.
We propose Langevin unlearning, an unlearning framework based on noisy gradient descent.
- Score: 20.546589699647416
- License:
- Abstract: Machine unlearning has raised significant interest with the adoption of laws ensuring the ``right to be forgotten''. Researchers have provided a probabilistic notion of approximate unlearning under a definition similar to Differential Privacy (DP), where privacy is defined as statistical indistinguishability from retraining from scratch. We propose Langevin unlearning, an unlearning framework based on noisy gradient descent with privacy guarantees for approximate unlearning problems. Langevin unlearning unifies the DP learning process and the privacy-certified unlearning process with many algorithmic benefits. These include approximate certified unlearning for non-convex problems, complexity savings compared to retraining, and sequential and batch unlearning for multiple unlearning requests.
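To make the mechanism concrete, the sketch below shows the noisy gradient descent recipe the framework builds on: learn with noisy full-batch gradient steps, then unlearn by running a few more of the same noisy steps on the retained data only. The logistic loss, step size, noise scale, and step counts are illustrative assumptions, not the paper's calibrated privacy parameters.

```python
import numpy as np

def noisy_gd(w, X, y, steps, lr=0.1, sigma=0.05, lam=1e-2, rng=None):
    """Full-batch noisy gradient descent on L2-regularized logistic loss.

    Each step adds isotropic Gaussian noise, as in projected Langevin
    dynamics; in a certified setup, sigma would be calibrated to the target
    privacy budget -- the value here is a placeholder.
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid predictions
        grad = X.T @ (p - y) / len(y) + lam * w     # mean gradient + L2 term
        w = w - lr * grad + sigma * rng.standard_normal(w.shape)
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = (rng.random(200) < 0.5).astype(float)

w = noisy_gd(np.zeros(5), X, y, steps=100)   # learn on the full dataset
X_ret, y_ret = X[:150], y[:150]              # drop 50 points to be forgotten
w = noisy_gd(w, X_ret, y_ret, steps=10)      # unlearn: a few more noisy steps
                                             # on the retained data only
```

The point the abstract emphasizes is that learning and unlearning are the same noisy iteration, which is what lets the privacy analysis extend to sequential and batch unlearning requests.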
Related papers
- A Closer Look at Machine Unlearning for Large Language Models [46.245404272612795]
Large language models (LLMs) may memorize sensitive or copyrighted content, raising privacy and legal concerns.
We discuss several issues in machine unlearning for LLMs and provide our insights on possible approaches.
arXiv Detail & Related papers (2024-10-10T16:56:05Z)
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; and (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Certified Machine Unlearning via Noisy Stochastic Gradient Descent [20.546589699647416]
Machine unlearning aims to efficiently remove the effect of certain data points on the trained model.
We propose to leverage noisy gradient descent for unlearning and establish its first approximate unlearning guarantee.
arXiv Detail & Related papers (2024-03-25T18:43:58Z)
- Tight Bounds for Machine Unlearning via Differential Privacy [0.7252027234425334]
We consider the so-called "right to be forgotten" by requiring that a trained model be able to "unlearn" a number of points from its training data.
We obtain tight bounds on the deletion capacity achievable by DP-based machine unlearning algorithms.
arXiv Detail & Related papers (2023-09-02T09:55:29Z)
- Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning--unlearning.
We provide space-efficient ticketed learning--unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper develops privacy-preserving machine learning algorithms that retain good utility, a problem of both practical and theoretical importance; a sketch of the standard DP-SGD clip-and-noise step follows below.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
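As background for the entry above, here is a minimal sketch of the standard DP-SGD clip-and-noise step that low-noise variants refine; the clipping norm, noise multiplier, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip=1.0, noise_mult=0.5, rng=None):
    """One DP-SGD step: clip each per-example gradient to L2 norm `clip`,
    average, and add Gaussian noise scaled to the clipping norm."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    g_bar = np.mean(clipped, axis=0)
    g_bar += rng.standard_normal(w.shape) * (noise_mult * clip / len(clipped))
    return w - lr * g_bar

w = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 0.2, 0.1])]  # toy gradients
w = dp_sgd_step(w, grads)
```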
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades gracefully under a novel and more permissive notion of approximation error; the underlying linear $Q$-learning update is sketched below.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
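For reference, linear function approximation represents Q(s, a) as a dot product phi(s, a) @ w and updates w with a temporal-difference step. The sketch below shows that generic update, not the paper's exploration variant; the feature vectors and hyperparameters are illustrative assumptions.

```python
import numpy as np

def q_learning_step(w, phi_sa, r, next_feats, done, alpha=0.05, gamma=0.99):
    """One Q-learning update with linear function approximation,
    Q(s, a) = phi(s, a) @ w. `next_feats` holds phi(s', a') for each
    action a' available in the next state."""
    q_next = 0.0 if done else max(phi @ w for phi in next_feats)
    td_error = (r + gamma * q_next) - phi_sa @ w   # TD target minus estimate
    return w + alpha * td_error * phi_sa

w = np.zeros(4)
phi_sa = np.array([1.0, 0.0, 0.5, 0.0])            # features of (s, a)
next_feats = [np.array([0.0, 1.0, 0.0, 0.5]),      # features of (s', a')
              np.array([1.0, 1.0, 0.0, 0.0])]      # for each next action a'
w = q_learning_step(w, phi_sa, r=1.0, next_feats=next_feats, done=False)
```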
- Efficient Differentially Private Secure Aggregation for Federated Learning via Hardness of Learning with Errors [1.4680035572775534]
Federated machine learning leverages edge computing to develop models from network user data.
Privacy in federated learning remains a major challenge.
Recent advances in secure aggregation using multiparty computation eliminate the need for a third party.
We present a new federated learning protocol that leverages a novel differentially private, malicious secure aggregation protocol; the generic masking idea behind secure aggregation is sketched below.
arXiv Detail & Related papers (2021-12-13T18:31:08Z)
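To make the aggregation idea concrete, the sketch below shows the classic pairwise-masking construction behind secure aggregation: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum while individual updates stay hidden. This illustrates the generic idea only; the paper's protocol instead builds malicious security on the hardness of Learning with Errors, which this toy code does not model.

```python
import numpy as np

def masked_updates(updates, seed=0):
    """Pairwise-masking sketch: clients i < j share a mask m_ij; client i
    adds it and client j subtracts it, so all masks cancel in the sum."""
    rng = np.random.default_rng(seed)   # stands in for pairwise shared PRG seeds
    n, d = len(updates), updates[0].shape[0]
    masked = [u.copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.standard_normal(d)  # mask known only to clients i and j
            masked[i] += m
            masked[j] -= m
    return masked

updates = [np.ones(3) * k for k in range(4)]   # toy client model updates
agg = sum(masked_updates(updates))             # equals sum(updates): masks cancel
assert np.allclose(agg, sum(updates))
```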
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handling streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning; a minimal OGD sketch follows below.
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
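Here is a minimal sketch of online gradient descent for pairwise learning, using an AUC-style hinge loss over pairs of oppositely labeled examples; the buffer, loss, and step size are illustrative assumptions rather than the paper's exact algorithms.

```python
import numpy as np

def pairwise_ogd(stream, d, lr=0.1, buffer_size=10):
    """OGD for pairwise learning: each arriving example is paired with a
    small buffer of past examples of the opposite label, and w takes a
    descent step on the hinge loss max(0, 1 - w @ (x_pos - x_neg))."""
    w = np.zeros(d)
    seen = []                                        # buffer of past examples
    for x, y in stream:
        for x_old, y_old in seen:
            if y == y_old:
                continue                             # only oppositely labeled pairs
            x_pos, x_neg = (x, x_old) if y > y_old else (x_old, x)
            if 1.0 - w @ (x_pos - x_neg) > 0.0:      # hinge is active
                w += lr * (x_pos - x_neg)            # descent step on the pair loss
        seen = (seen + [(x, y)])[-buffer_size:]      # keep the buffer bounded
    return w

rng = np.random.default_rng(0)
stream = [(rng.standard_normal(5), int(rng.random() < 0.5)) for _ in range(100)]
w = pairwise_ogd(stream, d=5)
```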
- Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)