Ticketed Learning-Unlearning Schemes
- URL: http://arxiv.org/abs/2306.15744v1
- Date: Tue, 27 Jun 2023 18:54:40 GMT
- Title: Ticketed Learning-Unlearning Schemes
- Authors: Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Ayush
Sekhari, Chiyuan Zhang
- Abstract summary: We propose a new ticketed model for learning--unlearning.
We provide space-efficient ticketed learning--unlearning schemes for a broad family of concept classes.
- Score: 57.89421552780526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the learning--unlearning paradigm defined as follows. First, given
a dataset, the goal is to learn a good predictor, such as one minimizing a
certain loss. Subsequently, given any subset of examples that wish to be
unlearnt, the goal is to learn, without the knowledge of the original training
dataset, a good predictor that is identical to the predictor that would have
been produced when learning from scratch on the surviving examples.
We propose a new ticketed model for learning--unlearning wherein the learning
algorithm can send back additional information in the form of a small-sized
(encrypted) ``ticket'' to each participating training example, in addition to
retaining a small amount of ``central'' information for later. Subsequently,
the examples that wish to be unlearnt present their tickets to the unlearning
algorithm, which additionally uses the central information to return a new
predictor. We provide space-efficient ticketed learning--unlearning schemes for
a broad family of concept classes, including thresholds, parities,
intersection-closed classes, among others.
En route, we introduce the count-to-zero problem, where during unlearning,
the goal is to simply know if there are any examples that survived. We give a
ticketed learning--unlearning scheme for this problem that relies on the
construction of Sperner families with certain properties, which might be of
independent interest.
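To make the interface concrete, the following is a minimal sketch of a ticketed learning--unlearning scheme in Python. All names are illustrative and the scheme is deliberately naive: the central information here is the entire dataset, whereas the point of the paper's constructions is to keep both the central state and the tickets small. The toy learner handles one-dimensional thresholds.
```python
from dataclasses import dataclass

@dataclass
class Ticket:
    index: int  # opaque per-example handle, returned at training time

def train_from_scratch(examples):
    # Toy learner for 1-D thresholds: predict 1 iff x >= threshold, with the
    # threshold midway between the largest negative and smallest positive example.
    neg = [x for x, y in examples if y == 0]
    pos = [x for x, y in examples if y == 1]
    if not pos:
        return float("inf")   # never predict 1
    if not neg:
        return float("-inf")  # always predict 1
    return (max(neg) + min(pos)) / 2

def learn(dataset):
    """Return a predictor, the retained central info, and one ticket per example."""
    central = list(dataset)  # naive: central info is the entire dataset
    tickets = [Ticket(i) for i in range(len(central))]
    return train_from_scratch(central), central, tickets

def unlearn(central, presented_tickets):
    """Using only the central info and the presented tickets, output the predictor
    that learning from scratch on the surviving examples would have produced."""
    gone = {t.index for t in presented_tickets}
    survivors = [x for i, x in enumerate(central) if i not in gone]
    return train_from_scratch(survivors), survivors  # predictor + new central info

# Example: learn on four labeled points, then unlearn one of them.
predictor, central, tickets = learn([(1, 0), (2, 0), (5, 1), (7, 1)])
assert predictor == 3.5
predictor, central = unlearn(central, [tickets[2]])  # delete the example (5, 1)
assert predictor == 4.5
```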
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- An Information Theoretic Approach to Machine Unlearning [45.600917449314444]
A key challenge in unlearning is forgetting the necessary data in a timely manner while preserving model performance.
In this work, we address the zero-shot unlearning scenario, whereby an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten.
We derive a simple but principled zero-shot unlearning method based on the geometry of the model.
arXiv Detail & Related papers (2024-02-02T13:33:30Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the ``forget'' data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly, across various metrics, to models retrained from scratch, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
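The abstract above does not spell out the algorithm, but one plausible reading of gradient-projection unlearning is: take unlearning steps on the forget set only in directions orthogonal to a subspace of gradient directions that matter for the retained data. The sketch below (NumPy; illustrative names, with an orthonormal basis `U` of that subspace assumed given) shows such a projection; the paper's actual PGU update may differ in detail.
```python
import numpy as np

def project_out(grad, U):
    # Remove grad's component in span(U); U has orthonormal columns that
    # (hypothetically) span gradient directions important to retained data.
    return grad - U @ (U.T @ grad)

def pgu_step(params, forget_grad, U, lr=0.1):
    # Ascend on the forget-set loss, restricted to the orthogonal complement,
    # so the update minimally interferes with knowledge encoded in span(U).
    return params + lr * project_out(forget_grad, U)

# Toy check: with U spanning the x-axis, the x-component of the update vanishes.
U = np.array([[1.0], [0.0]])
print(pgu_step(np.zeros(2), np.array([3.0, 4.0]), U))  # -> [0.0, 0.4]
```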
- Towards Scalable Adaptive Learning with Graph Neural Networks and Reinforcement Learning [0.0]
We introduce a flexible and scalable approach to the problem of learning path personalization.
Our model is a sequential recommender system based on a graph neural network.
Our results demonstrate that it can learn to make good recommendations in the small-data regime.
arXiv Detail & Related papers (2023-05-10T18:16:04Z)
- Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers [71.70205894168039]
We consider instance-wise unlearning, in which the goal is to delete information about a set of instances from a pre-trained model.
We propose two methods that reduce forgetting on the remaining data: 1) utilizing adversarial examples to overcome forgetting at the representation level, and 2) leveraging weight importance metrics to pinpoint network parameters guilty of propagating unwanted information.
arXiv Detail & Related papers (2023-01-27T07:53:50Z)
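As a rough illustration of method 2) above, here is one standard weight-importance metric, a squared-gradient (Fisher-style) estimate, that could be used to pinpoint parameters carrying information about the instances to be deleted. The exact metric in the paper may differ; names here are illustrative.
```python
import torch

def importance_scores(model, loss_fn, loader):
    # Accumulate squared gradients over the instances to be deleted; a large
    # score flags a parameter as likely to carry information about them.
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.detach() ** 2
    return scores
```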
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of learning on big data is one of the ultimate goals of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Learning where to learn: Gradient sparsity in meta and continual learning [4.845285139609619]
We show that meta-learning can be improved by letting the learning algorithm decide which weights to change.
We find that patterned sparsity emerges from this process, with the pattern of sparsity varying on a problem-by-problem basis.
Our results shed light on an ongoing debate on whether meta-learning can discover adaptable features and suggest that learning by sparse gradient descent is a powerful inductive bias for meta-learning systems.
arXiv Detail & Related papers (2021-10-27T12:54:36Z)
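One way to read "letting the learning algorithm decide which weights to change" is a meta-learned per-parameter gate on the inner-loop update, as in the hedged sketch below (PyTorch; all names are illustrative, and the paper's actual mechanism may differ).
```python
import torch

def masked_inner_step(params, mask_logits, grads, lr=0.01):
    # Gate each parameter's update with a meta-learned soft mask. Meta-training
    # backpropagates the outer-loop loss into mask_logits; gates that settle
    # near zero would yield the patterned gradient sparsity the paper reports.
    updated = {}
    for name, p in params.items():
        gate = torch.sigmoid(mask_logits[name])  # in (0, 1), differentiable
        updated[name] = p - lr * gate * grads[name]
    return updated
```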
- On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning [13.149070833843133]
Machine unlearning, i.e., having a model forget some of its training data, has become increasingly important as privacy legislation promotes variants of the right to be forgotten.
We first show that the definition that underlies approximate unlearning, which seeks to prove the approximately unlearned model is close to an exactly retrained model, is incorrect because one can obtain the same model using different datasets.
We then turn to exact unlearning approaches and ask how to verify their claims of unlearning.
arXiv Detail & Related papers (2021-10-22T16:16:56Z)
- When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning? [53.523017945443115]
We describe natural prediction problems in which every sufficiently accurate training algorithm must encode, in the prediction model, essentially all the information about a large subset of its training examples.
Our results do not depend on the training algorithm or the class of models used for learning.
arXiv Detail & Related papers (2020-12-11T15:25:14Z)
- Learning Half-Spaces and other Concept Classes in the Limit with Iterative Learners [0.0]
We show that strongly non-U-shaped learning is restrictive for iterative learning from informant.
We also investigate the learnability of the concept class of half-spaces and provide a constructive iterative algorithm to learn the set of half-spaces from informant.
arXiv Detail & Related papers (2020-10-07T07:20:50Z)
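That last paper works in the learning-in-the-limit model with iterative learners, not empirical training; still, the classical perceptron below is the textbook example of an iterative half-space learner, included only to fix intuition about what "iterative" means here. It is not the paper's algorithm.
```python
import numpy as np

def perceptron(examples, dim, passes=10):
    # Iterative learner: the hypothesis (w, b) is revised one example at a
    # time, with no memory of past examples beyond the hypothesis itself.
    w, b = np.zeros(dim), 0.0
    for _ in range(passes):
        for x, y in examples:          # labels y in {-1, +1}
            if y * (w @ x + b) <= 0:   # current hypothesis errs: update
                w, b = w + y * x, b + y
    return w, b                        # predict sign(w @ x + b)
```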
This list is automatically generated from the titles and abstracts of the papers on this site.