Continual Learning and Private Unlearning
- URL: http://arxiv.org/abs/2203.12817v1
- Date: Thu, 24 Mar 2022 02:40:33 GMT
- Title: Continual Learning and Private Unlearning
- Authors: Bo Liu, Qiang Liu, Peter Stone
- Abstract summary: This paper formalizes the continual learning and private unlearning (CLPU) problem.
It introduces a straightforward but exactly private solution, CLPU-DER++, as the first step towards solving the CLPU problem.
- Score: 49.848423659220444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As intelligent agents become autonomous over longer periods of time, they may
eventually become lifelong counterparts to specific people. If so, it may be
common for a user to want the agent to master a task temporarily but later on
to forget the task due to privacy concerns. However, enabling an agent to
\emph{forget privately} what the user specified without degrading the rest of
the learned knowledge is a challenging problem. With the aim of addressing this
challenge, this paper formalizes the continual learning and private unlearning
(CLPU) problem. The paper further introduces a straightforward but exactly
private solution, CLPU-DER++, as the first step towards solving the CLPU
problem, along with a set of carefully designed benchmark problems to evaluate
the effectiveness of the proposed solution.
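The CLPU setting described above can be pictured as an interaction protocol: the agent receives a stream of tasks, each paired with a user instruction to learn permanently, learn temporarily, or forget a previously temporarily-learned task. The sketch below is a hypothetical illustration of that protocol only; the class and method names (`Instruction`, `CLPUAgent`, `step`) are not from the paper, and the "isolated knowledge" strings stand in for actual task-specific model parameters.

```python
from enum import Enum

class Instruction(Enum):
    LEARN = "learn"        # learn and retain permanently
    TEMP_LEARN = "temp"    # learn now, but be prepared to forget later
    FORGET = "forget"      # exactly remove a temporarily learned task

class CLPUAgent:
    def __init__(self):
        self.main_knowledge = set()   # permanently retained task IDs
        self.temp_knowledge = {}      # task ID -> isolated task knowledge

    def step(self, task_id, instruction):
        if instruction is Instruction.LEARN:
            self.main_knowledge.add(task_id)
        elif instruction is Instruction.TEMP_LEARN:
            # keep temporary knowledge isolated so it can later be
            # deleted exactly, without touching anything else
            self.temp_knowledge[task_id] = f"model-for-{task_id}"
        elif instruction is Instruction.FORGET:
            # exact (private) unlearning: discard the isolated knowledge,
            # leaving the rest of the agent's knowledge intact
            self.temp_knowledge.pop(task_id, None)
        return self

agent = CLPUAgent()
agent.step("T1", Instruction.LEARN)
agent.step("T2", Instruction.TEMP_LEARN)
agent.step("T2", Instruction.FORGET)
assert "T1" in agent.main_knowledge and "T2" not in agent.temp_knowledge
```

The key design point this sketch mirrors is that exact privacy is easy if temporary knowledge is kept isolated; the hard part the paper addresses is doing so without sacrificing knowledge transfer and retention across the remaining tasks.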
Related papers
- Eliciting Problem Specifications via Large Language Models [4.055489363682198]
Large language models (LLMs) can be utilized to map a problem class into a semi-formal specification.
A cognitive system can then use the problem-space specification to solve multiple instances of problems from the problem class.
arXiv Detail & Related papers (2024-05-20T16:19:02Z)
- Improving Socratic Question Generation using Data Augmentation and Preference Optimization [2.1485350418225244]
Large language models (LLMs) can be used to augment human effort by automatically generating Socratic questions for students.
Existing methods that involve prompting these LLMs sometimes produce invalid outputs.
We propose a data augmentation method to enrich existing Socratic questioning datasets with questions that are invalid in specific ways.
Next, we propose a method to optimize open-source LLMs such as LLama 2 to prefer ground-truth questions over generated invalid ones.
arXiv Detail & Related papers (2024-03-01T00:08:20Z)
- DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer [57.04801796205638]
Large Language Models (LLMs) have emerged as dominant tools for various tasks.
However, concerns surrounding data privacy present obstacles due to the tuned prompts' dependency on sensitive private information.
We present Differentially-Private Offsite Prompt Tuning (DP-OPT) to address this challenge.
arXiv Detail & Related papers (2023-11-27T02:01:10Z)
- POP: Prompt Of Prompts for Continual Learning [59.15888651733645]
Continual learning (CL) aims to mimic the human ability to learn new concepts without catastrophic forgetting.
We show that a foundation model equipped with POP learning is able to outperform classic CL methods by a significant margin.
arXiv Detail & Related papers (2023-06-14T02:09:26Z)
- Towards Skilled Population Curriculum for Multi-Agent Reinforcement Learning [42.540853953923495]
We introduce a novel automatic curriculum learning framework, Skilled Population Curriculum (SPC), which adapts curriculum learning to multi-agent coordination.
Specifically, we endow the student with population-invariant communication and a hierarchical skill set, allowing it to learn cooperation and behavior skills from distinct tasks with varying numbers of agents.
We also analyze the inherent non-stationarity of this multi-agent automatic curriculum teaching problem and provide a corresponding regret bound.
arXiv Detail & Related papers (2023-02-07T12:30:52Z)
- You Only Live Once: Single-Life Reinforcement Learning [124.1738675154651]
In many real-world situations, the goal might not be to learn a policy that can do the task repeatedly, but simply to perform a new task successfully once in a single trial.
We formalize this problem setting, where an agent must complete a task within a single episode without interventions.
We propose an algorithm, $Q$-weighted adversarial learning (QWALE), which employs a distribution matching strategy.
arXiv Detail & Related papers (2022-10-17T09:00:11Z)
- Knowledge acquisition via interactive Distributed Cognitive skill Modules [0.0]
Human cognitive capacity for problem solving is limited by educational background, skills, experience, etc.
This work aims to introduce an early stage of a modular approach to procedural skill acquisition and storage via distributed cognitive skill modules.
arXiv Detail & Related papers (2022-10-13T01:41:11Z)
- Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting (LIRF) explicitly handles task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets, and demonstrate that the proposed LIRF strategy yields encouraging results with gratifying generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC learnable class is also PAC constrained learnable.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
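Constrained learning of this kind is commonly approached through the Lagrangian: alternate a primal step that minimizes the loss plus a weighted constraint penalty with a dual ascent step on the multiplier. The toy below is a generic illustration of that primal-dual pattern, not the paper's algorithm: it minimizes $(\theta - 2)^2$ subject to $\theta \le 1$, where the primal step has a closed form.

```python
def solve(steps=1000, lr=0.05):
    """Toy primal-dual loop: min (theta-2)^2 subject to theta <= 1."""
    lam = 0.0    # Lagrange multiplier for the constraint theta - 1 <= 0
    theta = 0.0
    for _ in range(steps):
        # primal step: theta minimizes the Lagrangian
        # (theta - 2)^2 + lam * (theta - 1), giving theta = 2 - lam / 2
        theta = 2.0 - lam / 2.0
        # dual ascent on the multiplier, projected to stay non-negative
        lam = max(0.0, lam + lr * (theta - 1.0))
    return theta, lam

theta, lam = solve()
# theta converges to the constrained optimum 1.0, lam to 2.0
```

The multiplier grows exactly while the constraint is violated and stops once it is satisfied, which is the mechanism that lets constrained formulations trade off objective and requirement without hand-tuned penalty weights.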
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.