Continual Learning and Private Unlearning
- URL: http://arxiv.org/abs/2203.12817v1
- Date: Thu, 24 Mar 2022 02:40:33 GMT
- Title: Continual Learning and Private Unlearning
- Authors: Bo Liu, Qiang Liu, Peter Stone
- Abstract summary: This paper formalizes the continual learning and private unlearning (CLPU) problem.
It introduces a straightforward but exactly private solution, CLPU-DER++, as the first step towards solving the CLPU problem.
- Score: 49.848423659220444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As intelligent agents become autonomous over longer periods of time, they may
eventually become lifelong counterparts to specific people. If so, it may be
common for a user to want the agent to master a task temporarily but later on
to forget the task due to privacy concerns. However, enabling an agent to
\emph{forget privately} what the user specified without degrading the rest of
the learned knowledge is a challenging problem. With the aim of addressing this
challenge, this paper formalizes the continual learning and private unlearning
(CLPU) problem. The paper further introduces a straightforward but exactly
private solution, CLPU-DER++, as the first step towards solving the CLPU
problem, along with a set of carefully designed benchmark problems to evaluate
the effectiveness of the proposed solution.
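To make the setting concrete, the following is a minimal Python sketch, in the spirit of the abstract, of an agent that keeps temporarily learned tasks in isolated models so that a later forget request can be honored exactly by deleting the isolated model. The class and method names are illustrative assumptions rather than the paper's API, and the training and merging steps are left as stubs.

```python
# Illustrative sketch only: temporary tasks live in isolated models, so exact
# forgetting is possible by deleting the isolated model; permanent knowledge
# is consolidated into a shared main model.
import copy

class CLPUAgent:
    def __init__(self, main_model):
        self.main_model = main_model   # consolidated model for permanent tasks
        self.temp_models = {}          # task_id -> isolated model for temporary tasks

    def learn(self, task_id, data, permanent):
        if permanent:
            self._train(self.main_model, data)      # update shared weights
        else:
            model = copy.deepcopy(self.main_model)  # isolate this task's knowledge
            self._train(model, data)
            self.temp_models[task_id] = model

    def forget(self, task_id):
        # Exact unlearning: the task's data only influenced its isolated copy,
        # so deleting that copy removes the task without degrading the rest.
        self.temp_models.pop(task_id)

    def commit(self, task_id):
        # The user later decides the task should be kept permanently: fold the
        # isolated model into the main model (e.g. via a DER++-style
        # distillation/rehearsal step; stubbed out here).
        self._merge(self.main_model, self.temp_models.pop(task_id))

    def _train(self, model, data):
        pass  # placeholder: any continual-learning update rule

    def _merge(self, main_model, side_model):
        pass  # placeholder: knowledge-distillation / rehearsal merge
```

A request sequence then looks like `agent.learn("t1", data, permanent=False)` followed later by either `agent.forget("t1")` or `agent.commit("t1")`.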
Related papers
- Vector Quantization Prompting for Continual Learning [23.26682439914273]
Continual learning requires overcoming catastrophic forgetting when training a single model on a sequence of tasks.
Recent top-performing approaches are prompt-based methods that utilize a set of learnable parameters to encode task knowledge.
We propose VQ-Prompt, a prompt-based continual learning method that incorporates Vector Quantization into end-to-end training of a set of discrete prompts.
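As a rough, generic illustration of the quantization step (not VQ-Prompt's actual implementation; shapes and names are assumptions), the snippet below selects the nearest prompt from a learnable discrete prompt pool and uses a straight-through estimator so that gradients still reach the pool and the query path.

```python
# Generic vector quantization of a prompt query against a learnable prompt
# pool, with a straight-through estimator. Illustrative only.
import torch
import torch.nn as nn

class PromptQuantizer(nn.Module):
    def __init__(self, num_prompts=10, prompt_len=8, dim=768):
        super().__init__()
        # Discrete prompt pool: num_prompts prompts of prompt_len tokens each.
        self.pool = nn.Parameter(torch.randn(num_prompts, prompt_len, dim))

    def forward(self, query):                       # query: (batch, dim)
        keys = self.pool.mean(dim=1)                # (num_prompts, dim) summary keys
        dists = torch.cdist(query, keys)            # (batch, num_prompts)
        idx = dists.argmin(dim=1)                   # hard nearest-prompt selection
        hard = self.pool[idx]                       # (batch, prompt_len, dim)
        # Straight-through: the forward pass uses the discrete selection, while
        # gradients flow through a soft mixture of the pool.
        soft = torch.einsum("bn,nld->bld", torch.softmax(-dists, dim=1), self.pool)
        return soft + (hard - soft).detach(), idx
```

In a prompt-based continual learner, the returned prompt would typically be prepended to the input of a frozen backbone.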
arXiv Detail & Related papers (2024-10-27T13:43:53Z)
- BloomWise: Enhancing Problem-Solving capabilities of Large Language Models using Bloom's-Taxonomy-Inspired Prompts [59.83547898874152]
We introduce BloomWise, a new prompting technique inspired by Bloom's taxonomy, to improve the performance of Large Language Models (LLMs).
The decision regarding the need to employ more sophisticated cognitive skills is based on self-evaluation performed by the LLM.
In extensive experiments across 4 popular math reasoning datasets, we have demonstrated the effectiveness of our proposed approach.
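Based only on the summary above, a hypothetical sketch of such a prompting loop might look as follows; the level names, prompt wording, and stopping rule are all assumptions, not the paper's procedure.

```python
# Hypothetical Bloom's-taxonomy-style prompting loop: try progressively more
# demanding cognitive skills and let the model's self-evaluation decide when
# to stop. `ask_llm` is any callable that sends a prompt and returns a reply.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def bloom_style_solve(problem, ask_llm):
    answer = ""
    for level in BLOOM_LEVELS:
        answer = ask_llm(
            f"Using the '{level}' skill from Bloom's taxonomy, solve:\n{problem}"
        )
        verdict = ask_llm(
            f"Problem: {problem}\nAnswer: {answer}\n"
            "Is this answer correct and complete? Reply yes or no."
        )
        if verdict.strip().lower().startswith("yes"):
            return answer, level          # self-evaluation: no harder skill needed
    return answer, BLOOM_LEVELS[-1]       # fall back to the most demanding attempt
```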
arXiv Detail & Related papers (2024-10-05T09:27:52Z)
- Eliciting Problem Specifications via Large Language Models [4.055489363682198]
Large language models (LLMs) can be utilized to map a problem class into a semi-formal specification.
A cognitive system can then use the problem-space specification to solve multiple instances of problems from the problem class.
arXiv Detail & Related papers (2024-05-20T16:19:02Z)
- Improving Socratic Question Generation using Data Augmentation and Preference Optimization [2.1485350418225244]
Large language models (LLMs) can be used to augment human effort by automatically generating Socratic questions for students.
Existing methods that involve prompting these LLMs sometimes produce invalid outputs.
We propose a data augmentation method to enrich existing Socratic questioning datasets with questions that are invalid in specific ways.
Next, we propose a method to optimize open-source LLMs such as Llama 2 to prefer ground-truth questions over generated invalid ones.
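The summary does not specify the preference-optimization objective; one standard choice for training a model to prefer ground-truth questions $y_w$ over generated invalid ones $y_l$ is direct preference optimization (DPO), shown here only as an assumed example:

$$\mathcal{L}_{\text{DPO}}(\theta) = -\,\mathbb{E}_{(x, y_w, y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right],$$

where $\pi_\theta$ is the model being tuned, $\pi_{\text{ref}}$ is a frozen reference model, and $\beta$ is a temperature controlling the strength of the preference.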
arXiv Detail & Related papers (2024-03-01T00:08:20Z)
- DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer [57.04801796205638]
Large Language Models (LLMs) have emerged as dominant tools for various tasks.
However, concerns surrounding data privacy present obstacles due to the tuned prompts' dependency on sensitive private information.
We present Differentially-Private Offsite Prompt Tuning (DP-OPT) to address this challenge.
arXiv Detail & Related papers (2023-11-27T02:01:10Z)
- You Only Live Once: Single-Life Reinforcement Learning [124.1738675154651]
In many real-world situations, the goal might not be to learn a policy that can do the task repeatedly, but simply to perform a new task successfully once in a single trial.
We formalize this problem setting, where an agent must complete a task within a single episode without interventions.
We propose an algorithm, $Q$-weighted adversarial learning (QWALE), which employs a distribution matching strategy.
arXiv Detail & Related papers (2022-10-17T09:00:11Z)
- Knowledge acquisition via interactive Distributed Cognitive skill Modules [0.0]
Human cognitive capacity for problem solving is limited by educational background, skills, experience, and so on.
This work aims to introduce an early stage of a modular approach to procedural skill acquisition and storage via distributed cognitive skill modules.
arXiv Detail & Related papers (2022-10-13T01:41:11Z)
- Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting (LIRF) explicitly handles task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets, and demonstrate that the proposed LIRF strategy yields encouraging results with gratifying generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints does not make a learning problem harder, in the sense that any PAC learnable class is also PAC constrained learnable using a constrained counterpart of the empirical risk minimization rule.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
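For context, constrained learning problems of this kind are typically posed as (generic notation, not necessarily the paper's):

$$P^\star = \min_{f \in \mathcal{F}} \ \mathbb{E}\!\left[\ell_0\big(f(x), y\big)\right] \quad \text{s.t.} \quad \mathbb{E}\!\left[\ell_i\big(f(x), y\big)\right] \le c_i, \quad i = 1, \dots, m,$$

where $\ell_0$ is the primary loss, the $\ell_i$ encode requirements such as fairness or robustness, and the $c_i$ are the allowed constraint levels.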
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.