PEaRL: Personalized Privacy of Human-Centric Systems using Early-Exit Reinforcement Learning
- URL: http://arxiv.org/abs/2403.05864v2
- Date: Tue, 12 Nov 2024 23:12:55 GMT
- Title: PEaRL: Personalized Privacy of Human-Centric Systems using Early-Exit Reinforcement Learning
- Authors: Mojtaba Taherisadr, Salma Elmalaki
- Abstract summary: This paper introduces PEaRL, a system designed to enhance privacy preservation by tailoring its approach to individual behavioral patterns and preferences.
On average, across both systems, PEaRL enhances privacy protection by 31%, with a corresponding utility reduction of 24%.
- Abstract: In the evolving landscape of human-centric systems, personalized privacy solutions are becoming increasingly crucial due to the dynamic nature of human interactions. Traditional static privacy models often fail to meet the diverse and changing privacy needs of users. This paper introduces PEaRL, a system designed to enhance privacy preservation by tailoring its approach to individual behavioral patterns and preferences. While incorporating reinforcement learning (RL) for its adaptability, PEaRL primarily focuses on employing an early-exit strategy that dynamically balances privacy protection and system utility. This approach addresses the challenges posed by the variability and evolution of human behavior, which static privacy models struggle to handle effectively. We evaluate PEaRL in two distinct contexts: Smart Home environments and Virtual Reality (VR) Smart Classrooms. The empirical results demonstrate PEaRL's capability to provide a personalized tradeoff between user privacy and application utility, adapting effectively to individual user preferences. On average, across both systems, PEaRL enhances privacy protection by 31%, with a corresponding utility reduction of 24%.
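To make the early-exit idea concrete, the sketch below shows one plausible shape for such a policy. It is a minimal illustration, not the authors' implementation, and all names and thresholds (`EXIT_THRESHOLD`, `GENERIC_ACTION`, the Q-gap confidence proxy) are assumptions for exposition.

```python
import numpy as np

# Hypothetical sketch of an early-exit RL policy (NOT PEaRL's actual code).
# Idea: when the agent's confidence in its personalized action is low, "exit
# early" with a generic, less identifying action; otherwise act personally.
# The exit threshold is the knob trading utility for privacy.

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 16, 4
GENERIC_ACTION = 0      # assumed privacy-preserving default action
EXIT_THRESHOLD = 0.15   # higher => exit more often => more privacy, less utility

Q = np.zeros((N_STATES, N_ACTIONS))

def act(state):
    """Return (action, exited_early) using a top-2 Q-value gap as confidence."""
    q = Q[state]
    confidence = q.max() - np.partition(q, -2)[-2]
    if confidence < EXIT_THRESHOLD:
        return GENERIC_ACTION, True    # early exit: leaks less about preferences
    return int(q.argmax()), False      # personalized action

def update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update."""
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

# Toy rollout in a random environment, purely to show the control flow.
state, exits = 0, 0
for _ in range(1000):
    action, exited = act(state)
    exits += exited
    next_state = int(rng.integers(N_STATES))
    reward = 1.0 if action == next_state % N_ACTIONS else 0.0
    update(state, action, reward, next_state)
    state = next_state
print(f"early exits: {exits}/1000")
```

Raising `EXIT_THRESHOLD` pushes behavior toward the generic action, the same qualitative tradeoff the abstract reports (31% more privacy protection for a 24% utility reduction, on average).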
Related papers
- Preserving Expert-Level Privacy in Offline Reinforcement Learning (arXiv, 2024-11-18)
We propose a consensus-based expert-level differentially private offline RL training approach compatible with any existing offline RL algorithm.
We prove rigorous differential privacy guarantees, while maintaining strong empirical performance.
- Masked Differential Privacy (arXiv, 2024-10-22)
We propose an effective approach called masked differential privacy (DP), which allows for controlling the sensitive regions where differential privacy is applied.
Our method operates selectively on the data, allowing non-sensitive spatio-temporal regions to be defined without DP application, or combining differential privacy with other privacy techniques within data samples.
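As a rough illustration of that selective application (a generic sketch under assumptions, not the paper's mechanism or its guarantees): add calibrated Laplace noise only inside a caller-supplied sensitivity mask and pass the rest of the sample through untouched.

```python
import numpy as np

def masked_laplace(x, mask, epsilon=1.0, sensitivity=1.0):
    """Add Laplace noise only where `mask` is True (toy selective DP).

    `scale = sensitivity / epsilon` is the standard Laplace-mechanism scale;
    regions outside the mask are returned unchanged.
    """
    noisy = x.copy()
    scale = sensitivity / epsilon
    noise = np.random.laplace(0.0, scale, size=x.shape)
    noisy[mask] += noise[mask]
    return noisy

frame = np.random.rand(8, 8)                 # e.g. a small sensor or image frame
sensitive = np.zeros((8, 8), dtype=bool)
sensitive[2:5, 2:5] = True                   # assumed sensitive region
private_frame = masked_laplace(frame, sensitive, epsilon=0.5)
```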
- Scalable Differential Privacy Mechanisms for Real-Time Machine Learning Applications (arXiv, 2024-09-16)
Large language models (LLMs) are increasingly integrated into real-time machine learning applications, where safeguarding user privacy is paramount.
Traditional differential privacy mechanisms often struggle to balance privacy and accuracy, particularly in fast-changing environments with continuously flowing data.
We introduce Scalable Differential Privacy (SDP), a framework tailored for real-time machine learning that emphasizes both robust privacy guarantees and enhanced model performance.
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning (arXiv, 2024-06-20)
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
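The "privacy unit" distinction can be made concrete with a small sketch (generic, not this paper's method): under user-level DP, it is each user's total contribution that gets norm-clipped before aggregation, so one user's entire data can move the result by at most the clipping norm.

```python
import numpy as np

def user_level_clipped_mean(grads_by_user, clip_norm=1.0):
    """Clip each *user's* summed gradient, not each example's, so the privacy
    unit is the user. Noise would then be added as in DP-SGD; omitted here."""
    clipped = []
    for g in grads_by_user.values():
        total = g.sum(axis=0)                          # aggregate the user's examples
        norm = np.linalg.norm(total)
        clipped.append(total * min(1.0, clip_norm / max(norm, 1e-12)))
    return np.mean(clipped, axis=0)

grads_by_user = {u: np.random.randn(5, 10) for u in range(8)}  # 8 users, 5 examples each
aggregate = user_level_clipped_mean(grads_by_user)
```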
- AdvCloak: Customized Adversarial Cloak for Privacy Protection (arXiv, 2023-12-22)
We propose AdvCloak, an innovative framework for privacy protection using generative models.
AdvCloak is designed to automatically customize class-wise adversarial masks that can maintain superior image-level naturalness.
We show that AdvCloak outperforms existing state-of-the-art methods in terms of efficiency and effectiveness.
- Echo of Neighbors: Privacy Amplification for Personalized Private Federated Learning with Shuffle Model (arXiv, 2023-04-11)
Federated Learning, as a popular paradigm for collaborative training, is vulnerable to privacy attacks.
This work strengthens model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
To the best of our knowledge, the impact of shuffling on personalized local privacy is considered for the first time.
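For readers unfamiliar with the shuffle model, the generic pipeline looks roughly like the sketch below (randomized response plus a shuffler); the paper's personalized mechanism and its amplification bounds are more involved.

```python
import numpy as np

def randomized_response(bit, epsilon):
    """Local DP: report the true bit with probability e^eps / (e^eps + 1)."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if np.random.rand() < p_true else 1 - bit

def shuffle_and_collect(bits, epsilon=0.5):
    """Each user privatizes locally; a shuffler then permutes the reports,
    severing the user-report link. This shuffling amplifies the local epsilon
    into a smaller central epsilon (exact bounds are mechanism-specific)."""
    reports = [randomized_response(int(b), epsilon) for b in bits]
    np.random.shuffle(reports)
    return reports

true_bits = np.random.randint(0, 2, size=100)
reports = shuffle_and_collect(true_bits)
```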
- adaPARL: Adaptive Privacy-Aware Reinforcement Learning for Sequential-Decision Making Human-in-the-Loop Systems (arXiv, 2023-03-07)
Reinforcement learning (RL) presents numerous benefits compared to rule-based approaches in various applications.
We propose adaPARL, an adaptive approach for privacy-aware RL, especially for human-in-the-loop IoT systems.
AdaPARL provides a personalized privacy-utility trade-off depending on human behavior and preference.
- Privacy-Preserving Reinforcement Learning Beyond Expectation (arXiv, 2022-03-18)
Cyber and cyber-physical systems equipped with machine learning algorithms, such as autonomous cars, share environments with humans.
It is important to align system (or agent) behaviors with the preferences of one or more human users.
We consider the case when an agent has to learn behaviors in an unknown environment.
- Robustness Threats of Differential Privacy (arXiv, 2020-12-14)
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
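For reference, those two ingredients combine as in the standard DP-SGD recipe, sketched generically below (not tied to this paper's experimental setup).

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Clip each example's gradient to `clip_norm`, average, then add Gaussian
    noise scaled to the clipping norm: the two ingredients studied above."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + np.random.normal(0.0, sigma, size=mean_grad.shape)

grads = np.random.randn(32, 10)   # batch of 32 per-example gradients, 10 parameters
noisy_update = dp_sgd_step(grads)
```

Both operations distort the learning signal, which is precisely the kind of side effect on robustness that the paper investigates.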
- Privacy and Robustness in Federated Learning: Attacks and Defenses (arXiv, 2020-12-07)
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
- Private Reinforcement Learning with PAC and Regret Guarantees (arXiv, 2020-09-18)
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds and enjoys a JDP guarantee.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.