PAPER-HILT: Personalized and Adaptive Privacy-Aware Early-Exit for
Reinforcement Learning in Human-in-the-Loop Systems
- URL: http://arxiv.org/abs/2403.05864v1
- Date: Sat, 9 Mar 2024 10:24:12 GMT
- Title: PAPER-HILT: Personalized and Adaptive Privacy-Aware Early-Exit for
Reinforcement Learning in Human-in-the-Loop Systems
- Authors: Mojtaba Taherisadr, Salma Elmalaki
- Abstract summary: Reinforcement Learning (RL) has increasingly become a preferred method over traditional rule-based systems in diverse human-in-the-loop (HITL) applications.
This paper develops an innovative, adaptive RL strategy that exploits an early-exit approach designed explicitly for privacy preservation in HITL environments.
- Score: 0.6282068591820944
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement Learning (RL) has increasingly become a preferred method over
traditional rule-based systems in diverse human-in-the-loop (HITL) applications
due to its adaptability to the dynamic nature of human interactions. However,
integrating RL in such settings raises significant privacy concerns, as it
might inadvertently expose sensitive user information. Addressing this, our
paper focuses on developing PAPER-HILT, an innovative, adaptive RL strategy
that exploits an early-exit approach designed explicitly for privacy
preservation in HITL environments. This approach dynamically adjusts the
tradeoff between privacy protection and system utility, tailoring its operation
to individual behavioral patterns and preferences. We mainly highlight the
challenge of dealing with the variable and evolving nature of human behavior,
which renders static privacy models ineffective. PAPER-HILT's effectiveness is
evaluated through its application in two distinct contexts: Smart Home
environments and Virtual Reality (VR) Smart Classrooms. The empirical results
demonstrate PAPER-HILT's capability to provide a personalized equilibrium
between user privacy and application utility, adapting effectively to
individual user needs and preferences. On average for both experiments, utility
(performance) drops by 24%, and privacy (state prediction) improves by 31%.
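The abstract describes the early-exit mechanism only at a high level, and the paper's actual algorithm is not reproduced on this page. As a rough, hypothetical Python sketch of the general idea (a per-user switch between a utility-oriented personalized policy and a privacy-oriented fallback), assuming an external risk estimator that scores how predictable the user's sensitive state would become; the names `EarlyExitPolicy`, `risk_estimator`, and `risk_budget`, as well as the toy smart-home actions, are illustrative and not taken from the paper:

```python
import random

class EarlyExitPolicy:
    """Hypothetical early-exit wrapper (not the paper's actual algorithm):
    use the personalized, utility-oriented policy only while the estimated
    privacy risk stays under a per-user budget; otherwise exit early to a
    generic, less state-revealing fallback policy."""

    def __init__(self, personalized_policy, generic_policy, risk_estimator, risk_budget=0.6):
        self.personalized_policy = personalized_policy  # adapts to the individual user
        self.generic_policy = generic_policy            # privacy-preserving fallback
        self.risk_estimator = risk_estimator            # state -> estimated leakage in [0, 1]
        self.risk_budget = risk_budget                  # per-user threshold (illustrative)

    def act(self, state):
        # Early exit: stop using the personalized policy when acting on it
        # would make the user's sensitive state too easy to predict.
        if self.risk_estimator(state) > self.risk_budget:
            return self.generic_policy(state)
        return self.personalized_policy(state)


# Toy smart-home usage; the risk score is a random stand-in for an
# adversary's state-prediction accuracy.
policy = EarlyExitPolicy(
    personalized_policy=lambda s: "adapt_hvac_to_occupant",
    generic_policy=lambda s: "run_default_schedule",
    risk_estimator=lambda s: random.random(),
)
print(policy.act({"room": "bedroom", "hour": 22}))
```

The fixed `risk_budget` here merely stands in for the personalized, adaptive privacy-utility tradeoff the abstract describes.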
Related papers
- Advancing Personalized Federated Learning: Integrative Approaches with AI for Enhanced Privacy and Customization [0.0]
This paper proposes a novel approach that enhances PFL with cutting-edge AI techniques.
We present a model that boosts the performance of individual client models and ensures robust privacy-preserving mechanisms.
This work paves the way for a new era of truly personalized and privacy-conscious AI systems.
arXiv Detail & Related papers (2025-01-30T07:03:29Z)
- Privacy-Preserving Personalized Federated Prompt Learning for Multimodal Large Language Models [11.747329476179223]
We propose a Differentially Private Federated Prompt Learning (DP-FPL) approach to tackle the challenge of balancing personalization and generalization.
Our approach mitigates the impact of privacy noise on the model performance while balancing the tradeoff between personalization and generalization.
arXiv Detail & Related papers (2025-01-23T18:34:09Z)
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Scalable Differential Privacy Mechanisms for Real-Time Machine Learning Applications [0.0]
Large language models (LLMs) are increasingly integrated into real-time machine learning applications, where safeguarding user privacy is paramount.
Traditional differential privacy mechanisms often struggle to balance privacy and accuracy, particularly in fast-changing environments with continuously flowing data.
We introduce Scalable Differential Privacy (SDP), a framework tailored for real-time machine learning that emphasizes both robust privacy guarantees and enhanced model performance.
arXiv Detail & Related papers (2024-09-16T20:52:04Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- AdvCloak: Customized Adversarial Cloak for Privacy Protection [47.42005175670807]
We propose AdvCloak, an innovative framework for privacy protection using generative models.
AdvCloak is designed to automatically customize class-wise adversarial masks that can maintain superior image-level naturalness.
We show that AdvCloak outperforms existing state-of-the-art methods in terms of efficiency and effectiveness.
arXiv Detail & Related papers (2023-12-22T03:18:04Z)
- adaPARL: Adaptive Privacy-Aware Reinforcement Learning for
Sequential-Decision Making Human-in-the-Loop Systems [0.5414308305392761]
Reinforcement learning (RL) presents numerous benefits compared to rule-based approaches in various applications.
We propose adaPARL, an adaptive approach for privacy-aware RL, especially for human-in-the-loop IoT systems.
adaPARL provides a personalized privacy-utility trade-off depending on human behavior and preference.
arXiv Detail & Related papers (2023-03-07T21:55:22Z)
- Privacy-Preserving Reinforcement Learning Beyond Expectation [6.495883501989546]
Cyber and cyber-physical systems equipped with machine learning algorithms such as autonomous cars share environments with humans.
It is important to align system (or agent) behaviors with the preferences of one or more human users.
We consider the case when an agent has to learn behaviors in an unknown environment.
arXiv Detail & Related papers (2022-03-18T21:28:29Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model (a minimal sketch of these two ingredients appears after this list).
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
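As referenced in the "Robustness Threats of Differential Privacy" entry above, the two standard ingredients of differentially private training are per-example gradient clipping and noise addition. A minimal NumPy sketch of one DP-SGD-style update follows; it is not taken from that paper, and the parameter values (clipping norm, noise multiplier, learning rate) are illustrative assumptions only:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """Illustrative DP-SGD update: clip each example's gradient to clip_norm,
    average the clipped gradients, then add Gaussian noise calibrated to the
    clipping bound before taking a gradient step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # gradient clipping
    avg_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noisy_grad = avg_grad + np.random.normal(0.0, sigma, size=avg_grad.shape)  # noise addition
    return params - lr * noisy_grad

# Toy usage: 8 random per-example gradients for a 3-parameter model.
params = np.zeros(3)
grads = [np.random.randn(3) for _ in range(8)]
print(dp_sgd_step(params, grads))
```

The clipping bound limits each example's influence on the update, and the noise scale determines the privacy guarantee; the cited paper studies how these same ingredients affect model robustness.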