Local Environment Poisoning Attacks on Federated Reinforcement Learning
- URL: http://arxiv.org/abs/2303.02725v4
- Date: Thu, 4 Jan 2024 23:44:12 GMT
- Title: Local Environment Poisoning Attacks on Federated Reinforcement Learning
- Authors: Evelyn Ma, Praneet Rathi, and S. Rasoul Etesami
- Abstract summary: Federated learning (FL) has become a popular tool for solving traditional Reinforcement Learning (RL) tasks.
However, the federated mechanism exposes the system to poisoning by malicious agents that can mislead the trained policy.
We propose a general framework to characterize FRL poisoning as an optimization problem and design a poisoning protocol that can be applied to policy-based FRL.
- Score: 1.5020330976600738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has become a popular tool for solving traditional
Reinforcement Learning (RL) tasks. The multi-agent structure addresses the
major concern of data hunger in traditional RL, while the federated mechanism
protects the data privacy of individual agents. However, the federated
mechanism also exposes the system to poisoning by malicious agents that can
mislead the trained policy. Despite the advantages brought by FL, the
vulnerability of Federated Reinforcement Learning (FRL) has not been
well studied. In this work, we propose a general framework to
characterize FRL poisoning as an optimization problem and design a poisoning
protocol that can be applied to policy-based FRL. Our framework can also be
extended to FRL with actor-critic as a local RL algorithm by training a pair of
private and public critics. We provably show that our method can strictly hurt
the global objective. We verify our poisoning effectiveness by conducting
extensive experiments targeting mainstream RL algorithms across various
OpenAI Gym environments covering a wide range of difficulty levels. Within
these experiments, we compare clean and baseline poisoning methods against our
proposed framework. The results show that the proposed framework is successful
in poisoning FRL systems and reducing performance across various environments
and does so more effectively than baseline methods. Our work provides new
insights into the vulnerability of FL in RL training and poses new challenges
for designing robust FRL algorithms.
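The abstract describes the attack only at a high level, so the sketch below is a rough illustration of the setting rather than the paper's protocol: a FedAvg-style policy-gradient loop in which one participant poisons its local experience (here, simply by flipping the rewards it observes) before computing its update. Every name and detail in this snippet (the toy bandit environment, the reward-flip attack, `reinforce_update`, `federated_round`) is a hypothetical stand-in; the paper instead characterizes the poisoning as an optimization problem over the malicious agents' local environments.

```python
# Illustrative sketch only (not the paper's method): one malicious agent in a
# FedAvg-style policy-gradient loop poisons its local rewards before updating.
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 4
TRUE_REWARDS = np.array([0.1, 0.2, 0.9, 0.3])  # action 2 is optimal


def sample_episode(theta, poison=False):
    """One-step 'episode' in a toy bandit environment (stand-in for an MDP)."""
    probs = np.exp(theta) / np.exp(theta).sum()   # softmax policy
    a = rng.choice(N_ACTIONS, p=probs)
    r = TRUE_REWARDS[a] + 0.05 * rng.standard_normal()
    if poison:
        r = -r  # naive local environment poisoning: flip the observed reward
    return a, r, probs


def reinforce_update(theta, lr=0.5, episodes=64, poison=False):
    """Local REINFORCE update on (possibly poisoned) local experience."""
    grad = np.zeros_like(theta)
    for _ in range(episodes):
        a, r, probs = sample_episode(theta, poison)
        grad += r * (np.eye(N_ACTIONS)[a] - probs)  # score-function gradient
    return theta + lr * grad / episodes


def federated_round(global_theta, n_agents=5, n_malicious=1):
    """FedAvg-style round: agents train locally, server averages parameters."""
    local_params = [
        reinforce_update(global_theta.copy(), poison=(i < n_malicious))
        for i in range(n_agents)
    ]
    return np.mean(local_params, axis=0)


theta = np.zeros(N_ACTIONS)
for _ in range(50):
    theta = federated_round(theta)

probs = np.exp(theta) / np.exp(theta).sum()
print("final policy:", np.round(probs, 3))
```

Running the loop with `n_malicious=0` and then `n_malicious=1` shows the qualitative effect the paper studies more rigorously: in this toy setting, even a single poisoned participant slows and skews how much probability mass the aggregated policy places on the optimal action.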
Related papers
- ReRoGCRL: Representation-based Robustness in Goal-Conditioned Reinforcement Learning [29.868059421372244]
Goal-Conditioned Reinforcement Learning (GCRL) has gained attention, but its algorithmic robustness against adversarial perturbations remains unexplored.
We first propose the Semi-Contrastive Representation attack, inspired by the adversarial contrastive attack.
We then introduce Adversarial Representation Tactics, which combine Semi-Contrastive Adversarial Augmentation with a Sensitivity-Aware Regularizer.
arXiv Detail & Related papers (2023-12-12T16:05:55Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning [78.31888150539258]
Reinforcement learning (RL) agents have long sought to approach the efficiency of human learning.
Prior studies in RL have incorporated external knowledge policies to help agents improve sample efficiency.
We present Knowledge-Grounded RL (KGRL), an RL paradigm fusing multiple knowledge policies and aiming for human-like efficiency and flexibility.
arXiv Detail & Related papers (2022-10-07T17:56:57Z)
- FIRE: A Failure-Adaptive Reinforcement Learning Framework for Edge Computing Migrations [52.85536740465277]
FIRE is a framework that adapts to rare events by training an RL policy in an edge computing digital twin environment.
We propose ImRE, an importance sampling-based Q-learning algorithm, which samples rare events proportionally to their impact on the value function.
We show that FIRE reduces costs compared to vanilla RL and the greedy baseline in the event of failures.
arXiv Detail & Related papers (2022-09-28T19:49:39Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resilience to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee [25.555844784263236]
We propose the first Federated Reinforcement Learning framework that remains tolerant when fewer than half of the participating agents suffer random system failures or act as adversarial attackers.
All theoretical results are empirically verified on various RL benchmark tasks.
arXiv Detail & Related papers (2021-10-26T23:01:22Z)
- Federated Reinforcement Learning: Techniques, Applications, and Open Challenges [4.749929332500373]
Federated Reinforcement Learning (FRL) is an emerging and promising field in Reinforcement Learning (RL).
FRL algorithms can be divided into two categories, i.e., Horizontal Federated Reinforcement Learning (HFRL) and Vertical Federated Reinforcement Learning (VFRL).
arXiv Detail & Related papers (2021-08-26T16:22:49Z)
- Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z)
- Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics [23.014304618646598]
Poisoning attacks on Reinforcement Learning (RL) systems can exploit the vulnerabilities of RL algorithms and cause the learning to fail.
We build a generic poisoning framework for online RL via a comprehensive investigation of heterogeneous poisoning models in RL.
We propose a strategic poisoning algorithm called Vulnerability-Aware Adversarial Critic Poison (VA2C-P), which works for most policy-based deep RL agents.
arXiv Detail & Related papers (2020-09-02T01:43:30Z)
- Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences arising from its use.