Multi-Task Federated Reinforcement Learning with Adversaries
- URL: http://arxiv.org/abs/2103.06473v1
- Date: Thu, 11 Mar 2021 05:39:52 GMT
- Title: Multi-Task Federated Reinforcement Learning with Adversaries
- Authors: Aqeel Anwar, Arijit Raychowdhury
- Abstract summary: Reinforcement learning algorithms, like other machine learning algorithms, face a serious threat from adversaries.
In this paper, we analyze Multi-task Federated Reinforcement Learning algorithms.
We propose an adaptive attack method with better attack performance.
- Score: 2.6080102941802106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning algorithms, just like any other machine
learning algorithm, face a serious threat from adversaries. The adversaries
can manipulate the learning algorithm, resulting in non-optimal policies. In
this paper, we analyze Multi-task Federated Reinforcement Learning
algorithms, where multiple collaborative agents in various environments try
to maximize the sum of discounted returns in the presence of adversarial
agents. We argue that common attack methods are not guaranteed to carry out a
successful attack on Multi-task Federated Reinforcement Learning, and we
propose an adaptive attack method with better attack performance.
Furthermore, we modify the conventional federated reinforcement learning
algorithm to address adversaries; the modified algorithm works equally well
with and without them. Experiments on several small to mid-size reinforcement
learning problems show that the proposed attack method outperforms other
general attack methods, and that the proposed modification to the federated
reinforcement learning algorithm achieves near-optimal policies in the
presence of adversarial agents.
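As context for the setup above, "maximize the sum of discounted returns" across N collaborating agents can be written as a single shared objective. This is the standard multi-task formulation; the notation is assumed for illustration rather than copied from the paper:

```latex
% Shared objective over N agents/tasks (notation assumed): each agent i
% acts in its own environment under the common policy \pi_\theta.
\max_{\theta}\; \frac{1}{N} \sum_{i=1}^{N}
  \mathbb{E}_{\pi_\theta}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, r_{t}^{(i)} \right]
```

To make the federated loop and the adversary problem concrete, the sketch below runs one federated round in which a similarity-based filter stands in for adversary mitigation. All names (local_update, robust_aggregate, federated_round), the gradient-ascent local step, and the median/cosine filter are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def local_update(theta, grad_fn, lr=1e-2):
    """One local policy-gradient step on an agent's own task; grad_fn
    is a hypothetical estimator of the gradient of that agent's
    discounted return at theta."""
    return theta + lr * grad_fn(theta)

def robust_aggregate(updates, keep_frac=0.7):
    """Average only the local parameter vectors most aligned with the
    coordinate-wise median -- a simple stand-in for filtering
    adversarial agents before aggregation."""
    U = np.stack(updates)                  # (n_agents, dim)
    ref = np.median(U, axis=0)             # robust reference vector
    sims = U @ ref / (np.linalg.norm(U, axis=1) * np.linalg.norm(ref) + 1e-12)
    k = max(1, int(keep_frac * len(updates)))
    keep = np.argsort(sims)[-k:]           # indices of most-aligned agents
    return U[keep].mean(axis=0)

def federated_round(theta_global, agent_grad_fns):
    """Broadcast the global policy, collect local updates, aggregate."""
    updates = [local_update(theta_global, g) for g in agent_grad_fns]
    return robust_aggregate(updates)
```

Replacing robust_aggregate with a plain mean recovers conventional federated averaging, which a single adversarial agent can skew arbitrarily; the filter bounds the influence of outlier updates at the cost of discarding some benign ones, mirroring the with/without-adversaries trade-off described in the abstract.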
Related papers
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z) - Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z) - PGN: A perturbation generation network against deep reinforcement learning [8.546103661706391]
We propose a novel generative model for creating effective adversarial examples to attack the agent.
Considering the specificity of deep reinforcement learning, we propose the action consistency ratio as a measure of stealthiness.
arXiv Detail & Related papers (2023-12-20T10:40:41Z) - Attacking and Defending Deep Reinforcement Learning Policies [3.6985039575807246]
We study robustness of DRL policies to adversarial attacks from the perspective of robust optimization.
We propose a greedy attack algorithm, which tries to minimize the expected return of the policy without interacting with the environment, and a defense algorithm, which performs adversarial training in a max-min form.
arXiv Detail & Related papers (2022-05-16T12:47:54Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Improving Safety in Deep Reinforcement Learning using Unsupervised Action Planning [4.2955354157580325]
One of the key challenges to deep reinforcement learning (deep RL) is to ensure safety at both training and testing phases.
We propose a novel technique of unsupervised action planning to improve the safety of on-policy reinforcement learning algorithms.
Our results show that the proposed safety RL algorithm can achieve higher rewards compared with multiple baselines in both discrete and continuous control problems.
arXiv Detail & Related papers (2021-09-29T10:26:29Z) - Robust Federated Learning with Attack-Adaptive Aggregation [45.60981228410952]
Federated learning is vulnerable to various attacks, such as model poisoning and backdoor attacks.
We propose an attack-adaptive aggregation strategy to defend against various attacks for robust learning.
arXiv Detail & Related papers (2021-02-10T04:23:23Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z) - Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension [96.62963688510035]
Reading comprehension models often overfit to nuances of training datasets and fail at adversarial evaluation.
We present several effective adversaries and automated data augmentation policy search methods with the goal of making reading comprehension models more robust to adversarial evaluation.
arXiv Detail & Related papers (2020-04-13T17:20:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.