Enhancing the Robustness of QMIX against State-adversarial Attacks
- URL: http://arxiv.org/abs/2307.00907v1
- Date: Mon, 3 Jul 2023 10:10:34 GMT
- Title: Enhancing the Robustness of QMIX against State-adversarial Attacks
- Authors: Weiran Guo, Guanjun Liu, Ziyuan Zhou, Ling Wang, Jiacun Wang
- Abstract summary: We discuss four techniques for improving the robustness of SARL algorithms and extend them to multi-agent scenarios.
We train models under a variety of attacks in this research.
We then cross-evaluate the trained models, subjecting each model to attacks other than the one it encountered during training.
- Score: 6.627954554805906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning (DRL) performance is generally degraded by
state-adversarial attacks, i.e., perturbations applied to an agent's
observations. Most recent research has concentrated on making single-agent
reinforcement learning (SARL) algorithms robust against state-adversarial
attacks, but there has been little work on robust multi-agent reinforcement
learning. Using QMIX, a popular cooperative multi-agent reinforcement learning
algorithm, as an example, we discuss four techniques for improving the
robustness of SARL algorithms and extend them to multi-agent scenarios. To
increase the robustness of multi-agent reinforcement learning (MARL)
algorithms, we train models under a variety of attacks and then cross-evaluate
them, subjecting each trained model to attacks other than the one it
encountered during training. In this way, we organize and summarize techniques
for enhancing robustness when used with MARL.
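To make the threat model concrete, here is a minimal sketch of one common state-adversarial attack, an FGSM-style perturbation of a single agent's observation. The paper does not specify this exact attack; the toy Q-network, `epsilon`, and the greedy-value objective are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fgsm_observation_attack(q_net: nn.Module, obs: torch.Tensor,
                            epsilon: float) -> torch.Tensor:
    """One-step FGSM-style state-adversarial perturbation: nudge the
    observation in the direction that most decreases the Q-value of
    the agent's greedy action."""
    obs = obs.clone().detach().requires_grad_(True)
    greedy_value = q_net(obs).max(dim=-1).values.sum()
    greedy_value.backward()
    # Step against the gradient, staying inside an L-infinity ball.
    return (obs - epsilon * obs.grad.sign()).detach()

# Hypothetical usage with a toy per-agent Q-network.
q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
clean_obs = torch.randn(1, 8)
adv_obs = fgsm_observation_attack(q_net, clean_obs, epsilon=0.05)
```

During adversarial training, observations perturbed this way would replace the clean ones each agent feeds into its individual Q-network.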
Related papers
- Mitigating Adversarial Perturbations for Deep Reinforcement Learning via Vector Quantization [18.56608399174564]
Well-performing reinforcement learning (RL) agents often lack resilience against adversarial perturbations during deployment.
This highlights the importance of building a robust agent before deploying it in the real world.
In this work, we study an input transformation-based defense for RL.
arXiv Detail & Related papers (2024-10-04T12:41:54Z)
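A minimal sketch of the input-transformation idea, assuming a vector-quantization defense that snaps each observation to its nearest codebook entry; in practice the codebook would be learned from clean observations (e.g., with k-means), whereas here it is random for illustration.

```python
import numpy as np

class VectorQuantizer:
    """Toy input-transformation defense: snap each observation to the
    nearest codebook vector, discarding small perturbations along with
    other fine detail."""

    def __init__(self, codebook: np.ndarray):
        self.codebook = codebook  # shape (K, obs_dim)

    def transform(self, obs: np.ndarray) -> np.ndarray:
        dists = np.linalg.norm(self.codebook - obs, axis=1)
        return self.codebook[np.argmin(dists)]

rng = np.random.default_rng(0)
vq = VectorQuantizer(rng.normal(size=(32, 8)))  # random stand-in codebook
clean = rng.normal(size=8)
perturbed = clean + 0.01 * rng.normal(size=8)
# A small perturbation usually maps back to the same codebook entry.
print(np.array_equal(vq.transform(clean), vq.transform(perturbed)))
```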
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
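The summary does not spell out C-AdvUL's two losses, so the sketch below only illustrates what a continuous attack means in this context: optimizing a bounded additive perturbation in embedding space rather than searching over discrete tokens. The toy model and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def continuous_embedding_attack(model, embeds, labels,
                                epsilon=0.05, steps=5, lr=0.02):
    """Optimize a bounded additive perturbation on token embeddings
    (a continuous attack) instead of searching over discrete tokens."""
    delta = torch.zeros_like(embeds, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(embeds + delta), labels)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # ascend the loss
            delta.clamp_(-epsilon, epsilon)   # stay in the L-inf ball
            delta.grad.zero_()
    return (embeds + delta).detach()

# Toy stand-in for an LM: flatten a (batch, seq, dim) embedding tensor.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(4 * 16, 2))
embeds = torch.randn(1, 4, 16)
labels = torch.tensor([1])
adv_embeds = continuous_embedding_attack(model, embeds, labels)
```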
- Enhancing ML-Based DoS Attack Detection Through Combinatorial Fusion Analysis [2.7973964073307265]
Mitigating Denial-of-Service (DoS) attacks is vital for online service security and availability.
We suggest an innovative method, fusion, which combines multiple ML models using advanced algorithms.
Our findings emphasize the potential of this approach to improve DoS attack detection and contribute to stronger defense mechanisms.
arXiv Detail & Related papers (2023-10-02T02:21:48Z)
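A simplified sketch of score/rank fusion in the spirit of combinatorial fusion analysis: normalize each detector's scores, derive rank-based scores, and average across detectors. The paper's exact fusion algorithm may differ; the per-flow scores below are hypothetical.

```python
import numpy as np

def rank_score_fusion(score_lists):
    """Average each detector's normalized scores with its rank-derived
    scores, then fuse across detectors."""
    n = len(score_lists[0])
    fused = np.zeros(n)
    for scores in score_lists:
        s = np.asarray(scores, dtype=float)
        s_norm = (s - s.min()) / (s.max() - s.min() + 1e-12)
        ranks = np.argsort(np.argsort(s)) / (n - 1)  # higher score, higher rank
        fused += 0.5 * (s_norm + ranks)
    return fused / len(score_lists)

# Hypothetical per-flow attack-likelihood scores from three detectors.
model_scores = [np.array([0.9, 0.2, 0.6]),
                np.array([0.7, 0.1, 0.8]),
                np.array([0.8, 0.3, 0.5])]
print(rank_score_fusion(model_scores))
```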
- MA2CL: Masked Attentive Contrastive Learning for Multi-Agent Reinforcement Learning [128.19212716007794]
We propose an effective framework called Multi-Agent Masked Attentive Contrastive Learning (MA2CL).
MA2CL encourages learning representation to be both temporal and agent-level predictive by reconstructing the masked agent observation in latent space.
Our method significantly improves the performance and sample efficiency of different MARL algorithms and outperforms other methods in various vision-based and state-based scenarios.
arXiv Detail & Related papers (2023-06-03T05:32:19Z)
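A minimal sketch of the masked-reconstruction idea: mask one agent's latent, reconstruct it from teammates with attention, and apply a contrastive loss against the true latent. Network shapes and dimensions are illustrative assumptions, not MA2CL's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_agents, obs_dim, latent_dim = 3, 10, 16
encoder = nn.Linear(obs_dim, latent_dim)
# Attention over teammates stands in for the "attentive" reconstruction.
attention = nn.MultiheadAttention(latent_dim, num_heads=2, batch_first=True)

obs = torch.randn(1, n_agents, obs_dim)      # one team-step of observations
latents = encoder(obs)                        # (1, n_agents, latent_dim)
masked = latents.clone()
masked[:, 0] = 0.0                            # mask agent 0 in latent space
recon, _ = attention(masked, masked, masked)  # reconstruct from teammates

# Contrastive objective: the reconstruction of agent 0 should be most
# similar to agent 0's true latent, with other agents as negatives.
logits = recon[:, 0] @ latents[0].T           # (1, n_agents) similarities
loss = F.cross_entropy(logits, torch.tensor([0]))
loss.backward()
```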
- Sparse Adversarial Attack in Multi-agent Reinforcement Learning [18.876664289847422]
We propose a sparse adversarial attack on cooperative MARL (cMARL) systems.
Experiments show that policies trained by current cMARL algorithms perform poorly when only one or a few agents are attacked.
arXiv Detail & Related papers (2022-05-19T07:46:26Z)
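A minimal sketch of the sparsity aspect, assuming the attacker may only touch a chosen subset of agents; random noise stands in for the optimized perturbation the paper would compute.

```python
import numpy as np

def sparse_attack(observations, victim_ids, epsilon, rng):
    """Perturb only the chosen agents' observations and leave the rest
    of the team untouched, which is the sparse part of the attack."""
    attacked = observations.copy()
    for i in victim_ids:
        attacked[i] += rng.uniform(-epsilon, epsilon, size=observations[i].shape)
    return attacked

rng = np.random.default_rng(1)
team_obs = rng.normal(size=(4, 8))   # 4 agents, 8-dimensional observations
adv_obs = sparse_attack(team_obs, victim_ids=[2], epsilon=0.1, rng=rng)
print(np.abs(adv_obs - team_obs).max(axis=1))  # only agent 2 changed
```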
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
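A sketch of the learned-optimizer idea: a small recurrent network maps the current input gradient to the next perturbation step, replacing a hand-designed rule such as PGD's sign update. Shapes are assumptions, and the meta-training of the optimizer itself is omitted.

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Recurrent network that proposes the next perturbation update from
    the current gradient, in place of a hand-designed rule."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.rnn = nn.GRUCell(dim, hidden)
        self.head = nn.Linear(hidden, dim)

    def forward(self, grad, h):
        h = self.rnn(grad, h)
        return self.head(h), h

dim = 8
opt_net = LearnedOptimizer(dim)
victim = nn.Linear(dim, 2)              # stand-in model under attack
x = torch.randn(1, dim)
delta = torch.zeros(1, dim, requires_grad=True)
h = torch.zeros(1, 32)
for _ in range(3):                      # unrolled attack steps
    loss = victim(x + delta)[0, 0]      # toy attack objective
    grad, = torch.autograd.grad(loss, delta)
    update, h = opt_net(grad, h)        # learned step (meta-training omitted)
    delta = (delta + 0.01 * update).detach().requires_grad_(True)
```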
- Softmax with Regularization: Better Value Estimation in Multi-Agent Reinforcement Learning [72.28520951105207]
Overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning.
We propose a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline.
We show that our method provides a consistent performance improvement on a set of challenging StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2021-03-22T14:18:39Z)
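Judging from the title, the baseline is built on a softmax operator over action values; the sketch below shows the basic softmax (Boltzmann) backup and why it is a softer target than the hard max. Temperatures are illustrative.

```python
import numpy as np

def softmax_value(q_values, tau=1.0):
    """Softmax (Boltzmann) backup: a temperature-weighted average of the
    action values, a softer target than the overestimation-prone max."""
    q = np.asarray(q_values, dtype=float)
    w = np.exp((q - q.max()) / tau)   # subtract max for numerical stability
    w /= w.sum()
    return float(w @ q)

q = [1.0, 1.2, 3.0]
print(max(q))                 # hard max target: 3.0
print(softmax_value(q, 5.0))  # high temperature: close to the mean
print(softmax_value(q, 0.1))  # low temperature: approaches the max
```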
- Adversarial attacks in consensus-based multi-agent reinforcement learning [0.0]
We show that an adversarial agent can persuade all the other agents in the network to implement policies that optimize an objective that it desires.
In this sense, the standard consensus-based MARL algorithms are fragile to attacks.
arXiv Detail & Related papers (2021-03-11T21:44:18Z)
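A toy consensus-averaging sketch showing the fragility: one adversarial agent that never updates its own estimate drags every honest agent to the value it desires. The uniform mixing matrix and target value are illustrative assumptions.

```python
import numpy as np

n_agents, adversary, target = 5, 0, 10.0
values = np.zeros(n_agents)        # honest agents start at 0
values[adversary] = target
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # uniform mixing matrix

for _ in range(50):
    values = W @ values            # consensus averaging step
    values[adversary] = target     # the adversary refuses to move

print(values)  # every honest agent has converged to the adversary's target
```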
- Robust Reinforcement Learning on State Observations with Learned Optimal Adversary [86.0846119254031]
We study the robustness of reinforcement learning with adversarially perturbed state observations.
With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found.
For DRL settings, this leads to a novel empirical adversarial attack on RL agents via a learned adversary that is much stronger than previous ones.
arXiv Detail & Related papers (2021-01-21T05:38:52Z)
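A minimal sketch of a learned adversary: a network maps the true observation to a bounded perturbation and is trained against a fixed victim policy. The single gradient step below stands in for the RL training the paper uses; all networks and shapes are assumptions.

```python
import torch
import torch.nn as nn

obs_dim, n_actions, epsilon = 8, 4, 0.1
victim = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                       nn.Linear(32, n_actions))       # fixed victim policy
adversary = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(),
                          nn.Linear(32, obs_dim), nn.Tanh())
adv_opt = torch.optim.Adam(adversary.parameters(), lr=1e-3)

obs = torch.randn(16, obs_dim)
# Final Tanh keeps the perturbation inside an epsilon L-infinity ball.
perturbed = obs + epsilon * adversary(obs)
# One illustrative update: push down the victim's best action value.
loss = victim(perturbed).max(dim=-1).values.mean()
adv_opt.zero_grad()
loss.backward()
adv_opt.step()
```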
- Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.