Adversarial jamming attacks and defense strategies via adaptive deep
reinforcement learning
- URL: http://arxiv.org/abs/2007.06055v1
- Date: Sun, 12 Jul 2020 18:16:00 GMT
- Title: Adversarial jamming attacks and defense strategies via adaptive deep
reinforcement learning
- Authors: Feng Wang, Chen Zhong, M. Cenk Gursoy and Senem Velipasalar
- Abstract summary: In this paper, we consider a victim user that performs DRL-based dynamic channel access, and an attacker that executes DRL-based jamming attacks to disrupt the victim.
Both the victim and attacker are DRL agents and can interact with each other, retrain their models, and adapt to opponents' policies.
We propose three defense strategies to maximize the attacked victim's accuracy and evaluate their performances.
- Score: 12.11027948206573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the applications of deep reinforcement learning (DRL) in wireless
communications grow, the sensitivity of DRL-based wireless communication strategies
against adversarial attacks has started to draw increasing attention. In order
to address such sensitivity and alleviate the resulting security concerns, in
this paper we consider a victim user that performs DRL-based dynamic channel
access, and an attacker that executes DRL-based jamming attacks to disrupt the
victim. Hence, both the victim and attacker are DRL agents and can interact
with each other, retrain their models, and adapt to opponents' policies. In
this setting, we initially develop an adversarial jamming attack policy that
aims at minimizing the accuracy of victim's decision making on dynamic channel
access. Subsequently, we devise three defense strategies against such an
attacker, namely diversified defense with
proportional-integral-derivative (PID) control, diversified defense with an
imitation attacker, and defense via orthogonal policies. We design these
strategies to maximize the attacked victim's accuracy and evaluate their
performances.
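As a purely hypothetical illustration of the "diversified defense with PID control" idea (not the paper's actual implementation, whose details are not given in this abstract), a PID controller can track the victim's recent access accuracy against a target setpoint and translate the control signal into a probability of deviating from the learned DRL policy. All names below (`PID`, `diversified_channel_choice`) and the specific mixing rule are assumptions for the sketch:

```python
import random


class PID:
    """Textbook discrete-time PID controller."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # target access accuracy, e.g. 0.9
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        """Return the control signal for the current accuracy measurement."""
        error = self.setpoint - measurement
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def diversified_channel_choice(policy_channel, n_channels, pid_output, rng):
    """Mix the DRL policy's channel pick with a uniformly random one.

    A larger PID output (accuracy falling below the setpoint, e.g. because
    an adaptive jammer has learned the policy) raises the probability of
    deviating from the policy, making the victim harder to predict.
    """
    p_deviate = min(max(pid_output, 0.0), 1.0)  # clamp to [0, 1]
    if rng.random() < p_deviate:
        return rng.randrange(n_channels)        # diversify: random channel
    return policy_channel                       # follow the DRL policy
```

A usage sketch: after each time slot, feed the victim's measured accuracy into `PID.update` and pass the result to `diversified_channel_choice` alongside the DRL policy's chosen channel. When accuracy sits at the setpoint the controller outputs zero and the policy is followed exactly; as the jammer's success depresses accuracy, the deviation probability grows.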
Related papers
- Less is More: A Stealthy and Efficient Adversarial Attack Method for DRL-based Autonomous Driving Policies [2.9965913883475137]
We present a stealthy and efficient adversarial attack method for DRL-based autonomous driving policies.
We train the adversary to learn the optimal policy for attacking at critical moments without domain knowledge.
Our method achieves more than 90% collision rate within three attacks in most cases.
arXiv Detail & Related papers (2024-12-04T06:11:09Z)
- Optimizing Cyber Defense in Dynamic Active Directories through Reinforcement Learning [10.601458163651582]
This paper addresses the absence of effective edge-blocking ACO strategies in dynamic, real-world networks.
It specifically targets the cybersecurity vulnerabilities of organizational Active Directory (AD) systems.
Unlike the existing literature on edge-blocking defenses, which treats AD systems as static entities, our study recognizes their dynamic nature.
arXiv Detail & Related papers (2024-06-28T01:37:46Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z)
- Optimal Attack and Defense for Reinforcement Learning [11.36770403327493]
In adversarial RL, an external attacker has the power to manipulate the victim agent's interaction with the environment.
We formulate the attacker's problem of designing a stealthy attack that maximizes its own expected reward.
We argue that the optimal defense policy for the victim can be computed as the solution to a Stackelberg game.
arXiv Detail & Related papers (2023-11-30T21:21:47Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Attacking and Defending Deep Reinforcement Learning Policies [3.6985039575807246]
We study robustness of DRL policies to adversarial attacks from the perspective of robust optimization.
We propose a greedy attack algorithm, which tries to minimize the expected return of the policy without interacting with the environment, and a defense algorithm, which performs adversarial training in a max-min form.
arXiv Detail & Related papers (2022-05-16T12:47:54Z)
- Adversarial Reinforcement Learning in Dynamic Channel Access and Power Control [13.619849476923877]
Deep reinforcement learning (DRL) has recently been used to perform efficient resource allocation in wireless communications.
We consider multiple DRL agents that perform both dynamic channel access and power control in wireless interference channels.
We propose an adversarial jamming attack scheme that utilizes a listening phase and significantly degrades the users' sum rate.
arXiv Detail & Related papers (2021-05-12T17:27:21Z)
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
- Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending this cycle where we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)
- Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability in adapting to the surrounding environments.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.