Practical Adversarial Attacks Against AI-Driven Power Allocation in a
Distributed MIMO Network
- URL: http://arxiv.org/abs/2301.09305v1
- Date: Mon, 23 Jan 2023 07:51:25 GMT
- Authors: Ömer Faruk Tuna, Fehmi Emre Kadan, Leyli Karaçay
- Abstract summary: In distributed multiple-input multiple-output (D-MIMO) networks, power control is crucial to optimize the spectral efficiencies of users.
Deep neural network based artificial intelligence (AI) solutions are proposed to decrease the complexity.
In this work, we show that threats against the target AI model, which may originate from malicious users or radio units, can substantially degrade network performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In distributed multiple-input multiple-output (D-MIMO) networks, power
control is crucial to optimize the spectral efficiencies of users, and max-min
fairness (MMF) power control is a commonly used strategy as it provides a
uniform quality of service to all users. The optimal solution of MMF power
control requires high complexity operations and hence deep neural network based
artificial intelligence (AI) solutions are proposed to decrease the complexity.
Although quite accurate models can be achieved by using AI, these models have
some intrinsic vulnerabilities against adversarial attacks where carefully
crafted perturbations are applied to the input of the AI model. In this work,
we show that threats against the target AI model, which may originate from
malicious users or radio units, can substantially degrade network performance
through a successful adversarial sample, even in the most constrained
circumstances. We also demonstrate that the risk associated with
these kinds of adversarial attacks is higher than the conventional attack
threats. Detailed simulations reveal the effectiveness of adversarial attacks
and the necessity of smart defense techniques.
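To illustrate the kind of attack the abstract describes, the sketch below applies an FGSM-style perturbation to the input of a toy stand-in for the power-allocation model. Everything here is hypothetical: the linear-plus-softmax "model", the feature dimensions, and the use of the minimum allocated power as a proxy for the max-min fairness objective are illustrative assumptions, not the paper's actual system.

```python
import numpy as np

# Hypothetical stand-in for an AI power-allocation model: one linear
# layer mapping channel-gain features to per-user power shares.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # 4 users, 8-dim channel features (assumed)

def allocate_power(x):
    """Predicted per-user power shares (softmax keeps them on the simplex)."""
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

def min_power(x):
    """Attack objective: the weakest user's share, a crude proxy for the
    max-min fairness metric the network tries to protect."""
    return allocate_power(x).min()

def fgsm_attack(x, eps=0.05, h=1e-5):
    """FGSM-style step: move against the sign of the objective's gradient,
    here estimated by central finite differences for self-containment."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = h
        grad[i] = (min_power(x + d) - min_power(x - d)) / (2 * h)
    return x - eps * np.sign(grad)  # push the weakest user's share down

x = rng.normal(size=8)        # clean channel measurement (illustrative)
x_adv = fgsm_attack(x)        # perturbed input, each entry moved by <= eps
print(min_power(x), min_power(x_adv))
```

The perturbation is bounded elementwise by `eps`, which mirrors the paper's point that even tightly constrained input manipulations can degrade the fairness objective; a real attack would use the model's true gradients rather than finite differences.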
Related papers
- L-AutoDA: Leveraging Large Language Models for Automated Decision-based Adversarial Attacks [16.457528502745415]
This work introduces L-AutoDA, a novel approach leveraging the generative capabilities of Large Language Models (LLMs) to automate the design of adversarial attacks.
By iteratively interacting with LLMs in an evolutionary framework, L-AutoDA automatically designs competitive attack algorithms efficiently without much human effort.
We demonstrate the efficacy of L-AutoDA on CIFAR-10 dataset, showing significant improvements over baseline methods in both success rate and computational efficiency.
arXiv Detail & Related papers (2024-01-27T07:57:20Z) - Defense against ML-based Power Side-channel Attacks on DNN Accelerators with Adversarial Attacks [21.611341074006162]
We present AIAShield, a novel defense methodology to safeguard FPGA-based AI accelerators.
We leverage the prominent adversarial attack technique from the machine learning community to craft delicate noise.
AIAShield outperforms existing solutions with excellent transferability.
arXiv Detail & Related papers (2023-12-07T04:38:01Z) - A Unified Hardware-based Threat Detector for AI Accelerators [12.96840649714218]
We design UniGuard, a novel unified and non-intrusive detection methodology to safeguard FPGA-based AI accelerators.
We employ a Time-to-Digital Converter to capture power fluctuations and train a supervised machine learning model to identify various types of threats.
arXiv Detail & Related papers (2023-11-28T10:55:02Z) - Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual
Antenna Arrays [55.736718475856726]
Unmanned aerial vehicle (UAV) networks are a promising technology for assisting the Internet-of-Things (IoT).
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between the IoTs and access points.
We introduce collaborative beamforming into IoTs and UAVs simultaneously to achieve energy and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z) - Artificial Intelligence Empowered Multiple Access for Ultra Reliable and
Low Latency THz Wireless Networks [76.89730672544216]
Terahertz (THz) wireless networks are expected to catalyze the beyond fifth generation (B5G) era.
To satisfy the ultra-reliability and low-latency demands of several B5G applications, novel mobility management approaches are required.
This article presents a holistic MAC layer approach that enables intelligent user association and resource allocation, as well as flexible and adaptive mobility management.
arXiv Detail & Related papers (2022-08-17T03:00:24Z) - A Multi-objective Memetic Algorithm for Auto Adversarial Attack
Optimization Design [1.9100854225243937]
Well-designed adversarial defense strategies can improve the robustness of deep learning models against adversarial examples.
Given a defended model, efficient adversarial attacks with lower computational burden that further reduce robust accuracy still need to be explored.
We propose a multi-objective memetic algorithm for auto adversarial attack optimization design, which realizes an automated search for near-optimal adversarial attacks against defended models.
arXiv Detail & Related papers (2022-08-15T03:03:05Z) - Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT
Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement
Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z) - Improving Robustness of Reinforcement Learning for Power System Control
with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - A Practical Adversarial Attack on Contingency Detection of Smart Energy
Systems [0.0]
We propose an innovative adversarial attack model that can practically compromise the dynamic controls of energy systems.
We also optimize the deployment of the proposed adversarial attack model by employing deep reinforcement learning (RL) techniques.
arXiv Detail & Related papers (2021-09-13T23:11:56Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Covert Model Poisoning Against Federated Learning: Algorithm Design and
Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameters transmissions.
In this paper, we propose effective MP algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.