The Adversarial Security Mitigations of mmWave Beamforming Prediction
Models using Defensive Distillation and Adversarial Retraining
- URL: http://arxiv.org/abs/2202.08185v1
- Date: Wed, 16 Feb 2022 16:47:17 GMT
- Title: The Adversarial Security Mitigations of mmWave Beamforming Prediction
Models using Defensive Distillation and Adversarial Retraining
- Authors: Murat Kuzlu, Ferhat Ozgur Catak, Umit Cali, Evren Catak, Ozgur Guler
- Abstract summary: This paper presents the security vulnerabilities in deep learning for beamforming prediction using deep neural networks (DNNs) in 6G wireless networks.
The proposed scheme can be used in situations where the data are corrupted by adversarial examples in the training data.
- Score: 0.41998444721319217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The design of a security scheme for beamforming prediction is critical for
next-generation wireless networks (5G, 6G, and beyond). However, there is no
consensus on how to protect deep learning-based beamforming prediction in these
networks. This paper examines the security vulnerabilities of deep neural network
(DNN)-based beamforming prediction in 6G wireless networks, treating the prediction
task as a multi-output regression problem. It shows that the initial DNN model is
vulnerable to adversarial attacks, such as the Fast Gradient Sign Method (FGSM),
Basic Iterative Method (BIM), Projected Gradient Descent (PGD), and Momentum
Iterative Method (MIM), because it is sensitive to the perturbations introduced by
adversarial samples of the training data. The study also offers two mitigation
methods, adversarial training and defensive distillation, against adversarial
attacks on artificial intelligence (AI)-based models used in millimeter-wave
(mmWave) beamforming prediction. Furthermore, the proposed scheme can be used in
situations where the data are corrupted by adversarial examples in the training
data. Experimental results show that the proposed methods effectively defend the
DNN models against adversarial attacks in next-generation wireless networks.
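To make the attack surface concrete, the sketch below shows how an FGSM adversarial example could be crafted against a multi-output regression DNN of the kind used for mmWave beamforming prediction. This is an illustration only: the PyTorch framework, the fgsm_attack helper, the MSE loss choice, and the epsilon value are assumptions, not details taken from the paper; BIM, PGD, and MIM iterate variants of the same signed-gradient step.

```python
# Minimal FGSM sketch (not the authors' code) against a multi-output regression DNN.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.01):
    """Return x perturbed one signed-gradient step up the regression loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)  # multi-output MSE loss
    loss.backward()
    # Single FGSM step; BIM/PGD/MIM would repeat this with clipping/projection.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```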
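The first mitigation named in the abstract, adversarial (re)training, can be sketched as a loop that augments each batch with adversarial examples generated on the fly. Again this is a hedged illustration reusing the fgsm_attack helper above; the adversarial_retrain function, the 50/50 clean/adversarial mix, and the optimizer settings are assumptions rather than values reported in the paper.

```python
# Minimal adversarial-retraining loop for a regression DNN (illustrative only).
import torch

def adversarial_retrain(model, loader, epochs=10, epsilon=0.01, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = torch.nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm_attack(model, x, y, epsilon)  # craft examples on the fly
            opt.zero_grad()                            # clear grads left by the attack
            # Train on a mix of clean and adversarial batches.
            loss = 0.5 * mse(model(x), y) + 0.5 * mse(model(x_adv), y)
            loss.backward()
            opt.step()
    return model
```

Crafting the adversarial batch from the current model at every step, rather than from a fixed pre-generated set, is what keeps the defense aligned with the model as its decision surface changes during retraining.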
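The second mitigation, defensive distillation, can be sketched as training a student network on a trained teacher's outputs instead of the raw labels. The make_student factory and the regression-style soft targets below are illustrative assumptions, not the authors' exact procedure; in the classification setting a softmax temperature is used, whereas for a regression model the teacher's predictions themselves act as the soft labels.

```python
# Minimal defensive-distillation sketch for a regression model (illustrative only).
import torch

def distill(make_student, teacher, loader, epochs=10, lr=1e-3):
    student = make_student()                 # fresh network, same architecture as teacher
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    mse = torch.nn.MSELoss()
    teacher.eval()
    for _ in range(epochs):
        for x, _ in loader:                  # original labels are ignored
            with torch.no_grad():
                soft_y = teacher(x)          # teacher predictions serve as soft targets
            opt.zero_grad()
            loss = mse(student(x), soft_y)
            loss.backward()
            opt.step()
    return student
```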
Related papers
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Threat Model-Agnostic Adversarial Defense using Diffusion Models [14.603209216642034]
Deep Neural Networks (DNNs) are highly sensitive to imperceptible malicious perturbations, known as adversarial attacks.
arXiv Detail & Related papers (2022-07-17T06:50:48Z)
- Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization [31.516568778193157]
Adversarial training (AT) is often adopted to improve the robustness of deep neural networks (DNNs).
In this work, we propose an approach based on the Jacobian norm and Selective Input Gradient Regularization (J-SIGR).
Experiments demonstrate that the proposed J-SIGR confers improved robustness against transferred adversarial attacks, and we also show that the predictions from the neural network are easy to interpret.
arXiv Detail & Related papers (2022-07-09T01:06:41Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against such attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to mitigate them.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which enhances the accuracy of DNN-based automatic modulation classification (AMC) under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Improving Transformation-based Defenses against Adversarial Examples with First-order Perturbations [16.346349209014182]
Studies show that neural networks are susceptible to adversarial attacks.
This exposes a potential threat to neural network-based intelligent systems.
We propose a method for counteracting adversarial perturbations to improve adversarial robustness.
arXiv Detail & Related papers (2021-03-08T06:27:24Z)
- Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis [4.148327474831389]
We propose a geometric gradient analysis (GGA) to improve the identification of untrustworthy predictions without retraining of a given model.
We demonstrate that the proposed method outperforms prior approaches in detecting OOD data and adversarial attacks.
arXiv Detail & Related papers (2021-02-24T10:49:02Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
The attack is formulated as a binary integer programming (BIP) problem, which we equivalently reformulate as a continuous optimization problem using the latest techniques in integer programming.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), for FDIA detection.
arXiv Detail & Related papers (2021-02-17T22:26:34Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)