Towards Adversarial-Resilient Deep Neural Networks for False Data
Injection Attack Detection in Power Grids
- URL: http://arxiv.org/abs/2102.09057v2
- Date: Wed, 10 May 2023 21:39:53 GMT
- Title: Towards Adversarial-Resilient Deep Neural Networks for False Data
Injection Attack Detection in Power Grids
- Authors: Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun, Kevin Tomsovic,
Hairong Qi
- Abstract summary: False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), to detect such attacks.
- Score: 7.351477761427584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: False data injection attacks (FDIAs) pose a significant security threat to
power system state estimation. To detect such attacks, recent studies have
proposed machine learning (ML) techniques, particularly deep neural networks
(DNNs). However, most of these methods fail to account for the risk posed by
adversarial measurements, which can compromise the reliability of DNNs in
various ML applications. In this paper, we present a DNN-based FDIA detection
approach that is resilient to adversarial attacks. We first analyze several
adversarial defense mechanisms used in computer vision and show their inherent
limitations in FDIA detection. We then propose an adversarial-resilient DNN
detection framework for FDIA that incorporates random input padding in both the
training and inference phases. Our simulations, based on an IEEE standard power
system, demonstrate that this framework significantly reduces the effectiveness
of adversarial attacks while having a negligible impact on the DNNs' detection
performance.
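To make the random-padding idea concrete, here is a minimal sketch of how such a detector could be wired up in PyTorch. The class name PaddedFDIADetector, the layer sizes, the uniform padding values, and the random insertion offset are illustrative assumptions, not the framework described in the paper.
```python
# Minimal sketch of random input padding for a DNN-based FDIA detector.
# All names, dimensions, and the padding scheme are illustrative assumptions.
import torch
import torch.nn as nn


class PaddedFDIADetector(nn.Module):
    def __init__(self, n_measurements: int, pad_len: int, hidden: int = 128):
        super().__init__()
        self.n_measurements = n_measurements
        self.pad_len = pad_len
        self.net = nn.Sequential(
            nn.Linear(n_measurements + pad_len, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # benign vs. attacked measurement vector
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, n_measurements) measurement vectors from state estimation.
        pad = torch.rand(z.size(0), self.pad_len, device=z.device)
        # Insert the random padding block at a random offset. Using the same
        # randomized forward pass in both training and inference means the
        # gradients an attacker observes change from query to query, which
        # degrades gradient-based adversarial perturbations.
        offset = int(torch.randint(0, self.n_measurements + 1, (1,)))
        x = torch.cat([z[:, :offset], pad, z[:, offset:]], dim=1)
        return self.net(x)


# Illustrative usage with made-up dimensions:
detector = PaddedFDIADetector(n_measurements=100, pad_len=16)
logits = detector(torch.randn(4, 100))  # (4, 2) class logits
```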
Related papers
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for the GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks.
Current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
- DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning [2.485182034310304]
We propose a hardware-accelerated defense against adversarial machine learning attacks.
DNNSHIELD adapts the strength of the response to the confidence of the adversarial input.
We show an adversarial detection rate of 86% when applied to VGG16 and 88% when applied to ResNet50.
arXiv Detail & Related papers (2022-07-31T19:29:44Z)
- An Intrusion Detection System based on Deep Belief Networks [1.535077825808595]
We develop a Deep Belief Network (DBN) and evaluate its performance in detecting cyber-attacks within a network of connected devices.
Our proposed DBN approach shows competitive and promising results, with significant improvement on the detection of attacks underrepresented in the training dataset.
arXiv Detail & Related papers (2022-07-05T15:38:24Z)
- TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks [0.0]
We present TESDA, a low-overhead, flexible, and statistically grounded method for online detection of attacks.
Unlike most prior work, we require neither dedicated hardware to run in real-time, nor the presence of a Trojan trigger to detect discrepancies in behavior.
We empirically establish our method's usefulness and practicality across multiple architectures, datasets and diverse attacks.
arXiv Detail & Related papers (2021-10-16T02:10:36Z)
- Deep-RBF Networks for Anomaly Detection in Automotive Cyber-Physical Systems [1.8692254863855962]
We show how the deep-RBF network can be used for detecting anomalies in CPS regression tasks such as continuous steering predictions.
Our results show that the deep-RBF networks can robustly detect these attacks in a short time without additional resource requirements.
arXiv Detail & Related papers (2021-03-25T23:10:32Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions up to 86% of the time.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
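As a side note on the gradient-norm idea behind the GraN entry directly above, the following is a hedged sketch of one plausible per-sample gradient-norm score (the norm of the input gradient of the loss under the predicted label). GraN's actual formulation differs in detail, so this illustrates the general principle rather than that paper's method; the function name and heuristic are assumptions.
```python
# Hedged sketch of a gradient-norm score in the spirit of the GraN entry above.
# One plausible reading of the idea; the actual GraN method differs in detail.
import torch
import torch.nn.functional as F


def gradient_norm_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample gradient-norm score; larger values are a heuristic signal
    that an input may be adversarial or misclassified."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1)
    # Loss of the model's own prediction: its input gradient tends to be
    # larger near decision boundaries and for perturbed inputs.
    loss = F.cross_entropy(logits, pred, reduction="sum")
    (grad,) = torch.autograd.grad(loss, x)
    return grad.flatten(1).norm(dim=1)
```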