Adversarial Classification of the Attacks on Smart Grids Using Game
Theory and Deep Learning
- URL: http://arxiv.org/abs/2106.03209v1
- Date: Sun, 6 Jun 2021 18:43:28 GMT
- Title: Adversarial Classification of the Attacks on Smart Grids Using Game
Theory and Deep Learning
- Authors: Kian Hamedani, Lingjia Liu, Jithin Jagannath, Yang (Cindy) Yi
- Abstract summary: This paper proposes a game-theoretic approach to evaluate the variations caused by an attacker on the power measurements.
A zero-sum game is used to model the interactions between the attacker and defender.
- Score: 27.69899235394942
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Smart grids are vulnerable to cyber-attacks. This paper proposes a
game-theoretic approach to evaluate the variations caused by an attacker on the
power measurements. Adversaries can gain financial benefits through the
manipulation of the meters of smart grids. On the other hand, there is a
defender that tries to maintain the accuracy of the meters. A zero-sum game is
used to model the interactions between the attacker and defender. In this
paper, two different defenders are used and the effectiveness of each defender
in different scenarios is evaluated. Multi-layer perceptrons (MLPs) and
traditional state estimators are the two defenders that are studied in this
paper. The utility of the defender is also investigated in adversary-aware and
adversary-unaware situations. Our simulations suggest that the utility which is
gained by the adversary drops significantly when the MLP is used as the
defender. It will be shown that the utility of the defender is variant in
different scenarios, based on the defender that is being used. In the end, we
will show that this zero-sum game does not yield a pure strategy, and the mixed
strategy of the game is calculated.
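As a minimal illustration of the last point, a 2x2 zero-sum game with no saddle point admits only a mixed-strategy equilibrium, which has a closed form. The payoff matrix below is a hypothetical matching-pennies-style example, not the paper's actual attacker/defender utilities:

```python
def solve_2x2_zero_sum(A):
    """Closed-form mixed strategy for a 2x2 zero-sum game with no saddle point.

    A[i][j] is the row player's (attacker's) payoff when the row player
    picks row i and the column player (defender) picks column j.
    """
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # probability the row player picks row 0
    q = (d - b) / denom          # probability the column player picks column 0
    v = (a * d - b * c) / denom  # value of the game
    return p, q, v

# Matching-pennies-style game: no pure-strategy equilibrium exists.
p, q, v = solve_2x2_zero_sum([[1, -1], [-1, 1]])
print(p, q, v)  # 0.5 0.5 0.0
```

Larger games require solving a linear program rather than this closed form, but the structure is the same: each player randomizes so the opponent is indifferent among its options.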
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z) - Counter-Samples: A Stateless Strategy to Neutralize Black Box Adversarial Attacks [2.9815109163161204]
Our paper presents a novel defence against black box attacks, where attackers use the victim model as an oracle to craft their adversarial examples.
Unlike traditional preprocessing defences that rely on sanitizing input samples, our strategy counters the attack process itself.
We demonstrate that our approach is remarkably effective against state-of-the-art black box attacks and outperforms existing defences for both the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2024-03-14T10:59:54Z) - The Best Defense is a Good Offense: Adversarial Augmentation against
Adversarial Attacks [91.56314751983133]
$A5$ is a framework to craft a defensive perturbation that guarantees any attack on the input at hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label.
We also show how to apply $A5$ to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z) - Adversarial Machine Learning and Defense Game for NextG Signal
Classification with Deep Learning [1.1726528038065764]
NextG systems can employ deep neural networks (DNNs) for various tasks such as user equipment identification, physical layer authentication, and detection of incumbent users.
This paper presents a game-theoretic framework to study the interactions of attack and defense for deep learning-based NextG signal classification.
arXiv Detail & Related papers (2022-12-22T15:13:03Z) - The Art of Manipulation: Threat of Multi-Step Manipulative Attacks in
Security Games [8.87104231451079]
This paper studies the problem of multi-step manipulative attacks in Stackelberg security games.
A clever attacker attempts to orchestrate its attacks over multiple time steps to mislead the defender's learning of the attacker's behavior.
This attack manipulation eventually influences the defender's patrol strategy towards the attacker's benefit.
arXiv Detail & Related papers (2022-02-27T18:58:15Z) - Adversarial Online Learning with Variable Plays in the Pursuit-Evasion
Game: Theoretical Foundations and Application in Connected and Automated
Vehicle Cybersecurity [5.9774834479750805]
We extend the adversarial/non-stochastic multi-play multi-armed bandit (MPMAB) to the case where the number of arms to play is variable.
The work is motivated by the fact that the resources allocated to scan different critical locations in an interconnected transportation system change dynamically over time and with the environment.
arXiv Detail & Related papers (2021-10-26T23:09:42Z) - Game Theory for Adversarial Attacks and Defenses [0.0]
Adversarial attacks generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset.
Adversarial defense techniques have been developed to improve the security and robustness of models and prevent them from being attacked.
arXiv Detail & Related papers (2021-10-08T07:38:33Z) - Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a universal adversarial perturbation (UAP) does not attack all classes equally.
We improve state-of-the-art universal adversarial training (UAT) by utilizing class-wise UAPs during adversarial training.
arXiv Detail & Related papers (2021-04-07T09:05:49Z) - Advocating for Multiple Defense Strategies against Adversarial Examples [66.90877224665168]
It has been empirically observed that defense mechanisms designed to protect neural networks against $\ell_\infty$ adversarial examples offer poor performance.
In this paper we conduct a geometrical analysis that validates this observation.
Then, we provide a number of empirical insights to illustrate the effect of this phenomenon in practice.
arXiv Detail & Related papers (2020-12-04T14:42:46Z) - Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z) - Harnessing adversarial examples with a surprisingly simple defense [47.64219291655723]
I introduce a very simple method to defend against adversarial examples.
The basic idea is to raise the slope of the ReLU function at test time.
Experiments over MNIST and CIFAR-10 datasets demonstrate the effectiveness of the proposed defense.
arXiv Detail & Related papers (2020-04-26T03:09:42Z)
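A toy sketch of that last idea, for a single scalar activation (the paper applies it across a network's activations; the slope value here is an arbitrary assumption):

```python
def relu(x, slope=1.0):
    """ReLU with an adjustable positive-side slope.

    slope=1.0 is the standard ReLU used at training time; the defense
    raises the slope only at test time, so the amplified activations no
    longer match the gradients an attacker estimated on the slope-1 network.
    """
    return max(0.0, slope * x)

print(relu(-2.0))            # 0.0 (negative inputs are still clipped)
print(relu(0.5))             # 0.5 (training-time behavior)
print(relu(0.5, slope=2.0))  # 1.0 (test-time defended behavior)
```

The change is deliberately cheap: it requires no retraining, only swapping the activation's positive-side slope at inference.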
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.