A Zero Trust Framework for Realization and Defense Against Generative AI
Attacks in Power Grid
- URL: http://arxiv.org/abs/2403.06388v1
- Date: Mon, 11 Mar 2024 02:47:21 GMT
- Title: A Zero Trust Framework for Realization and Defense Against Generative AI
Attacks in Power Grid
- Authors: Md. Shirajum Munir, Sravanthi Proddatoori, Manjushree Muralidhara,
Walid Saad, Zhu Han, Sachin Shetty
- Abstract summary: This paper proposes a novel zero trust framework for a power grid supply chain (PGSC).
It facilitates early detection of potential GenAI-driven attack vectors, assessment of tail risk-based stability measures, and mitigation of such threats.
Experimental results show that the proposed zero trust framework achieves an accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a 95% stable PGSC, and 99% confidence in defense against GenAI-driven attacks.
- Score: 62.91192307098067
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Understanding the potential of generative AI (GenAI)-based attacks on the
power grid is a fundamental challenge that must be addressed in order to
protect the power grid by realizing and validating the risk of new attack vectors.
In this paper, a novel zero trust framework for a power grid supply chain
(PGSC) is proposed. This framework facilitates early detection of potential
GenAI-driven attack vectors (e.g., replay and protocol-type attacks),
assessment of tail risk-based stability measures, and mitigation of such
threats. First, a new zero trust system model of PGSC is designed and
formulated as a zero-trust problem that seeks to guarantee a stable PGSC by
realizing and defending against GenAI-driven cyber attacks. Second, a
domain-specific generative adversarial network (GAN)-based attack generation
mechanism is developed to create a new vulnerability cyberspace for a deeper
understanding of the threat. Third, tail-based risk realization metrics are
developed and implemented for quantifying the extreme risk of a potential
attack while leveraging a trust measurement approach for continuous validation.
Fourth, an ensemble learning-based bootstrap aggregation scheme is devised to
detect attacks that generate synthetic identities with convincing user and
distributed energy resource (DER) device profiles. Experimental results
show the efficacy of the proposed zero trust framework that achieves an
accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a
95% stable PGSC, and 99% confidence in defense against GenAI-driven attacks.
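The abstract sketches a four-step pipeline; the snippets below outline how three of those steps might look in code. None of the architectures, hyperparameters, or interfaces are taken from the paper; they are hedged assumptions for intuition only.

For the GAN-based attack generation step, a minimal sketch assuming attack vectors are fixed-length feature vectors of PGSC control messages; the dimensions, architecture, and training data are illustrative placeholders, not the paper's domain-specific design:

```python
import torch
import torch.nn as nn

LATENT, FEAT, BATCH = 32, 16, 128  # illustrative sizes, not from the paper

G = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, FEAT))
D = nn.Sequential(nn.Linear(FEAT, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real = torch.randn(BATCH, FEAT)  # stand-in for observed PGSC traffic features

for step in range(200):
    # Discriminator: separate real traffic from generated attack vectors.
    fake = G(torch.randn(BATCH, LATENT)).detach()
    d_loss = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: produce vectors the discriminator accepts as real.
    g_loss = bce(D(G(torch.randn(BATCH, LATENT))), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

attack_vectors = G(torch.randn(8, LATENT))  # candidate GenAI-driven attack vectors
```

The "tail risk-based stability measures" and the reported 9.61% risk at a 95% stability level suggest a conditional value-at-risk (CVaR)-style metric. A minimal sketch under that assumption; the deviation sample and the 95% level are illustrative, not the paper's specification:

```python
import numpy as np

def tail_risk(deviations: np.ndarray, alpha: float = 0.95) -> float:
    """CVaR-style tail risk: mean deviation in the worst (1 - alpha) tail.

    `deviations` is assumed to be a sample of grid-stability deviations
    induced by candidate attack vectors (illustrative placeholder).
    """
    var = np.quantile(deviations, alpha)   # value-at-risk threshold
    tail = deviations[deviations >= var]   # worst-case tail events
    return float(tail.mean()) if tail.size else float(var)

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=-3.0, sigma=0.8, size=10_000)
print(f"95% tail risk: {tail_risk(sample, 0.95):.4f}")
```

For the fourth step, a bootstrap-aggregation (bagging) ensemble separating genuine from GAN-generated user/DER identities; the feature layout and labels are synthetic placeholders standing in for real PGSC telemetry:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (1000, 16)),   # genuine profiles
               rng.normal(0.5, 1.2, (1000, 16))])  # synthetic identities
y = np.repeat([0, 1], 1000)                        # 1 = synthetic identity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
detector = BaggingClassifier(n_estimators=50, random_state=0)  # bagged trees
detector.fit(X_tr, y_tr)
print(f"held-out detection accuracy: {detector.score(X_te, y_te):.3f}")
```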
Related papers
- GAN-GRID: A Novel Generative Attack on Smart Grid Stability Prediction [53.2306792009435]
We propose GAN-GRID, a novel adversarial attack targeting the stability prediction system of a smart grid under real-world constraints.
Our findings reveal that an adversary armed solely with the stability model's output, devoid of data or model knowledge, can craft data classified as stable with an Attack Success Rate (ASR) of 0.99 (an output-only attack of this kind is sketched after this list).
arXiv Detail & Related papers (2024-05-20T14:43:46Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Securing Federated Learning with Control-Flow Attestation: A Novel Framework for Enhanced Integrity and Resilience against Adversarial Attacks [2.28438857884398]
Federated Learning (FL) as a distributed machine learning paradigm has introduced new cybersecurity challenges.
This study proposes an innovative security framework inspired by Control-Flow Attestation (CFA) mechanisms, traditionally used in cybersecurity.
We authenticate and verify the integrity of model updates across the network, effectively mitigating risks associated with model poisoning and adversarial interference.
arXiv Detail & Related papers (2024-03-15T04:03:34Z)
- Trustworthy Artificial Intelligence Framework for Proactive Detection and Risk Explanation of Cyber Attacks in Smart Grid [11.122588110362706]
The rapid growth of distributed energy resources (DERs) poses significant cybersecurity and trust challenges to the grid controller.
To enable a trustworthy smart grid controller, this work investigates a trustworthy artificial intelligence (AI) mechanism for proactive identification and explanation of the cyber risk caused by the control/status message of DERs.
arXiv Detail & Related papers (2023-06-12T02:28:17Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
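For intuition on the output-only threat model described in the GAN-GRID entry above, here is a minimal sketch of a query-based attack that perturbs a seed sample until the stability model scores it as stable. The hill-climbing strategy, the `score` interface, and the 0.5 decision threshold are illustrative assumptions; GAN-GRID itself trains a generative model against the predictor's output rather than searching per sample.

```python
import numpy as np

def craft_stable_input(score, x0, budget=2000, step=0.05, rng=None):
    """Query-only hill climbing on the model's stability score.

    `score(x)` returns the model's confidence that x is 'stable' and is
    the adversary's only access, matching an output-only threat model.
    All names and the search strategy are illustrative placeholders.
    """
    rng = rng or np.random.default_rng(0)
    x, best = x0.copy(), score(x0)
    for _ in range(budget):
        cand = x + rng.normal(0.0, step, size=x.shape)
        s = score(cand)
        if s > best:          # keep moves that look more 'stable'
            x, best = cand, s
        if best >= 0.5:       # classified stable; attack succeeds
            return x
    return None               # attack failed within the query budget
```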