GAN-GRID: A Novel Generative Attack on Smart Grid Stability Prediction
- URL: http://arxiv.org/abs/2405.12076v1
- Date: Mon, 20 May 2024 14:43:46 GMT
- Title: GAN-GRID: A Novel Generative Attack on Smart Grid Stability Prediction
- Authors: Emad Efatinasab, Alessandro Brighente, Mirco Rampazzo, Nahal Azadi, Mauro Conti
- Abstract summary: We propose GAN-GRID, a novel adversarial attack targeting the stability prediction system of a smart grid, tailored to real-world constraints.
Our findings reveal that an adversary armed solely with the stability model's output, devoid of data or model knowledge, can craft data classified as stable with an Attack Success Rate (ASR) of 0.99.
- Score: 53.2306792009435
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The smart grid represents a pivotal innovation in modernizing the electricity sector, offering an intelligent, digitalized energy network capable of optimizing energy delivery from source to consumer. It thus forms the backbone of a nation's energy sector. Because of this central role, the availability of the smart grid is paramount, and it is therefore necessary to have in-depth control over its operations and safety. To this end, researchers have developed multiple solutions to assess the smart grid's stability and guarantee that it operates in a safe state. Artificial intelligence and machine learning algorithms have proven to be effective means of accurately predicting the smart grid's stability. Despite the existence of known adversarial attacks and potential countermeasures, there is currently no standardized measure to protect smart grids against this threat, leaving them open to new adversarial attacks. In this paper, we propose GAN-GRID, a novel adversarial attack targeting the stability prediction system of a smart grid, tailored to real-world constraints. Our findings reveal that an adversary armed solely with the stability model's output, and with no knowledge of the data or the model, can craft data classified as stable with an Attack Success Rate (ASR) of 0.99. Moreover, by manipulating authentic data and sensor values, the attacker can amplify grid issues while potentially remaining undetected due to the compromised stability prediction system. These results underscore the imperative of fortifying smart grid security mechanisms against adversarial manipulation to uphold system stability and reliability.
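Since the adversary in this setting can observe only the deployed model's output label, a natural way to realize such a generative attack is to train a generator against a locally fitted surrogate of the stability predictor. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation: query_target is a stand-in for the real black-box predictor, and the 12-feature input size, network sizes, and hyperparameters are assumptions for illustration only.

```python
# Hypothetical label-only generative attack sketch (not the GAN-GRID code).
# A generator learns to emit grid measurements that a black-box stability
# classifier labels "stable"; gradients flow through a locally trained
# surrogate because the real predictor exposes only its output label.
import torch
import torch.nn as nn

N_FEATURES = 12   # assumption: 12 features, as in the public grid-stability dataset
LATENT_DIM = 32

def query_target(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for the deployed predictor: returns 1 where it would say 'stable'."""
    return (x.mean(dim=1, keepdim=True) > 0).float()  # placeholder rule only

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES), nn.Tanh(),
)
surrogate = nn.Sequential(  # local approximation of the unknown target model
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
s_opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    z = torch.randn(64, LATENT_DIM)

    # 1) Fit the surrogate to the black-box labels on the current fakes.
    fake = generator(z).detach()
    s_opt.zero_grad()
    bce(surrogate(fake), query_target(fake)).backward()
    s_opt.step()

    # 2) Push the generator toward samples the surrogate deems "stable".
    g_opt.zero_grad()
    bce(surrogate(generator(z)), torch.ones(64, 1)).backward()
    g_opt.step()
```

Routing gradients through a surrogate is only one way to cope with a label-only oracle; the actual GAN-GRID architecture and training objective may differ.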
Related papers
- Smart Grid Security: A Verified Deep Reinforcement Learning Framework to Counter Cyber-Physical Attacks [2.159496955301211]
Smart grids are vulnerable to strategically crafted cyber-physical attacks.
Malicious attacks can manipulate power demands using high-wattage Internet of Things (IoT) botnet devices.
Grid operators overlook potential scenarios of cyber-physical attacks during their design phase.
We propose a safe Deep Reinforcement Learning (DRL)-based framework for mitigating attacks on smart grids.
arXiv Detail & Related papers (2024-09-24T05:26:20Z) - Cybersecurity for Modern Smart Grid against Emerging Threats [10.342330124012122]
The book focuses on the sources of cybersecurity issues, a taxonomy of threats, and a survey of approaches to overcome or mitigate such threats.
It covers the state-of-the-art research results in recent years, along with remaining open challenges.
arXiv Detail & Related papers (2024-04-06T01:31:33Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness (a generic sketch of adversarial training appears below).
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
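For context, the sketch below illustrates generic online adversarial training in PyTorch: each batch is augmented with perturbed copies crafted against the current model, here via a single FGSM step. It is an illustrative example of the general technique, not FaultGuard's actual training procedure; the function names and the eps value are assumptions.

```python
# Generic adversarial-training step (illustrative only, not FaultGuard's code).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    """One-step FGSM perturbation of x against the model's current loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y):
    """Train on the clean batch plus its adversarially perturbed copy."""
    x_adv = fgsm(model, x, y)              # attack the current model online
    optimizer.zero_grad()
    loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()
```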
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid [62.91192307098067]
This paper proposes a novel zero trust framework for a power grid supply chain (PGSC).
It facilitates early detection of potential GenAI-driven attack vectors, assessment of tail risk-based stability measures, and mitigation of such threats.
Experimental results show that the proposed zero trust framework achieves an accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a 95% stable PGSC, and a 99% confidence in defense against GenAI-driven attacks.
arXiv Detail & Related papers (2024-03-11T02:47:21Z) - Trustworthy Artificial Intelligence Framework for Proactive Detection and Risk Explanation of Cyber Attacks in Smart Grid [11.122588110362706]
The rapid growth of distributed energy resources (DERs) poses significant cybersecurity and trust challenges to the grid controller.
To enable a trustworthy smart grid controller, this work investigates a trustworthy artificial intelligence (AI) mechanism for proactive identification and explanation of the cyber risk caused by the control/status message of DERs.
arXiv Detail & Related papers (2023-06-12T02:28:17Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - A Taxonomy of Cyber Defence Strategies Against False Data Attacks in Smart Grid [3.88835600711547]
The modern electric power grid, known as the Smart Grid, has rapidly transformed the isolated and centrally controlled power system into a fast and massively connected cyber-physical system.
The synergy of a vast number of cyber-physical entities has allowed the Smart Grid to be much more effective and sustainable in meeting the growing global energy challenges.
However, it has also brought with it a large number of vulnerabilities resulting in breaches of data integrity, confidentiality and availability.
arXiv Detail & Related papers (2021-03-30T05:36:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.