Towards Robust Stability Prediction in Smart Grids: GAN-based Approach under Data Constraints and Adversarial Challenges
- URL: http://arxiv.org/abs/2501.16490v2
- Date: Tue, 24 Jun 2025 11:10:26 GMT
- Title: Towards Robust Stability Prediction in Smart Grids: GAN-based Approach under Data Constraints and Adversarial Challenges
- Authors: Emad Efatinasab, Alessandro Brighente, Denis Donadel, Mauro Conti, Mirco Rampazzo
- Abstract summary: This paper introduces a novel framework for detecting instability in smart grids using only stable data. It achieves up to 98.1% accuracy in predicting grid stability and 98.9% in detecting adversarial attacks. Implemented on a single-board computer, it enables real-time decision-making with an average response time of under 7 ms.
- Score: 53.2306792009435
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Smart grids are crucial for meeting rising energy demands driven by global population growth and urbanization. By integrating renewable energy sources, they enhance efficiency, reliability, and sustainability. However, ensuring their availability and security requires advanced operational control and safety measures. Although artificial intelligence and machine learning can help assess grid stability, challenges such as data scarcity and cybersecurity threats, particularly adversarial attacks, remain. Data scarcity is a major issue, as obtaining real-world instances of grid instability requires significant expertise, resources, and time. Yet, these instances are critical for testing new research advancements and security mitigations. This paper introduces a novel framework for detecting instability in smart grids using only stable data. It employs a Generative Adversarial Network (GAN) where the generator is designed not to produce near-realistic data but instead to generate Out-Of-Distribution (OOD) samples with respect to the stable class. These OOD samples represent unstable behavior, anomalies, or disturbances that deviate from the stable data distribution. By training exclusively on stable data and exposing the discriminator to OOD samples, our framework learns a robust decision boundary to distinguish stable conditions from any unstable behavior, without requiring unstable data during training. Furthermore, we incorporate an adversarial training layer to enhance resilience against attacks. Evaluated on a real-world dataset, our solution achieves up to 98.1% accuracy in predicting grid stability and 98.9% in detecting adversarial attacks. Implemented on a single-board computer, it enables real-time decision-making with an average response time of under 7 ms.
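The training scheme described in the abstract is concrete enough to sketch. Below is a minimal PyTorch illustration of the core idea, assuming a small tabular feature vector and a hinge-style off-manifold loss for the generator; both choices are assumptions for illustration, since the abstract does not give the exact losses, and the adversarial training layer is omitted.

```python
# Sketch (not the authors' code): a discriminator trained on stable samples
# (label 1) versus generator outputs (label 0). The generator is NOT rewarded
# for realism; it is pushed off the stable manifold so its outputs act as
# synthetic OOD / unstable examples. The hinge distance loss is assumed.
import torch
import torch.nn as nn

FEATURES, LATENT = 12, 16  # feature count is illustrative

G = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, FEATURES))
D = nn.Sequential(nn.Linear(FEATURES, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(stable_batch):
    b = stable_batch.size(0)
    # Discriminator: stable data = 1, generated OOD samples = 0.
    fake = G(torch.randn(b, LATENT)).detach()
    d_loss = bce(D(stable_batch), torch.ones(b, 1)) + \
             bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: keep samples at least unit distance from the stable mean,
    # i.e. deliberately out-of-distribution rather than realistic.
    gen = G(torch.randn(b, LATENT))
    dist = (gen - stable_batch.mean(0)).norm(dim=1)
    g_loss = torch.relu(1.0 - dist).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# At inference, D's score alone separates stable readings from anything else.
```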
Related papers
- Aurora: Are Android Malware Classifiers Reliable and Stable under Distribution Shift? [51.12297424766236]
AURORA is a framework to evaluate malware classifiers based on their confidence quality and operational resilience. It is complemented by a set of metrics designed to go beyond point-in-time performance. The fragility of SOTA frameworks across datasets of varying drift suggests the need for a return to the whiteboard.
arXiv Detail & Related papers (2025-05-28T20:22:43Z) - Offline Robotic World Model: Learning Robotic Policies without a Physics Simulator [50.191655141020505]
Reinforcement Learning (RL) has demonstrated impressive capabilities in robotic control but remains challenging due to high sample complexity, safety concerns, and the sim-to-real gap. We introduce Offline Robotic World Model (RWM-O), a model-based approach that explicitly estimates uncertainty to improve policy learning without reliance on a physics simulator.
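The summary names the mechanism without detail. A common way to realize "explicitly estimates uncertainty" in offline model-based RL is an ensemble of dynamics models whose disagreement penalizes the imagined reward; the sketch below shows that generic pattern (sizes and penalty form assumed), not RWM-O's actual architecture.

```python
# Generic uncertainty-penalized world-model rollout (illustrative only).
import torch
import torch.nn as nn

STATE, ACTION, K = 8, 2, 5  # illustrative sizes; K ensemble members

ensemble = [nn.Sequential(nn.Linear(STATE + ACTION, 64), nn.ReLU(),
                          nn.Linear(64, STATE)) for _ in range(K)]

def rollout_step(state, action, reward_fn, penalty=1.0):
    x = torch.cat([state, action], dim=-1)
    preds = torch.stack([m(x) for m in ensemble])   # (K, batch, STATE)
    next_state = preds.mean(0)                      # ensemble mean prediction
    uncertainty = preds.std(0).norm(dim=-1)         # disagreement per sample
    # Penalizing reward by disagreement keeps the learned policy inside the
    # region the offline data actually supports.
    return next_state, reward_fn(state, action) - penalty * uncertainty
```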
arXiv Detail & Related papers (2025-04-23T12:58:15Z) - Simulation of Multi-Stage Attack and Defense Mechanisms in Smart Grids [2.0766068042442174]
We introduce a simulation environment that replicates the power grid's infrastructure and communication dynamics. The framework generates diverse, realistic attack data to train machine learning algorithms for detecting and mitigating cyber threats. It also provides a controlled, flexible platform to evaluate emerging security technologies, including advanced decision support systems.
arXiv Detail & Related papers (2024-12-09T07:07:17Z) - Digital Twin for Evaluating Detective Countermeasures in Smart Grid Cybersecurity [0.0]
This study delves into the potential of digital twins, replicating a smart grid's cyber-physical laboratory environment. We introduce a flexible, comprehensive digital twin model equipped for hardware-in-the-loop evaluations.
arXiv Detail & Related papers (2024-12-05T08:41:08Z) - Explainability of Point Cloud Neural Networks Using SMILE: Statistical Model-Agnostic Interpretability with Local Explanations [0.0]
This study explores the implementation of SMILE, a novel explainability method originally designed for deep neural networks, on point cloud-based models.
The approach demonstrates superior performance in terms of fidelity loss, R2 scores, and robustness across various kernel widths, perturbation numbers, and clustering configurations.
The study further identifies dataset biases in the classification of the 'person' category, emphasizing the necessity for more comprehensive datasets in safety-critical applications.
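For context on the quoted metrics: the scaffolding SMILE builds on is a locally weighted surrogate fit around one instance, with fidelity measured as the surrogate's R² on the perturbations. The generic LIME-style sketch below shows only that shared scaffolding (kernel width, perturbation count, and noise scale are illustrative); SMILE's statistical refinements are not reproduced here.

```python
# Generic local-surrogate explanation (LIME-style scaffolding, not SMILE).
import numpy as np

def local_surrogate(f, x, n_perturb=500, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=0.1, size=(n_perturb, x.size))  # perturb input
    y = np.array([f(xi) for xi in X])                        # black-box outputs
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / kernel_width ** 2)
    # Weighted least squares; coefficients act as local attributions.
    Xw = np.hstack([X, np.ones((n_perturb, 1))]) * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    r2 = 1 - np.sum((yw - Xw @ coef) ** 2) / np.sum((yw - yw.mean()) ** 2)
    return coef[:-1], r2   # attributions and surrogate fidelity (R^2)
```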
arXiv Detail & Related papers (2024-10-20T12:13:59Z) - Towards Secure and Private AI: A Framework for Decentralized Inference [14.526663289437584]
Large multimodal foundational models present challenges in scalability, reliability, and potential misuse. Decentralized systems offer a solution by distributing workload and mitigating central points of failure. We address these challenges with a comprehensive framework designed for responsible AI development.
arXiv Detail & Related papers (2024-07-28T05:09:17Z) - GAN-GRID: A Novel Generative Attack on Smart Grid Stability Prediction [53.2306792009435]
We propose GAN-GRID, a novel adversarial attack targeting the stability prediction system of a smart grid, tailored to real-world constraints.
Our findings reveal that an adversary armed solely with the stability model's output, devoid of data or model knowledge, can craft data classified as stable with an Attack Success Rate (ASR) of 0.99.
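The threat model is the notable part: the adversary observes only the model's output. GAN-GRID trains a generative model against that signal; as a minimal stand-in for the same threat model (not the paper's method), even a score-guided random search shows why output-only access can suffice.

```python
# Output-only attack sketch: mutate a candidate sensor vector and keep any
# change that raises the target model's "stable" score. Illustrative only.
import numpy as np

def output_only_attack(query_stable_score, dim=12, steps=2000, sigma=0.05,
                       seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)            # arbitrary starting point
    best = query_stable_score(x)        # single black-box query: P(stable|x)
    for _ in range(steps):
        cand = x + rng.normal(scale=sigma, size=dim)
        score = query_stable_score(cand)
        if score > best:                # greedy hill-climbing on the score
            x, best = cand, score
    return x, best                      # input the model deems stable
```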
arXiv Detail & Related papers (2024-05-20T14:43:46Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art in resilient fault prediction benchmarks, with an accuracy of up to 0.958.
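The abstract names an online adversarial training technique without detail. A standard realization (assumed here; FaultGuard's variant may differ) crafts FGSM perturbations on each incoming batch and trains on clean and perturbed samples together, as below.

```python
# One online adversarial training step (FGSM-based; illustrative, not
# FaultGuard's published procedure).
import torch
import torch.nn as nn

def adversarial_step(model, opt, x, y, eps=0.05):
    loss_fn = nn.CrossEntropyLoss()
    x_req = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(model(x_req), y), x_req)
    x_adv = (x + eps * grad.sign()).detach()        # FGSM perturbation
    opt.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```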
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - Enhanced Urban Region Profiling with Adversarial Self-Supervised Learning for Robust Forecasting and Security [12.8405655328298]
Existing methods often struggle with issues such as noise, data incompleteness, and security vulnerabilities. This paper proposes a novel framework, Enhanced Urban Region Profiling with Adversarial Self-Supervised Learning (EUPAS). EUPAS ensures robust performance across various forecasting tasks such as crime prediction, check-in prediction, and land use classification.
arXiv Detail & Related papers (2024-02-02T06:06:45Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first to be robust against strong adaptive adversaries; it is effective on real-world data with an average overhead of just 24.37 seconds.
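The defining idea is that every client update must pass several tests at once, so an adaptive attacker has to satisfy all metrics simultaneously. A toy filter in that spirit is sketched below; the two metrics and the z-score cutoff are illustrative assumptions, not MESAS's actual test battery.

```python
# Toy multi-metric filter for federated updates (not MESAS itself).
import torch
import torch.nn.functional as F

def filter_updates(updates, z_cut=2.0):
    # Flatten each client's parameter update into one vector.
    flat = [torch.cat([p.flatten() for p in u]) for u in updates]
    norms = torch.stack([f.norm() for f in flat])
    mean_dir = torch.stack(flat).mean(0)
    cosines = torch.stack([F.cosine_similarity(f, mean_dir, dim=0)
                           for f in flat])
    keep = []
    for i in range(len(flat)):
        # A client survives only if it is unremarkable under EVERY metric.
        if all(abs(m[i] - m.mean()) <= z_cut * m.std()
               for m in (norms, cosines)):
            keep.append(i)
    return keep   # indices of clients admitted to aggregation
```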
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Measuring and Mitigating Local Instability in Deep Neural Networks [23.342675028217762]
We study how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process.
For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries.
We propose new data-centric methods that exploit our local stability estimates.
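The instability in question is directly measurable: retrain the identical model on identical data under different seeds and count how often predicted labels disagree. A sketch, assuming a hypothetical `train_fn(X, y, seed)` returning a fitted model with a `predict` method:

```python
# Measure prediction churn across retraining runs (train_fn is hypothetical).
import numpy as np

def prediction_churn(train_fn, X_train, y_train, X_eval, seeds=range(5)):
    preds = np.stack([train_fn(X_train, y_train, seed).predict(X_eval)
                      for seed in seeds])          # (num_seeds, num_eval)
    # A point is unstable if its integer label is not unanimous across seeds.
    return float(np.mean(preds.min(axis=0) != preds.max(axis=0)))
```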
arXiv Detail & Related papers (2023-05-18T00:34:15Z) - Toward Dynamic Stability Assessment of Power Grid Topologies using Graph Neural Networks [0.0]
Renewables introduce new challenges to power grids regarding dynamic stability, due to decentralization, reduced inertia, and volatility in production.
Graph neural networks (GNNs) are a promising method to reduce the computational effort of analyzing the dynamic stability of power grids.
GNNs are surprisingly effective at predicting the highly non-linear targets from topological information only.
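To make the setting concrete, here is a dependency-free graph convolutional network mapping an adjacency matrix and node features to per-node stability scores. The papers use more elaborate GNN architectures, so treat this only as the shape of the approach.

```python
# Minimal GCN for per-node stability scores from topology (illustrative).
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, in_dim=4, hidden=32):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, 1)

    def forward(self, adj, x):
        # Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}.
        a = adj + torch.eye(adj.size(0))
        d = a.sum(1).rsqrt()
        a_hat = d[:, None] * a * d[None, :]
        h = torch.relu(self.w1(a_hat @ x))          # one message-passing hop
        return torch.sigmoid(self.w2(a_hat @ h)).squeeze(-1)  # node scores
```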
arXiv Detail & Related papers (2022-06-10T07:23:22Z) - Can Adversarial Training Be Manipulated By Non-Robust Features? [64.73107315313251]
Adversarial training, originally designed to resist test-time adversarial examples, has proven promising in mitigating training-time availability attacks.
We identify a novel threat model named stability attacks, which aims to hinder robust availability by slightly perturbing the training data.
Under this threat, we find that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting.
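In symbols (notation mine, reconstructed from the abstract): the stability attacker pre-shifts each training point by a small perturbation, and the defender still runs conventional adversarial training on top.

```latex
% Reconstruction of the claim; notation is mine, not the paper's.
% Attacker plants tiny shifts \delta_i; defender trains with budget \epsilon:
\min_{\theta} \sum_{i} \max_{\|e\|_\infty \le \epsilon}
    \mathcal{L}\bigl(f_{\theta}(x_i + \delta_i + e),\, y_i\bigr)
% The paper shows that, in a simple statistical setting, the minimizer of
% this objective can fail to be robust at test time even though each
% \delta_i is small.
```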
arXiv Detail & Related papers (2022-01-31T16:25:25Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in input data, the inability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.