A Kolmogorov-Arnold Network for Interpretable Cyberattack Detection in AGC Systems
- URL: http://arxiv.org/abs/2509.05259v1
- Date: Fri, 05 Sep 2025 17:18:17 GMT
- Title: A Kolmogorov-Arnold Network for Interpretable Cyberattack Detection in AGC Systems
- Authors: Jehad Jilan, Niranjana Naveen Nambiar, Ahmad Mohammad Saber, Alok Paranjape, Amr Youssef, Deepa Kundur
- Abstract summary: This work proposes Kolmogorov-Arnold Networks (KAN) as an interpretable and accurate method for FDIA detection in AGC systems. KAN models include a method for extracting symbolic equations, and are thus able to provide more interpretability than the majority of machine learning models. Our findings confirm that the proposed KAN model achieves FDIA detection rates of up to 95.97% and 95.9% for the initial model and the symbolic formula, respectively.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automatic Generation Control (AGC) is essential for power grid stability but remains vulnerable to stealthy cyberattacks, such as False Data Injection Attacks (FDIAs), which can disturb the system's stability while evading traditional detection methods. Unlike previous works that relied on black-box approaches, this work proposes Kolmogorov-Arnold Networks (KAN) as an interpretable and accurate method for FDIA detection in AGC systems, considering the system nonlinearities. KAN models include a method for extracting symbolic equations, and are thus able to provide more interpretability than the majority of machine learning models. The proposed KAN is trained offline to learn the complex nonlinear relationships between the AGC measurements under different operating scenarios. After training, symbolic formulas that describe the trained model's behavior can be extracted and leveraged, greatly enhancing interpretability. Our findings confirm that the proposed KAN model achieves FDIA detection rates of up to 95.97% and 95.9% for the initial model and the symbolic formula, respectively, with a low false alarm rate, offering a reliable approach to enhancing AGC cybersecurity.
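As a rough illustration of the workflow the abstract describes (train a KAN on labeled AGC measurements, then distill a symbolic detection formula), a minimal sketch using the open-source pykan library follows. The feature set, network width, and hyperparameters are assumptions, and the API calls reflect pykan's published examples (which vary across versions); this is not the authors' code.

```python
# Minimal sketch of KAN-based FDIA detection, assuming the pykan API
# (https://github.com/KindXiaoming/pykan); feature names and
# hyperparameters are illustrative, not taken from the paper.
import torch
from kan import KAN

# Hypothetical AGC features per sample, e.g. frequency deviation,
# tie-line power flow, and Area Control Error; labels are
# 0 = normal, 1 = FDIA. Replace with real measurements.
X_train = torch.randn(1000, 3)
y_train = torch.randint(0, 2, (1000, 1)).float()
X_test = torch.randn(200, 3)
y_test = torch.randint(0, 2, (200, 1)).float()

dataset = {
    "train_input": X_train, "train_label": y_train,
    "test_input": X_test, "test_label": y_test,
}

# Small KAN: 3 inputs -> 5 hidden spline units -> 1 detection score.
model = KAN(width=[3, 5, 1], grid=5, k=3)
model.fit(dataset, opt="LBFGS", steps=50)   # 'train' in older pykan versions

# Replace learned splines with symbolic primitives, then read out a
# closed-form detection formula -- the interpretability step the
# abstract highlights.
model.auto_symbolic(lib=["x", "x^2", "tanh", "sin", "exp"])
print(model.symbolic_formula())
```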
Related papers
- Contrastive-KAN: A Semi-Supervised Intrusion Detection Framework for Cybersecurity with scarce Labeled Data [0.0]
We propose a real-time intrusion detection system based on a semi-supervised contrastive learning framework. Our method leverages abundant unlabeled data to effectively distinguish between normal and attack behaviors. Experimental results show that our method outperforms existing contrastive learning-based approaches.
arXiv Detail & Related papers (2025-07-14T21:02:34Z)
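The semi-supervised contrastive idea above can be made concrete with a standard NT-Xent-style loss over two augmented views of unlabeled traffic. The PyTorch sketch below is a generic illustration, not the paper's implementation; the encoder, temperature, and augmentation scheme are all assumptions.

```python
# Generic sketch of a contrastive (NT-Xent) loss for intrusion
# detection; NOT the paper's code -- encoder and temperature assumed.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views z1, z2 of the same
    unlabeled traffic windows (each of shape [batch, dim])."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # [2B, dim]
    sim = z @ z.t() / temperature                  # pairwise similarities
    n = z.size(0)
    sim.fill_diagonal_(float("-inf"))              # exclude self-pairs
    # The positive of sample i is its other view at index (i + B) mod 2B.
    targets = (torch.arange(n, device=z.device) + n // 2) % n
    return F.cross_entropy(sim, targets)

# Usage: embed two augmentations of unlabeled flows with any encoder
# (the paper pairs this idea with a KAN), then minimize nt_xent_loss
# so that normal and attack behaviors separate in embedding space.
```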
- Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z) - Channel Fingerprint Construction for Massive MIMO: A Deep Conditional Generative Approach [65.47969413708344]
We introduce the concept of CF twins and design a conditional generative diffusion model (CGDM). We employ a variational inference technique to derive the evidence lower bound (ELBO) for the log-marginal distribution of the observed fine-grained CF conditioned on the coarse-grained CF. We show that the proposed approach exhibits significant improvement in reconstruction performance compared to the baselines.
arXiv Detail & Related papers (2025-05-12T01:36:06Z)
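For context, the quantity such models bound has the generic conditional-ELBO form below; this is the textbook decomposition under an assumed variational posterior q, not the paper's diffusion-specific derivation (which expands the KL term over denoising steps).

```latex
% Generic conditional ELBO: x = fine-grained CF, c = coarse-grained CF,
% z = latents, q = variational posterior (all symbols assumed here).
\log p_\theta(x \mid c)
  \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x, c)}\!\left[ \log p_\theta(x \mid z, c) \right]
  \;-\;
  \mathrm{KL}\!\left( q_\phi(z \mid x, c) \,\middle\|\, p(z \mid c) \right)
```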
- Machine Learning-Based Cyberattack Detection and Identification for Automatic Generation Control Systems Considering Nonlinearities [0.6144680854063939]
AGC systems' reliance on communicated measurements exposes them to false data injection attacks (FDIAs). This paper proposes a machine learning (ML)-based detection framework that identifies FDIAs and determines the compromised measurements. Our results demonstrate the efficacy of the proposed method in detecting FDIAs while maintaining a low false alarm rate, with an F1-score of up to 99.98%, outperforming existing approaches.
arXiv Detail & Related papers (2025-04-12T23:06:59Z)
- An Unsupervised Adversarial Autoencoder for Cyber Attack Detection in Power Distribution Grids [0.0]
This paper proposes an unsupervised adversarial autoencoder (AAE) model to detect false data injection attacks (FDIAs) in unbalanced power distribution grids.
The proposed method utilizes long short-term memory (LSTM) in the structure of the autoencoder to capture the temporal dependencies in the time-series measurements.
It is tested on IEEE 13-bus and 123-bus systems with historical meteorological data and historical real-world load data.
arXiv Detail & Related papers (2024-03-31T01:20:01Z)
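To make the LSTM-autoencoder idea concrete, here is a minimal PyTorch sketch of reconstruction-based FDIA detection on measurement windows. Layer sizes, the threshold rule, and the plain (non-adversarial) autoencoder are simplifications and assumptions; the paper's AAE additionally trains a discriminator on the latent space.

```python
# Minimal LSTM autoencoder for anomaly detection on measurement
# windows; a simplified stand-in for the paper's adversarial
# autoencoder (layer sizes and threshold are assumptions).
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                     # x: [batch, time, features]
        _, (h, _) = self.encoder(x)           # summarize window into h
        seq = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(seq)            # unroll latent over time
        return self.out(dec)                  # reconstructed window

model = LSTMAutoencoder()
x = torch.randn(8, 24, 6)                     # 8 windows, 24 steps, 6 sensors
recon = model(x)
err = ((recon - x) ** 2).mean(dim=(1, 2))     # per-window reconstruction error
threshold = 0.5                               # calibrated on attack-free data
is_attack = err > threshold                   # flag high-error windows as FDIA
```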
- Grid Monitoring with Synchro-Waveform and AI Foundation Model Technologies [41.994460245857404]
This article advocates for the development of a next-generation grid monitoring and control system designed for future grids dominated by inverter-based resources. We develop a physics-based AI foundation model with high-resolution synchro-waveform measurement technology to enhance grid resilience and reduce economic losses from outages.
arXiv Detail & Related papers (2024-03-11T17:28:46Z)
- Deep Learning-Based Cyber-Attack Detection Model for Smart Grids [6.642400003243118]
A novel artificial intelligence-based cyber-attack detection model is developed to stop data integrity cyber-attacks (DIAs) on load data received by supervisory control and data acquisition (SCADA) systems.
In the proposed model, the load data is first forecasted using a regression model; after a processing stage, the processed data is clustered using an unsupervised learning method.
The proposed EE-BiLSTM method performs more robustly and accurately than the other two methods.
arXiv Detail & Related papers (2023-12-14T10:54:04Z)
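The forecast-then-cluster pipeline described above can be sketched generically: forecast the expected load, compute residuals, and cluster them to separate normal deviations from injected ones. The sketch below uses scikit-learn with assumed data shapes; linear regression and k-means are stand-ins for the paper's BiLSTM forecaster and unsupervised stage.

```python
# Generic forecast-then-cluster DIA detection sketch (scikit-learn);
# linear regression and k-means are stand-ins for the paper's
# BiLSTM forecaster and clustering stage -- all assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = np.arange(500).reshape(-1, 1)             # time-index feature
load = 100 + 10 * np.sin(hours.ravel() / 24) + rng.normal(0, 1, 500)
load[400:420] += 25                               # injected (attacked) samples

# 1) Forecast expected load from the time feature.
forecaster = LinearRegression().fit(hours, load)
residual = np.abs(load - forecaster.predict(hours))

# 2) Cluster residuals; the high-residual cluster is flagged as DIA.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    residual.reshape(-1, 1))
attack_cluster = labels[np.argmax(residual)]      # cluster of largest residual
flagged = np.where(labels == attack_cluster)[0]
print(f"{len(flagged)} samples flagged as potential DIAs")
```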
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
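For context, the Chernoff bound underlying such certificates has the generic form below; the margin function f and the distribution of the random transformation T are placeholders here, not the paper's exact statement.

```latex
% Generic Chernoff bound: for a random input transformation T and any
% t > 0, the probability that a model's margin f falls below a level a
% is bounded via the moment-generating function (symbols assumed).
\Pr\big[ f(T(x)) \le a \big]
  \;\le\;
  \inf_{t > 0} \; e^{t a} \, \mathbb{E}\!\left[ e^{-t\, f(T(x))} \right]
```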
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Deep Learning based Covert Attack Identification for Industrial Control Systems [5.299113288020827]
We develop a data-driven framework that can be used to detect, diagnose, and localize a type of cyberattack called covert attacks on smart grids.
The framework has a hybrid design that combines an autoencoder, a recurrent neural network (RNN) with a Long Short-Term Memory (LSTM) layer, and a Deep Neural Network (DNN).
arXiv Detail & Related papers (2020-09-25T17:48:43Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
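The single-forward-pass idea can be illustrated with a distance-to-centroid uncertainty score. The sketch below is a loose simplification of the DUQ-style recipe (RBF distances to class centroids), with kernel width, centroid handling, and the rejection threshold all assumed.

```python
# Loose sketch of deterministic single-pass uncertainty (DUQ-style):
# score inputs by RBF distance to learned class centroids and reject
# far-away points as out-of-distribution. Sizes/threshold are assumed.
import torch
import torch.nn as nn

class DUQHead(nn.Module):
    def __init__(self, feat_dim=64, n_classes=10, sigma=0.3):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.sigma = sigma

    def forward(self, features):              # features: [batch, feat_dim]
        d2 = torch.cdist(features, self.centroids) ** 2
        return torch.exp(-d2 / (2 * self.sigma ** 2))  # RBF kernel scores

head = DUQHead()
feats = torch.randn(16, 64)                   # encoder output (any backbone)
scores = head(feats)                          # [16, 10] per-class certainty
confidence, pred = scores.max(dim=1)
reject = confidence < 0.5                     # single-pass OOD rejection
```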