Large Language Models for Detecting Cyberattacks on Smart Grid Protective Relays
- URL: http://arxiv.org/abs/2601.04443v1
- Date: Wed, 07 Jan 2026 23:12:03 GMT
- Title: Large Language Models for Detecting Cyberattacks on Smart Grid Protective Relays
- Authors: Ahmad Mohammad Saber, Saeed Jafari, Zhengmao Ouyang, Paul Budnarain, Amr Youssef, Deepa Kundur
- Abstract summary: DistilBERT detects 97.6% of cyberattacks without compromising TCDR dependability. DistilBERT achieves latency below 6 ms on a commercial workstation. GPT-2 and DistilBERT+LoRA achieve comparable performance.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a large language model (LLM)-based framework for detecting cyberattacks on transformer current differential relays (TCDRs), which, if undetected, may trigger false tripping of critical transformers. The proposed approach adapts and fine-tunes compact LLMs such as DistilBERT to distinguish cyberattacks from actual faults using textualized multidimensional TCDR current measurements recorded before and after tripping. Our results demonstrate that DistilBERT detects 97.6% of cyberattacks without compromising TCDR dependability and achieves inference latency below 6 ms on a commercial workstation. Additional evaluations confirm the framework's robustness under combined time-synchronization and false-data-injection attacks, resilience to measurement noise, and stability across prompt formulation variants. Furthermore, GPT-2 and DistilBERT+LoRA achieve comparable performance, highlighting the potential of LLMs for enhancing smart grid cybersecurity. We provide the full dataset used in this study for reproducibility.
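As a rough illustration of the textualization step described in the abstract, the sketch below serializes hypothetical pre- and post-trip current samples into a prompt string that a compact LLM classifier such as DistilBERT could consume. The function name, field layout, and values are invented for illustration and are not taken from the paper's dataset or prompt format.

```python
# Hypothetical sketch: serialize multidimensional TCDR current measurements
# recorded before and after a trip into a text prompt for an LLM classifier.
def textualize_tcdr_window(pre_trip, post_trip, phase="A"):
    """Serialize per-phase current magnitudes (in amperes) into a prompt string."""
    pre = ", ".join(f"{x:.2f}" for x in pre_trip)
    post = ", ".join(f"{x:.2f}" for x in post_trip)
    return (
        f"Phase {phase} differential current before trip: [{pre}] A. "
        f"After trip: [{post}] A. "
        "Label: fault or cyberattack?"
    )

prompt = textualize_tcdr_window([0.12, 0.11, 0.13], [4.85, 4.91, 4.88])
print(prompt)
```

In the paper's framework, prompts like this would be tokenized and passed to a fine-tuned sequence classifier; the exact prompt formulation is one of the variants the authors report stability across.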
Related papers
- Multi-Agent-Driven Cognitive Secure Communications in Satellite-Terrestrial Networks [58.70163955407538]
Malicious eavesdroppers pose a serious threat to private information in satellite-terrestrial networks (STNs). We propose a cognitive secure communication framework driven by multiple agents that coordinates spectrum scheduling and protection through real-time sensing. We exploit generative adversarial networks to produce adversarial matrices, and employ learning-aided power control to set real and adversarial signal powers for the protection layer.
arXiv Detail & Related papers (2026-01-06T10:30:41Z) - DiffuGuard: How Intrinsic Safety is Lost and Found in Diffusion Large Language Models [50.21378052667732]
We conduct an in-depth analysis of dLLM vulnerabilities to jailbreak attacks across two distinct dimensions: intra-step and inter-step dynamics. We propose DiffuGuard, a training-free defense framework that addresses vulnerabilities through a dual-stage approach.
arXiv Detail & Related papers (2025-09-29T05:17:10Z) - Hybrid LLM-Enhanced Intrusion Detection for Zero-Day Threats in IoT Networks [6.087274577167399]
This paper presents a novel approach to intrusion detection by integrating traditional signature-based methods with the contextual understanding capabilities of the GPT-2 Large Language Model (LLM). We propose a hybrid IDS framework that merges the robustness of signature-based techniques with the adaptability of GPT-2-driven semantic analysis. Experimental evaluations on a representative intrusion dataset demonstrate that our model enhances detection accuracy by 6.3%, reduces false positives by 9.0%, and maintains near real-time responsiveness.
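The hybrid design described above can be sketched in a few lines: check known signatures first, then fall back to a semantic score for zero-day coverage. The signature patterns, threshold, and score source below are illustrative stand-ins, not the paper's actual ruleset or GPT-2 pipeline.

```python
# Illustrative hybrid IDS decision logic: signature match takes priority,
# a (hypothetical) LLM-derived semantic score handles unseen payloads.
SIGNATURES = {"' OR 1=1", "../etc/passwd"}  # toy patterns, not a real ruleset

def hybrid_ids(payload, llm_score, threshold=0.5):
    """Return a verdict: known-signature hit, semantic flag, or benign."""
    if any(sig in payload for sig in SIGNATURES):
        return "signature-match"
    return "semantic-flag" if llm_score >= threshold else "benign"

print(hybrid_ids("GET /../etc/passwd", 0.1))   # caught by the rule set
print(hybrid_ids("weird encoded blob", 0.9))   # caught by the semantic score
```

The design choice is that signatures give cheap, explainable detections for known threats, while the model score adds adaptability for payloads no rule anticipates.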
arXiv Detail & Related papers (2025-07-10T04:10:03Z) - Adversarial Attacks on Deep Learning-Based False Data Injection Detection in Differential Relays [3.4061238650474666]
This paper demonstrates that carefully crafted adversarial false data injection attacks (FDIAs) can evade existing deep learning-based schemes (DLSs) used to detect FDIAs in smart grids. We propose a novel adversarial attack framework, utilizing the Fast Gradient Sign Method (FGSM), which exploits DLS vulnerabilities. Our results highlight the significant threat posed by adversarial attacks to DLS-based FDIA detection, underscore the necessity for robust cybersecurity measures in smart grids, and demonstrate the effectiveness of adversarial training in enhancing model robustness against adversarial FDIAs.
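For readers unfamiliar with FGSM, the core step is a single perturbation of the input along the sign of the loss gradient. The toy example below applies it to a linear logistic detector, where the input gradient has a closed form; the weight vector, sample, and step size are invented for illustration and bear no relation to the paper's DLS.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One FGSM step: move x along the sign of the input gradient of the
    binary cross-entropy loss for a linear logistic model (closed form)."""
    grad_x = (sigmoid(w @ x) - y) * w   # dL/dx for BCE with logistic output
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])          # toy detector weights
x = np.array([0.2, 0.1, -0.3])          # sample labeled as attack (y = 1)
x_adv = fgsm(x, 1.0, w, eps=0.1)
print(x_adv)                            # [0.1, 0.2, -0.4]
```

Maximizing the loss pushes the detector's output for the attack sample toward the benign side, which is exactly the evasion effect the paper studies; for deep networks the gradient is obtained by backpropagation rather than a closed form.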
arXiv Detail & Related papers (2025-06-24T04:22:26Z) - MISLEADER: Defending against Model Extraction with Ensembles of Distilled Models [56.09354775405601]
Model extraction attacks aim to replicate the functionality of a black-box model through query access. Most existing defenses presume that attacker queries contain out-of-distribution (OOD) samples, enabling them to detect and disrupt suspicious inputs. We propose MISLEADER, a novel defense strategy that does not rely on OOD assumptions.
arXiv Detail & Related papers (2025-06-03T01:37:09Z) - Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
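The input-remapping idea above can be made concrete with a small numerical check: an affine remap of the input domain onto a narrower range has Lipschitz constant equal to the range ratio, which scales down the Lipschitz constant of the composed model. The ranges and values below are illustrative and not from the paper.

```python
import numpy as np

def remap(x, xmin, xmax, lo=-1.0, hi=1.0):
    """Affinely map [xmin, xmax] onto [lo, hi].
    The map's Lipschitz constant is (hi - lo) / (xmax - xmin)."""
    return lo + (hi - lo) * (x - xmin) / (xmax - xmin)

x, y = np.array([0.0, 10.0]), np.array([1.0, 9.0])
rx, ry = remap(x, 0.0, 10.0), remap(y, 0.0, 10.0)
# Empirical contraction factor of the remap on this pair of inputs:
ratio = np.abs(rx - ry).max() / np.abs(x - y).max()
print(ratio)  # 0.2 = (1 - (-1)) / (10 - 0)
```

Because the Lipschitz constant of a composition is bounded by the product of the constants of its parts, shrinking the input range by this factor tightens the certified robustness bound of the downstream network by the same factor.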
arXiv Detail & Related papers (2024-06-28T03:10:36Z) - Enhancing IoT Security with CNN and LSTM-Based Intrusion Detection Systems [0.23408308015481666]
Our proposed model consists of a combination of convolutional neural network (CNN) and long short-term memory (LSTM) deep learning (DL) models. This fusion facilitates the detection and classification of IoT traffic into two categories: benign and malicious activity.
Our proposed model achieves an accuracy rate of 98.42%, accompanied by a minimal loss of 0.0275.
arXiv Detail & Related papers (2024-05-28T22:12:15Z) - LogShield: A Transformer-based APT Detection System Leveraging Self-Attention [2.1256044139613772]
This paper proposes LogShield, a framework designed to detect APT attack patterns leveraging the power of self-attention in transformers.
We incorporate customized embedding layers to effectively capture the context of event sequences derived from provenance graphs.
Our framework achieved superior F1 scores of 98% and 95% on the two datasets respectively, surpassing the F1 scores of 96% and 94% obtained by LSTM models.
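Since the comparison above is stated in terms of F1 scores, a quick reminder of the metric: F1 is the harmonic mean of precision and recall. The counts in the example below are invented to produce a round number and are not LogShield's confusion-matrix values.

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts chosen so both precision and recall are 0.98:
print(f1_score(tp=98, fp=2, fn=2))  # 0.98
```

F1 is preferred over plain accuracy for APT detection because attack events are rare, so a classifier can score high accuracy while missing most attacks.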
arXiv Detail & Related papers (2023-11-09T20:43:15Z) - Revolutionizing Cyber Threat Detection with Large Language Models: A privacy-preserving BERT-based Lightweight Model for IoT/IIoT Devices [3.340416780217405]
This paper presents SecurityBERT, a novel architecture that leverages the Bidirectional Representations from Transformers (BERT) model for cyber threat detection in IoT networks.
Our research demonstrates that SecurityBERT outperforms traditional Machine Learning (ML) and Deep Learning (DL) methods, such as Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), in cyber threat detection.
SecurityBERT achieved an impressive 98.2% overall accuracy in identifying fourteen distinct attack types, surpassing previous records set by hybrid solutions.
arXiv Detail & Related papers (2023-06-25T15:04:21Z) - Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness defends against adversarial samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z) - From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z) - From Sound Representation to Model Robustness [82.21746840893658]
We investigate the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
Averaged over various experiments on three environmental sound datasets, we found the ResNet-18 model outperforms other deep learning architectures.
arXiv Detail & Related papers (2020-07-27T17:30:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.