A Stream Learning Approach for Real-Time Identification of False Data
Injection Attacks in Cyber-Physical Power Systems
- URL: http://arxiv.org/abs/2210.06729v1
- Date: Thu, 13 Oct 2022 04:53:01 GMT
- Title: A Stream Learning Approach for Real-Time Identification of False Data
Injection Attacks in Cyber-Physical Power Systems
- Authors: Ehsan Hallaji, Roozbeh Razavi-Far, Meng Wang, Mehrdad Saif, Bruce
Fardanesh
- Abstract summary: The proposed framework dynamically detects and classifies false data injection attacks.
It retrieves the control signal using the acquired information.
The framework is evaluated on real-world data captured from the Central New York Power System.
- Score: 11.867912248195543
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel data-driven framework to aid in system state
estimation when the power system is under unobservable false data injection
attacks. The proposed framework dynamically detects and classifies false data
injection attacks. Then, it retrieves the control signal using the acquired
information. This process is accomplished in three main modules, with novel
designs, for detection, classification, and control signal retrieval. The
detection module monitors historical changes in phasor measurements and
captures, on the complex plane, any deviation pattern caused by an attack.
This approach helps reveal characteristics of the attack, including the
direction, magnitude, and ratio of the injected false data. Using this
information, the signal retrieval module can easily recover the original
control signal and remove the injected false data. Further information
regarding the attack type can be obtained through the classifier module. The
proposed ensemble learner is compatible with harsh learning conditions,
including the lack of labeled data, concept drift, concept evolution, and
recurring classes, while remaining independent of external updates. The
proposed novel classifier
can dynamically learn from data and classify attacks under all these harsh
learning conditions. The introduced framework is evaluated on real-world
data captured from the Central New York Power System. The obtained results
indicate the efficacy and stability of the proposed framework.
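To make the detection and retrieval modules concrete, the sketch below (Python) flags deviations of a streaming phasor on the complex plane and recovers the signal by removing the estimated injection. The exponential-moving-average baseline, the fixed threshold, and all names are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def detect_and_retrieve(phasors, alpha=0.9, threshold=0.05):
    """Toy complex-plane deviation detector for one streaming phasor channel.

    phasors   : 1-D complex array of phasor measurements (the stream)
    alpha     : smoothing factor of the historical baseline (assumption)
    threshold : deviation magnitude that flags an attack (assumption)
    """
    baseline = phasors[0]          # crude stand-in for the historical model
    recovered, attacks = [], []
    for z in phasors:
        delta = z - baseline       # deviation pattern on the complex plane
        if abs(delta) > threshold:
            # Detection: characterize the injection for the classifier module.
            attacks.append({
                "direction": float(np.angle(delta)),    # angle of injection
                "magnitude": float(abs(delta)),         # size of injection
                "ratio": float(abs(z) / abs(baseline)), # injected vs. original
            })
            z = z - delta          # retrieval: strip the estimated injection
        else:
            # Clean sample: let it refine the historical baseline.
            baseline = alpha * baseline + (1 - alpha) * z
        recovered.append(z)
    return np.array(recovered), attacks
```

In this toy version the recovered sample collapses back to the baseline; the characteristics collected in `attacks` are the kind of information the classifier and signal retrieval modules would consume.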
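For the classifier module, the abstract lists the conditions the ensemble must survive (lack of labels, concept drift, concept evolution, recurring classes, no external updates) but not its design, so the following is only a generic drift-aware streaming-ensemble pattern, not the authors' classifier; the window size, drift heuristic, and base learner are all assumptions.

```python
import numpy as np
from collections import deque
from sklearn.linear_model import SGDClassifier

class DriftAwareEnsemble:
    """Generic streaming ensemble: spawn a new member whenever the recent
    error rate rises well above its long-term level (a crude stand-in for
    proper drift detection)."""

    def __init__(self, classes, window=100, drift_ratio=1.5):
        self.classes = list(classes)
        self.errors = deque(maxlen=window)  # recent prequential errors
        self.drift_ratio = drift_ratio      # drift threshold (assumption)
        self.long_term = 0.5                # pessimistic initial error rate
        self.members = [SGDClassifier(loss="log_loss")]

    def predict(self, x):
        x = np.asarray(x).reshape(1, -1)
        votes = [m.predict(x)[0] for m in self.members if hasattr(m, "coef_")]
        if not votes:                       # nothing trained yet
            return self.classes[0]
        values, counts = np.unique(votes, return_counts=True)
        return values[np.argmax(counts)]    # majority vote

    def learn_one(self, x, y):
        x = np.asarray(x).reshape(1, -1)
        err = int(self.predict(x) != y)     # test-then-train (prequential)
        self.errors.append(err)
        self.long_term = 0.99 * self.long_term + 0.01 * err
        if (len(self.errors) == self.errors.maxlen
                and np.mean(self.errors) > self.drift_ratio * self.long_term):
            self.members.append(SGDClassifier(loss="log_loss"))  # new concept
            self.errors.clear()
        for m in self.members:              # incrementally update all members
            m.partial_fit(x, [y], classes=self.classes)
```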
Related papers
- Unlearnable Examples Detection via Iterative Filtering [84.59070204221366]
Deep neural networks have been shown to be vulnerable to data poisoning attacks.
Detecting poisoned samples in a mixed dataset is both valuable and challenging.
We propose an Iterative Filtering approach for identifying unlearnable examples (UEs).
arXiv Detail & Related papers (2024-08-15T13:26:13Z)
- An Adversarial Approach to Evaluating the Robustness of Event Identification Models [12.862865254507179]
This paper considers a physics-based modal decomposition method to extract features for event classification.
The resulting classifiers are tested against an adversarial algorithm to evaluate their robustness.
arXiv Detail & Related papers (2024-02-19T18:11:37Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
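The FreqFed entry above only hints at the mechanism. Purely as a generic illustration of frequency-domain filtering of client updates (not the paper's actual algorithm, which this listing does not describe), one might sketch:

```python
import numpy as np
from scipy.fft import dct

def freq_filter_aggregate(updates, keep=10):
    """Hedged sketch: filter FL model updates by low-frequency fingerprints.

    updates : list of 1-D arrays (flattened client model updates)
    keep    : number of low-frequency DCT coefficients kept (assumption)
    """
    # Fingerprint each update by its low-frequency DCT coefficients.
    fingerprints = np.stack([dct(u)[:keep] for u in updates])
    # Benign updates tend to cluster; drop outliers far from the median.
    center = np.median(fingerprints, axis=0)
    dists = np.linalg.norm(fingerprints - center, axis=1)
    mask = dists <= 2.0 * np.median(dists)  # crude outlier cut-off (assumption)
    kept = [u for u, ok in zip(updates, mask) if ok]
    return np.mean(kept, axis=0)            # aggregate surviving updates
```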
- Mitigating Data Injection Attacks on Federated Learning [20.24380409762923]
Federated learning is a technique that allows multiple entities to collaboratively train models using their data.
Despite its advantages, federated learning can be susceptible to false data injection attacks.
We propose a novel technique to detect and mitigate data injection attacks on federated learning systems.
arXiv Detail & Related papers (2023-12-04T18:26:31Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data.
We introduce a simple, generic, and generalisable framework for which key behaviours observed in practical systems arise with high probability.
arXiv Detail & Related papers (2023-09-07T12:02:00Z)
- Boosting Model Inversion Attacks with Adversarial Examples [26.904051413441316]
We propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.
First, we regularize the training process of the attack model with an added semantic loss function.
Second, we inject adversarial examples into the training data to increase the diversity of the class-related parts.
arXiv Detail & Related papers (2023-06-24T13:40:58Z)
- Federated Learning Based Distributed Localization of False Data Injection Attacks on Smart Grids [5.705281336771011]
A false data injection attack (FDIA) is a class of attack that targets smart measurement devices by injecting malicious data.
We propose a federated learning-based scheme combined with a hybrid deep neural network architecture.
We validate the proposed architecture by extensive simulations on the IEEE 57, 118, and 300 bus systems and real electricity load data.
arXiv Detail & Related papers (2023-06-17T20:29:55Z)
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring changes in the data manipulation into changes in the model outputs, Memorization Discrepancy can discover imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Online Dictionary Learning Based Fault and Cyber Attack Detection for Power Systems [4.657875410615595]
This paper deals with the event and intrusion detection problem by leveraging a stream data mining classifier.
We first build a dictionary by learning higher-level features from unlabeled data.
Then, the labeled data are represented as sparse linear combinations of learned dictionary atoms.
We capitalize on those sparse codes to train the online classifier along with efficient change detectors.
arXiv Detail & Related papers (2021-08-24T23:17:58Z)
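This last entry is the closest in spirit to the main paper. Its pipeline (a dictionary learned from unlabeled data, labeled samples sparse-coded against it, an online classifier on the codes) can be approximated with off-the-shelf scikit-learn pieces; the sketch below uses synthetic data and guessed hyperparameters throughout, and a simple prequential error trace stands in for the paper's change detectors.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_unlabeled = rng.normal(size=(500, 20))   # synthetic unlabeled measurements
X_stream = rng.normal(size=(200, 20))      # synthetic labeled stream
y_stream = rng.integers(0, 2, size=200)    # 0 = physical event, 1 = attack

# 1) Learn higher-level features (dictionary atoms) from unlabeled data.
dictionary = MiniBatchDictionaryLearning(
    n_components=32,                       # dictionary size (assumption)
    transform_algorithm="omp",             # sparse coding via matching pursuit
    transform_n_nonzero_coefs=5,           # sparsity level (assumption)
    random_state=0,
).fit(X_unlabeled)

# 2) Represent labeled samples as sparse codes and train an online
#    classifier on them, test-then-train style.
clf = SGDClassifier(loss="log_loss", random_state=0)
errors = []
for x, y in zip(X_stream, y_stream):
    code = dictionary.transform(x.reshape(1, -1))  # sparse code of one sample
    if hasattr(clf, "coef_"):
        errors.append(int(clf.predict(code)[0] != y))  # prequential error
    clf.partial_fit(code, [y], classes=[0, 1])

# A rising recent error rate would serve as a crude change signal.
print("recent error rate:", np.mean(errors[-50:]))
```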
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.