Adversarial Purification for Data-Driven Power System Event Classifiers
with Diffusion Models
- URL: http://arxiv.org/abs/2311.07110v1
- Date: Mon, 13 Nov 2023 06:52:56 GMT
- Title: Adversarial Purification for Data-Driven Power System Event Classifiers
with Diffusion Models
- Authors: Yuanbin Cheng, Koji Yamashita, Jim Follum, Nanpeng Yu
- Abstract summary: Global deployment of phasor measurement units (PMUs) enables real-time monitoring of the power system.
Recent studies reveal that machine learning-based methods are vulnerable to adversarial attacks.
This paper proposes an effective adversarial purification method based on the diffusion model to counter adversarial attacks.
- Score: 0.8848340429852071
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The global deployment of phasor measurement units (PMUs) enables
real-time monitoring of the power system, which has stimulated considerable
research into machine learning-based models for event detection and
classification. However, recent studies reveal that machine learning-based
methods are vulnerable to adversarial attacks, which can fool the event
classifiers by adding small perturbations to the raw PMU data. To mitigate the
threats posed by adversarial attacks, research on defense strategies is
urgently needed. This paper proposes an effective adversarial purification
method based on the diffusion model to counter adversarial attacks on the
machine learning-based power system event classifier. The proposed method
includes two steps: injecting noise into the PMU data; and utilizing a
pre-trained neural network to eliminate the added noise while simultaneously
removing perturbations introduced by the adversarial attacks. The proposed
adversarial purification method significantly increases the accuracy of the
event classifier under adversarial attacks while satisfying the requirements of
real-time operations. In addition, the theoretical analysis reveals that the
proposed diffusion model-based adversarial purification method decreases the
distance between the original and compromised PMU data, which reduces the
impacts of adversarial attacks. The empirical results on a large-scale
real-world PMU dataset validate the effectiveness and computational efficiency
of the proposed adversarial purification method.
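The two-step procedure in the abstract (noise injection followed by denoising with a pre-trained network) corresponds to a truncated forward/reverse pass of a diffusion model. The sketch below is a minimal illustration under that assumption: the linear noise schedule, the truncation step t_star, the PMU window shape, and the placeholder denoiser standing in for the paper's pre-trained network are all illustrative, not the authors' implementation. Intuitively, diffusing both clean and attacked inputs to the same moderate noise level shrinks the gap between them, which is the distance-reduction property the theoretical analysis refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# DDPM-style linear noise schedule (assumed; the paper's exact schedule may differ).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t):
    """Stand-in for the pre-trained noise-prediction network eps_theta(x_t, t).
    In the paper this is a neural network trained on clean PMU data; here it is
    a placeholder so the sketch runs end to end."""
    return np.zeros_like(x_t)

def purify(x_adv, t_star=100):
    """Two-step purification of a (possibly attacked) PMU window:
    1) diffuse forward to step t_star, drowning small adversarial perturbations
       in Gaussian noise;
    2) run the reverse process back to t=0 with the pre-trained denoiser to
       recover a sample close to the clean data manifold."""
    # Step 1: forward diffusion q(x_t | x_0), sampled in closed form.
    noise = rng.standard_normal(x_adv.shape)
    x_t = (np.sqrt(alpha_bars[t_star]) * x_adv
           + np.sqrt(1.0 - alpha_bars[t_star]) * noise)

    # Step 2: reverse (ancestral) sampling driven by the denoiser.
    for t in range(t_star, 0, -1):
        eps_hat = denoiser(x_t, t)
        mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 1:
            x_t = mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
        else:
            x_t = mean
    return x_t

# Illustrative PMU window: 60 time steps x 4 channels (shape is an assumption).
x_attacked = rng.standard_normal((60, 4))
x_clean_estimate = purify(x_attacked)
print(x_clean_estimate.shape)  # (60, 4)
```

Choosing t_star trades off perturbation removal (a larger t_star washes out more of the attack) against fidelity to the original event signature, which is what allows the method to meet real-time accuracy requirements.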
Related papers
- Correlation Analysis of Adversarial Attack in Time Series Classification [6.117704456424016]
This study investigates the vulnerability of time series classification models to adversarial attacks.
Regularization techniques and noise introduction are shown to enhance the effectiveness of attacks.
Models designed to prioritize global information are revealed to possess greater resistance to adversarial manipulations.
arXiv Detail & Related papers (2024-08-21T01:11:32Z)
- Classifier Guidance Enhances Diffusion-based Adversarial Purification by Preserving Predictive Information [75.36597470578724]
Adversarial purification is one of the promising approaches to defend neural networks against adversarial attacks.
We propose the gUided Purification (COUP) algorithm, which purifies adversarial examples while keeping them away from the classifier's decision boundary.
Experimental results show that COUP can achieve better adversarial robustness under strong attack methods.
arXiv Detail & Related papers (2024-08-12T02:48:00Z)
- Diffusion-based Adversarial Purification for Intrusion Detection [0.6990493129893112]
Carefully crafted perturbations mislead ML models, enabling attackers to evade detection or trigger false alerts.
Adversarial purification has emerged as a compelling solution, particularly with diffusion models showing promising results.
This paper demonstrates the effectiveness of diffusion models in purifying adversarial examples in network intrusion detection.
arXiv Detail & Related papers (2024-06-25T14:48:28Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Enhancing Adversarial Robustness via Score-Based Optimization [22.87882885963586]
Adversarial attacks have the potential to mislead deep neural network classifiers by introducing slight perturbations.
We introduce a novel adversarial defense scheme named ScoreOpt, which optimizes adversarial samples at test time.
Our experimental results demonstrate that our approach outperforms existing adversarial defenses in terms of both robustness performance and inference speed.
arXiv Detail & Related papers (2023-07-10T03:59:42Z)
- Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP); a generic guidance-step sketch appears after this list.
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations raised by adversarial attacks to a shallow range.
arXiv Detail & Related papers (2022-05-30T10:11:15Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- TFDPM: Attack detection for cyber-physical systems with diffusion probabilistic models [10.389972581904999]
We propose TFDPM, a general framework for attack detection tasks in CPSs.
It simultaneously extracts temporal patterns and feature patterns from the historical data.
The noise scheduling network increases detection speed by a factor of three.
arXiv Detail & Related papers (2021-12-20T13:13:29Z)
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the research community's recent interest in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
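Several entries above (COUP, GDMP) steer the reverse diffusion steps so that purification stays faithful to the observed input or to the classifier's prediction. The sketch below, referenced from the GDMP entry, shows a generic guided reverse step in which a simple quadratic pull toward a reference trajectory is added to the usual denoising mean; the guidance scale s, the MSE objective, and all names are illustrative assumptions, not the exact COUP or GDMP formulation.

```python
import numpy as np

def guided_reverse_step(x_t, x_ref_t, eps_hat, alpha_t, alpha_bar_t, beta_t,
                        s=0.5, rng=None):
    """Hypothetical guided reverse step: standard DDPM posterior mean plus a
    gradient nudge from a quadratic guidance term. x_ref_t is a reference
    trajectory, e.g. the observed input diffused to the same step t, so the
    purified sample stays close to what was actually measured."""
    rng = rng if rng is not None else np.random.default_rng()
    # Unconditional DDPM posterior mean recovered from the predicted noise.
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)
    # Guidance: the gradient of 0.5 * ||x_t - x_ref_t||^2 is (x_t - x_ref_t);
    # subtracting a scaled copy pulls the sample toward the reference while denoising.
    mean = mean - s * beta_t * (x_t - x_ref_t)
    # Add the usual stochastic term (omitted at the final step in a full sampler).
    return mean + np.sqrt(beta_t) * rng.standard_normal(x_t.shape)

# Example call with toy values (shapes and numbers are arbitrary):
x_next = guided_reverse_step(
    x_t=np.zeros((60, 4)), x_ref_t=np.ones((60, 4)),
    eps_hat=np.zeros((60, 4)), alpha_t=0.99, alpha_bar_t=0.5, beta_t=0.01)
```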