Adversarial Attacks and Defense Methods for Power Quality Recognition
- URL: http://arxiv.org/abs/2202.07421v1
- Date: Fri, 11 Feb 2022 21:18:37 GMT
- Title: Adversarial Attacks and Defense Methods for Power Quality Recognition
- Authors: Jiwei Tian and Buhong Wang and Jing Li and Zhen Wang and Mete Ozay
- Abstract summary: Power systems that use vulnerable machine learning methods face a serious threat from adversarial examples.
We first propose a signal-specific method and a universal signal-agnostic method to attack power systems using generated adversarial examples.
Black-box attacks based on transferable characteristics and the above two methods are also proposed and evaluated.
- Score: 16.27980559254687
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The vulnerability of various machine learning methods to adversarial
examples has recently been explored in the literature. Power systems that use
these vulnerable methods face a serious threat from adversarial examples. To this
end, we first propose a signal-specific method and a universal signal-agnostic
method to attack power systems using generated adversarial examples. Black-box
attacks based on transferable characteristics and the above two methods are
also proposed and evaluated. We then adopt adversarial training to defend
systems against adversarial attacks. Experimental analyses demonstrate that our
signal-specific attack method achieves smaller perturbations than the Fast
Gradient Sign Method (FGSM), and our signal-agnostic attack method can generate
perturbations that fool most natural signals with high probability. Moreover,
the attack based on the universal signal-agnostic algorithm achieves a higher
black-box transfer rate than the attack based on the signal-specific algorithm.
In addition, the results show that the proposed adversarial training improves
the robustness of power systems to adversarial examples.
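For context, the FGSM baseline that the abstract compares against perturbs an input with a single step along the sign of the loss gradient. Below is a minimal PyTorch sketch for a 1-D power-quality signal classifier; the model, tensor shapes, and epsilon value are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal FGSM sketch for a 1-D power-quality signal classifier (PyTorch).
# `model`, the tensor shapes, and `epsilon` are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, signal, label, epsilon=0.01):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss)."""
    signal = signal.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(signal), label)
    loss.backward()
    # A single gradient-sign step; larger epsilon yields a stronger
    # but more perceptible distortion of the waveform.
    return (signal + epsilon * signal.grad.sign()).detach()
```

The paper's signal-specific method instead searches for a smaller per-signal perturbation, which is why it can fool the classifier with less distortion than a fixed-epsilon sign step like this one.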
Related papers
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- MIXPGD: Hybrid Adversarial Training for Speech Recognition Systems [18.01556863687433]
We propose the mixPGD adversarial training method to improve the robustness of models in ASR systems.
In standard adversarial training, adversarial samples are generated by leveraging supervised or unsupervised methods.
We merge the capabilities of both supervised and unsupervised approaches in our method to generate new adversarial samples that aid in improving model robustness.
arXiv Detail & Related papers (2023-03-10T07:52:28Z)
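The mixPGD entry above builds on adversarial training driven by PGD. As a point of reference, here is a minimal sketch of standard PGD adversarial training (in the spirit of Madry et al.); it is not the paper's mixPGD, which additionally mixes supervised and unsupervised perturbation generation, and all hyperparameters are illustrative.

```python
# Sketch of standard PGD adversarial training (in the spirit of Madry et
# al.), the common backbone that hybrid schemes such as mixPGD extend.
# This is NOT the paper's mixPGD; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    """Iterative gradient-sign steps projected back into the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project onto the eps-ball
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One update on adversarial examples instead of clean inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```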
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
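The detection approach in the recommender-systems entry above follows a generic pattern: craft adversarial examples, label them, and fit a supervised detector. A minimal sketch of that pattern is below; the detector architecture and flat input representation are assumptions, not the paper's design.

```python
# Generic attack-detection pattern: label crafted adversarial samples 1,
# clean samples 0, and fit a supervised detector. The architecture and
# flat feature representation are assumptions, not the paper's design.
import torch
import torch.nn as nn

class AttackDetector(nn.Module):
    """Binary classifier: clean (0) vs. adversarial (1) input."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, x):
        return self.net(x)

def train_detector(detector, clean, adversarial, epochs=10, lr=1e-3):
    """Fit the detector on a balanced set of clean and crafted samples."""
    x = torch.cat([clean, adversarial])
    y = torch.cat([torch.zeros(len(clean)), torch.ones(len(adversarial))]).long()
    opt = torch.optim.Adam(detector.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(detector(x), y)
        loss.backward()
        opt.step()
    return detector
```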
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns an optimizer for adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
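MAMA's core idea, learning the attack optimizer with a recurrent network, can be illustrated with a coordinate-wise LSTM that maps the current input gradient to a perturbation update. The sketch below conveys the concept only; the paper's actual architecture and meta-training procedure are not reproduced here.

```python
# Illustration of a learned attack optimizer: a coordinate-wise LSTM maps
# the current input gradient to a perturbation update. Concept sketch
# only; MAMA's actual architecture and meta-training are not shown.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAttackOptimizer(nn.Module):
    """Gradient in, perturbation update out, with recurrent state."""
    def __init__(self, hidden=32):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        g = grad.reshape(-1, 1)  # treat each coordinate as a batch element
        h, c = self.cell(g, state)
        return self.head(h).reshape(grad.shape), (h, c)

def learned_attack(model, opt_rnn, x, y, eps=0.03, steps=10):
    """Replace the hand-designed sign step with the learned update."""
    x_adv, state = x.clone().detach(), None
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        update, state = opt_rnn(grad, state)
        state = (state[0].detach(), state[1].detach())  # stop graph growth
        x_adv = x_adv.detach() + update.detach()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # stay in the eps-ball
    return x_adv.detach()
```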
- TREATED: Towards Universal Defense against Textual Adversarial Attacks [28.454310179377302]
We propose TREATED, a universal adversarial detection method that can defend against attacks of various perturbation levels without making any assumptions.
Extensive experiments on three competitive neural networks and two widely used datasets show that our method achieves better detection performance than baselines.
arXiv Detail & Related papers (2021-09-13T03:31:20Z)
- Adversarial example generation with AdaBelief Optimizer and Crop Invariance [8.404340557720436]
Adversarial attacks can be an important method to evaluate and select robust models in safety-critical applications.
We propose AdaBelief Iterative Fast Gradient Method (ABI-FGM) and Crop-Invariant attack Method (CIM) to improve the transferability of adversarial examples.
Our method has higher success rates than state-of-the-art gradient-based attack methods.
arXiv Detail & Related papers (2021-02-07T06:00:36Z)
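The ABI-FGM entry above combines an iterative fast-gradient attack with AdaBelief-style moment estimates. The sketch below shows one plausible reading of that combination; the exact update rule, hyperparameters, and the crop-invariance (CIM) component are not taken from the paper.

```python
# One plausible reading of an AdaBelief-driven iterative fast-gradient
# attack; the exact ABI-FGM update, hyperparameters, and the crop-
# invariance (CIM) component are not taken from the paper.
import torch
import torch.nn.functional as F

def adabelief_ifgm(model, x, y, eps=0.03, steps=10,
                   beta1=0.9, beta2=0.999, tol=1e-8):
    alpha = eps / steps                  # per-step budget
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)              # first moment of the gradient
    s = torch.zeros_like(x)              # variance of the "belief" (g - m)
    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        g, = torch.autograd.grad(loss, x_adv)
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2
        m_hat = m / (1 - beta1 ** t)     # bias corrections
        s_hat = s / (1 - beta2 ** t)
        step = m_hat / (s_hat.sqrt() + tol)
        x_adv = x_adv.detach() + alpha * step.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # stay in the eps-ball
    return x_adv.detach()
```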
- Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems [47.70973322193384]
Detecting adversarial attacks at an early stage poses significant challenges.
We propose attack-agnostic detection on reinforcement learning-based interactive recommendation systems.
We first craft adversarial examples to show their diverse distributions and then augment recommendation systems by detecting potential attacks.
arXiv Detail & Related papers (2020-06-14T15:41:47Z)