Investigating a Spectral Deception Loss Metric for Training Machine
Learning-based Evasion Attacks
- URL: http://arxiv.org/abs/2005.13124v1
- Date: Wed, 27 May 2020 02:02:03 GMT
- Title: Investigating a Spectral Deception Loss Metric for Training Machine
Learning-based Evasion Attacks
- Authors: Matthew DelVecchio, Vanessa Arndorfer, William C. Headley
- Abstract summary: Adversarial evasion attacks have been very successful in causing poor performance in a wide variety of machine learning applications.
This work introduces a new spectral deception loss metric that can be implemented during the training process to force the spectral shape to be more in line with the original signal.
- Score: 1.3750624267664155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial evasion attacks have been very successful in causing poor
performance in a wide variety of machine learning applications. One such
application is radio frequency spectrum sensing. While evasion attacks have
proven particularly successful in this area, they have done so to the detriment
of the signal's intended purpose. More specifically, for real-world
applications of interest, the resulting perturbed signal that is transmitted to
evade an eavesdropper must not deviate far from the original signal, lest the
intended information be destroyed. Recent work by the authors and others has
demonstrated an attack framework that allows for intelligent balancing between
these conflicting goals of evasion and communication. However, while these
methodologies consider creating adversarial signals that minimize
communications degradation, they have been shown to do so at the expense of the
spectral shape of the signal. This opens the adversarial signal up to defenses
at the eavesdropper such as filtering, which could render the attack
ineffective. To remedy this, this work introduces a new spectral deception loss
metric that can be implemented during the training process to force the
spectral shape to be more in line with the original signal. As an initial proof
of concept, a variety of methods are presented that provide a starting point
for this proposed loss. Through performance analysis, it is shown that these
techniques are effective in controlling the shape of the adversarial signal.
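The abstract leaves the exact form of the proposed loss open, noting only that several candidate formulations are evaluated as a starting point. As a purely illustrative sketch of what one such spectral-matching term could look like, the hypothetical PyTorch function below penalizes the mean-squared difference between the normalized log power spectra of the original and perturbed signals; the normalization, the log-domain comparison, and the function name are assumptions made for this example, not the authors' metric.

```python
# Illustrative sketch only: a hypothetical spectral-matching loss term that
# penalizes differences between the power spectra of the original and the
# perturbed (adversarial) signal. This is not the paper's exact formulation.
import torch


def spectral_deception_loss(original_iq: torch.Tensor,
                            adversarial_iq: torch.Tensor,
                            eps: float = 1e-12) -> torch.Tensor:
    """Mean-squared error between normalized log power spectra.

    Both inputs are assumed to be complex baseband signals of shape
    (batch, num_samples), e.g. built from I/Q components via torch.complex.
    """
    # Power spectrum estimate via squared FFT magnitude.
    psd_orig = torch.fft.fft(original_iq, dim=-1).abs() ** 2
    psd_adv = torch.fft.fft(adversarial_iq, dim=-1).abs() ** 2

    # Normalize each spectrum so the loss responds to spectral *shape*
    # rather than total transmit power.
    psd_orig = psd_orig / (psd_orig.sum(dim=-1, keepdim=True) + eps)
    psd_adv = psd_adv / (psd_adv.sum(dim=-1, keepdim=True) + eps)

    # Compare in the log domain so low-power out-of-band leakage still
    # contributes to the penalty.
    diff = torch.log10(psd_adv + eps) - torch.log10(psd_orig + eps)
    return torch.mean(diff ** 2)
```

In a training setup of the kind described above, such a term would presumably be added, with a tunable weight, to the existing evasion and communication losses so that the optimizer balances all three objectives; that weighting scheme is likewise an assumption, not taken from the paper.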
Related papers
- You Know What I'm Saying: Jailbreak Attack via Implicit Reference [22.520950422702757]
This study identifies a previously overlooked vulnerability, which we term Attack via Implicit Reference (AIR).
AIR decomposes a malicious objective into permissible objectives and links them through implicit references within the context.
Our experiments demonstrate AIR's effectiveness across state-of-the-art LLMs, achieving an attack success rate (ASR) exceeding 90% on most models.
arXiv Detail & Related papers (2024-10-04T18:42:57Z) - Detecting Adversarial Data via Perturbation Forgery [28.637963515748456]
Adversarial detection aims to identify and filter out adversarial data from the data flow based on discrepancies in distribution and noise patterns between natural and adversarial data.
New attacks based on generative models with imbalanced and anisotropic noise patterns evade detection.
We propose Perturbation Forgery, which includes noise distribution perturbation, sparse mask generation, and pseudo-adversarial data production, to train an adversarial detector capable of detecting unseen gradient-based, generative-model-based, and physical adversarial attacks.
arXiv Detail & Related papers (2024-05-25T13:34:16Z) - Subspace Defense: Discarding Adversarial Perturbations by Learning a Subspace for Clean Signals [52.123343364599094]
Adversarial attacks place carefully crafted perturbations on normal examples to fool deep neural networks (DNNs).
We first empirically show that the features of clean signals and of adversarial perturbations are redundant and span low-dimensional linear subspaces, respectively, with minimal overlap.
This makes it possible for DNNs to learn a subspace where only features of clean signals exist while those of perturbations are discarded.
arXiv Detail & Related papers (2024-03-24T14:35:44Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model. (A minimal, hypothetical sketch of this frequency-domain idea appears after this list.)
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Concealed Electronic Countermeasures of Radar Signal with Adversarial
Examples [7.460768868547269]
Electronic countermeasures involving radar signals are an important aspect of modern warfare.
Traditional electronic countermeasure techniques typically add large-scale interference signals to ensure the interference effect, which can make attacks too obvious.
In recent years, AI-based attack methods have emerged that can effectively solve this problem, but the attack scenarios are currently limited to time-domain radar signal classification.
arXiv Detail & Related papers (2023-10-12T12:53:44Z) - Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial
Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z) - Adversarial Attacks and Defense Methods for Power Quality Recognition [16.27980559254687]
Power systems that use vulnerable machine learning methods face a serious threat from adversarial examples.
We first propose a signal-specific method and a universal signal-agnostic method to attack power systems using generated adversarial examples.
Black-box attacks based on transferable characteristics and the above two methods are also proposed and evaluated.
arXiv Detail & Related papers (2022-02-11T21:18:37Z) - No Need to Know Physics: Resilience of Process-based Model-free Anomaly
Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z) - Class-Conditional Defense GAN Against End-to-End Speech Attacks [82.21746840893658]
We propose a novel approach against end-to-end adversarial attacks developed to fool advanced speech-to-text systems such as DeepSpeech and Lingvo.
Unlike conventional defense approaches, the proposed approach does not directly employ low-level transformations such as autoencoding a given input signal.
Our defense-GAN considerably outperforms conventional defense algorithms in terms of word error rate and sentence level recognition accuracy.
arXiv Detail & Related papers (2020-10-22T00:02:02Z) - Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
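As noted in the FreqFed entry above, the snippet below is a minimal, hypothetical sketch of the general idea of screening federated model updates in the frequency domain: each flattened update is transformed with a DCT, its low-frequency coefficients are compared across clients, and outlying updates are dropped before averaging. The DCT truncation, the median-distance filter, and the function name are assumptions made for illustration; the abstract does not describe FreqFed's actual aggregation mechanism.

```python
# Hypothetical illustration of frequency-domain screening of federated updates.
# Not FreqFed's published algorithm: the DCT truncation and the simple
# median-distance outlier rule below are assumptions made for this sketch.
import numpy as np
from scipy.fft import dct


def screen_and_aggregate(client_updates, num_low_freq=1000, mad_thresh=2.0):
    """Average only the client updates whose low-frequency DCT signature
    is not an outlier.

    client_updates: list of equal-length 1-D numpy arrays (flattened updates).
    """
    # Low-frequency signature of each flattened update.
    sigs = np.stack([dct(u, norm="ortho")[:num_low_freq] for u in client_updates])

    # Distance of each signature from the element-wise median signature.
    median_sig = np.median(sigs, axis=0)
    dists = np.linalg.norm(sigs - median_sig, axis=1)

    # Robust outlier rule based on the median absolute deviation
    # (a stand-in for whatever clustering the paper actually uses).
    dev = np.abs(dists - np.median(dists))
    mad = np.median(dev) + 1e-12
    keep = dev / mad <= mad_thresh

    # Plain averaging (FedAvg-style) over the accepted updates.
    accepted = [u for u, k in zip(client_updates, keep) if k]
    return np.mean(np.stack(accepted), axis=0)
```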