Test-time adversarial detection and robustness for localizing humans
using ultra wide band channel impulse responses
- URL: http://arxiv.org/abs/2211.05854v1
- Date: Thu, 10 Nov 2022 20:21:43 GMT
- Title: Test-time adversarial detection and robustness for localizing humans
using ultra wide band channel impulse responses
- Authors: Abhiram Kolli, Muhammad Jehanzeb Mirza, Horst Possegger, Horst Bischof
- Abstract summary: We propose a test-time adversarial example detector which detects adversarial inputs by quantifying the localized intermediate responses of a pre-trained neural network.
To make the network robust, we attenuate the non-relevant features by non-iterative input sample clipping.
- Score: 5.96002531660335
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Keyless entry systems in cars are adopting neural networks to localize their
operators. Test-time adversarial defences equip such systems with the ability to
defend against adversarial attacks without prior training on adversarial samples.
We propose a test-time adversarial example detector which detects adversarial
inputs by quantifying the localized intermediate responses of a pre-trained
neural network and the confidence scores of an auxiliary softmax layer.
Furthermore, to make the network robust, we attenuate the non-relevant features
by non-iterative input sample clipping. Using our approach, the mean performance
over 15 levels of adversarial perturbation increases by 55.33% for the fast
gradient sign method (FGSM) and by 6.3% for both the basic iterative method (BIM)
and projected gradient descent (PGD).
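To make the two ideas above concrete, here is a minimal PyTorch-style sketch of (a) non-iterative input sample clipping and (b) a detector that combines intermediate-layer statistics with a softmax confidence score. The monitored layer, the clean-activation statistics, the clipping bounds and both thresholds are illustrative assumptions, and the auxiliary softmax layer is stood in for by a softmax over the model's own logits; this is not the paper's exact procedure.

```python
import torch


def clip_input(x, lo, hi):
    # Non-iterative input sample clipping: bound each CIR sample to a
    # range estimated from clean training data (lo/hi are assumed given).
    return torch.clamp(x, min=lo, max=hi)


class TestTimeDetector:
    """Flags an input as adversarial when its intermediate response deviates
    from clean-data statistics or the softmax confidence is low."""

    def __init__(self, model, layer, mu, sigma, z_thresh=3.0, conf_thresh=0.5):
        self.model = model
        self.mu, self.sigma = mu, sigma              # clean activation statistics
        self.z_thresh, self.conf_thresh = z_thresh, conf_thresh
        layer.register_forward_hook(self._hook)      # capture the response

    def _hook(self, module, inp, out):
        self._act = out.detach()

    @torch.no_grad()
    def is_adversarial(self, x):
        logits = self.model(x)                       # also fills self._act
        conf = torch.softmax(logits, dim=1).max(dim=1).values
        z = ((self._act.flatten(1).mean(dim=1) - self.mu) / self.sigma).abs()
        return (z > self.z_thresh) | (conf < self.conf_thresh)
```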
Related papers
- Robust Image Classification in the Presence of Out-of-Distribution and Adversarial Samples Using Attractors in Neural Networks [0.0]
A fully connected neural network is trained to use training samples as its attractors, enhancing its robustness.
The results indicate that the network maintains its performance even when classifying adversarial examples.
Under severe adversarial attacks, its performance measures decrease only slightly, to 98.48% and 98.88%, indicating the robustness of the proposed method.
arXiv Detail & Related papers (2024-06-15T09:38:41Z) - Robust Localization of Key Fob Using Channel Impulse Response of Ultra
Wide Band Sensors for Keyless Entry Systems [12.313730356985019]
Using neural networks to localize a key fob within and around a car as a security feature for keyless entry is fast emerging.
The model's performance improved by 67% at certain ranges of adversarial magnitude for the fast gradient sign method and by 37% each for the basic iterative method and the projected gradient descent method.
arXiv Detail & Related papers (2024-01-16T22:35:14Z) - How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data.
We introduce a simple, generic and generalisable framework for which key behaviours observed in practical systems arise with high probability.
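The "seemingly inconsequential modification" is typically a one-step, gradient-sign perturbation. A minimal FGSM sketch in PyTorch, assuming a differentiable classifier and a cross-entropy loss (eps is an illustrative budget):

```python
import torch
import torch.nn.functional as F


def fgsm(model, x, y, eps):
    # One eps-sized step along the sign of the input gradient; for many
    # high-dimensional models a small eps is enough to change the output.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()
```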
arXiv Detail & Related papers (2023-09-07T12:02:00Z) - Distributed Adversarial Training to Robustify Deep Neural Networks at
Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, adversarial training (AT), which trains the model on adversarially perturbed inputs, has been shown to be effective.
We propose a large-batch adversarial training framework implemented over multiple machines.
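A single adversarial-training step pairs an inner maximization (crafting a perturbation) with an outer minimization (updating the weights on the perturbed batch). The sketch below uses a small PGD inner loop and omits the paper's multi-machine, large-batch machinery; step sizes and iteration counts are illustrative.

```python
import torch
import torch.nn.functional as F


def adversarial_training_step(model, optimizer, x, y,
                              eps=0.1, alpha=0.02, pgd_steps=7):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(pgd_steps):                    # inner max: craft perturbation
        F.cross_entropy(model(x + delta), y).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    optimizer.zero_grad()                         # outer min: train on x + delta
    F.cross_entropy(model(x + delta.detach()), y).backward()
    optimizer.step()
```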
arXiv Detail & Related papers (2022-06-13T15:39:43Z) - Robustness against Adversarial Attacks in Neural Networks using
Incremental Dissipativity [3.8673567847548114]
Adversarial examples can easily degrade the classification performance in neural networks.
This work proposes an incremental dissipativity-based robustness certificate for neural networks.
arXiv Detail & Related papers (2021-11-25T04:42:57Z) - Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples, motivated by the observation that random components can improve the smoothness of predictors.
Building on this, we propose a novel Bayesian adversarial example detector, BATer for short, to improve the performance of adversarial example detection.
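BATer's exact detector is not reproduced here; as a rough stand-in for the idea that random components expose adversarial inputs, the sketch below scores each sample by the spread of its predictions under stochastic (dropout-active) forward passes. A rejection threshold would be calibrated on held-out clean data, and the model is assumed to contain dropout layers.

```python
import torch


@torch.no_grad()
def prediction_spread(model, x, passes=20):
    # Keep dropout stochastic; adversarial inputs tend to produce a larger
    # spread of softmax outputs across passes than clean inputs do.
    model.train()
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
    model.eval()
    return probs.std(dim=0).mean(dim=1)   # one spread score per sample
```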
arXiv Detail & Related papers (2021-05-18T15:51:24Z) - Combating Adversaries with Anti-Adversaries [118.70141983415445]
Our anti-adversary layer generates an input perturbation in the opposite direction of the adversarial one.
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost on clean accuracy.
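A sketch of that idea: take the model's own prediction as a pseudo-label and move the input a few gradient-sign steps towards it, i.e. in the direction opposite to an attack. The budget, step size and step count here are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F


def anti_adversary(model, x, eps=0.05, alpha=0.03, steps=2):
    y_hat = model(x).argmax(dim=1).detach()       # pseudo-label from the model
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y_hat).backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()    # descend the loss, not ascend
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return x + delta.detach()
```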
arXiv Detail & Related papers (2021-03-26T09:36:59Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
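One simple coverage-style monitor, given as an illustration rather than the paper's actual criteria: record per-neuron activation ranges on clean data and flag test inputs whose activations leave those ranges.

```python
import torch


class RangeMonitor:
    def __init__(self, layer):
        self.lo = self.hi = None
        layer.register_forward_hook(
            lambda m, i, o: setattr(self, "_act", o.detach().flatten(1)))

    @torch.no_grad()
    def calibrate(self, model, clean_loader):
        for x, _ in clean_loader:                 # learn per-neuron ranges
            model(x)
            lo, hi = self._act.min(dim=0).values, self._act.max(dim=0).values
            self.lo = lo if self.lo is None else torch.minimum(self.lo, lo)
            self.hi = hi if self.hi is None else torch.maximum(self.hi, hi)

    @torch.no_grad()
    def flags(self, model, x):
        model(x)                                  # fills self._act via the hook
        return ((self._act < self.lo) | (self._act > self.hi)).any(dim=1)
```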
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Class-Conditional Defense GAN Against End-to-End Speech Attacks [82.21746840893658]
We propose a novel approach against end-to-end adversarial attacks developed to fool advanced speech-to-text systems such as DeepSpeech and Lingvo.
Unlike conventional defense approaches, the proposed approach does not directly employ low-level transformations such as autoencoding a given input signal.
Our defense-GAN considerably outperforms conventional defense algorithms in terms of word error rate and sentence level recognition accuracy.
arXiv Detail & Related papers (2020-10-22T00:02:02Z) - FADER: Fast Adversarial Example Rejection [19.305796826768425]
Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations.
We introduce FADER, a novel technique for speeding up detection-based methods.
Our experiments show up to a 73x reduction in prototypes compared to the analyzed detectors on the MNIST dataset and up to a 50x reduction on CIFAR-10.
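The speed-up comes from shrinking the prototype set that a detection-based defense must compare each input against. A rough sketch of one way to do that, assuming deep features have already been extracted; k_per_class is an arbitrary illustrative choice, not FADER's tuned configuration.

```python
import numpy as np
from sklearn.cluster import KMeans


def reduce_prototypes(feats, labels, k_per_class=10):
    # Replace each class's feature vectors with k-means centroids, so a
    # distance-based rejection rule scans far fewer prototypes at test time.
    protos, proto_labels = [], []
    for c in np.unique(labels):
        km = KMeans(n_clusters=k_per_class, n_init=10).fit(feats[labels == c])
        protos.append(km.cluster_centers_)
        proto_labels.append(np.full(k_per_class, c))
    return np.vstack(protos), np.concatenate(proto_labels)
```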
arXiv Detail & Related papers (2020-10-18T22:00:11Z) - Non-Intrusive Detection of Adversarial Deep Learning Attacks via
Observer Networks [5.4572790062292125]
Recent studies have shown that deep learning models are vulnerable to crafted adversarial inputs.
We propose a novel method to detect adversarial inputs by augmenting the main classification network with multiple binary detectors.
We achieve a 99.5% detection accuracy on the MNIST dataset and 97.5% on the CIFAR-10 dataset.
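A minimal sketch of one such binary detector, attached to a single intermediate layer of a frozen classifier; in practice several detectors at different depths would vote. The layer choice, feature dimension and detector head are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class ObserverDetector(nn.Module):
    def __init__(self, classifier, layer, feat_dim):
        super().__init__()
        self.classifier = classifier.eval()       # main network stays frozen
        self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 1))
        layer.register_forward_hook(
            lambda m, i, o: setattr(self, "_feat", o.flatten(1)))

    def forward(self, x):
        with torch.no_grad():
            self.classifier(x)                    # fills self._feat via the hook
        return torch.sigmoid(self.head(self._feat))   # probability of adversarial
```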
arXiv Detail & Related papers (2020-02-22T21:13:00Z)