Robustness of Deep Neural Networks for Micro-Doppler Radar Classification
- URL: http://arxiv.org/abs/2402.13651v2
- Date: Thu, 22 Feb 2024 07:22:51 GMT
- Title: Robustness of Deep Neural Networks for Micro-Doppler Radar Classification
- Authors: Mikolaj Czerkawski, Carmine Clemente, Craig Michie, and Christos Tachtatzis
- Abstract summary: Two deep convolutional architectures, trained and tested on the same data, are evaluated.
Both models are susceptible to adversarial examples.
Models operating on a cadence-velocity diagram representation rather than Doppler-time are demonstrated to be naturally more immune to adversarial examples.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the great capabilities of deep classifiers for radar data processing
come the risks of learning dataset-specific features that do not generalize
well. In this work, the robustness of two deep convolutional architectures,
trained and tested on the same data, is evaluated. When standard training
practice is followed, both classifiers exhibit sensitivity to subtle temporal
shifts of the input representation, an augmentation that carries minimal
semantic content. Furthermore, the models are extremely susceptible to
adversarial examples. Sensitivity to both small temporal shifts and to
adversarial examples is a result of the model overfitting on features that do
not generalise well. As a remedy, it is shown that training on adversarial
examples and temporally augmented samples can reduce this effect and lead to
models that generalise better. Finally, models operating on the
cadence-velocity diagram representation rather than Doppler-time are
demonstrated to be naturally more immune to adversarial examples.
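The two representation-level ideas above, temporal shifting of the Doppler-time input and the cadence-velocity diagram (CVD), can be illustrated with a short sketch. This is a minimal toy example and not the authors' pipeline: the synthetic spectrogram, the array shapes, and the helper names `temporal_shift` and `cadence_velocity_diagram` are assumptions made here for illustration.

```python
import numpy as np


def temporal_shift(spectrogram: np.ndarray, shift_bins: int) -> np.ndarray:
    """Circularly shift a Doppler-time spectrogram along its time axis.

    For a periodic micro-Doppler signature the shift carries minimal
    semantic content, so a robust classifier should be near-invariant to it.
    """
    return np.roll(spectrogram, shift_bins, axis=1)  # axis 1 = time


def cadence_velocity_diagram(spectrogram: np.ndarray) -> np.ndarray:
    """Map a Doppler-time spectrogram to a cadence-velocity diagram (CVD).

    The CVD is the magnitude of an FFT taken along the time axis, so each
    Doppler (velocity) bin is described by the cadence frequencies of its
    modulation. A circular temporal shift only changes the phase of that
    FFT, so the CVD magnitude is shift-invariant, which is one intuition
    for the extra robustness reported in the abstract.
    """
    return np.abs(np.fft.fft(spectrogram, axis=1))


if __name__ == "__main__":
    # Synthetic stand-in for a micro-Doppler spectrogram (an assumption of
    # this sketch): a sinusoidally modulated Doppler line plus noise.
    rng = np.random.default_rng(0)
    n_doppler, n_time = 128, 256
    t = np.arange(n_time)
    track = (n_doppler // 2 + 30 * np.sin(2 * np.pi * t / 64)).astype(int)
    spec = rng.normal(0.0, 0.05, size=(n_doppler, n_time))
    spec[track, t] += 1.0

    shifted = temporal_shift(spec, shift_bins=16)
    cvd = cadence_velocity_diagram(spec)
    cvd_shifted = cadence_velocity_diagram(shifted)

    # The Doppler-time inputs differ, but their CVDs coincide.
    print("max |spec - shifted| :", np.max(np.abs(spec - shifted)))
    print("max |CVD - CVD_shift|:", np.max(np.abs(cvd - cvd_shifted)))
```

Run as-is, the script prints a large difference between the shifted and unshifted Doppler-time inputs but a near-zero difference between their CVDs, matching the intuition that the magnitude of a time-axis FFT discards the temporal offset a classifier could otherwise latch onto.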
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes model learning by paying closer attention to those training samples with a high difference in explanations.
arXiv Detail & Related papers (2024-08-08T17:20:08Z) - The Surprising Harmfulness of Benign Overfitting for Adversarial
Robustness [13.120373493503772]
We prove a surprising result: even if the ground truth itself is robust to adversarial examples and the benignly overfitted model is benign in terms of the "standard" out-of-sample risk objective, benign overfitting can still be harmful to adversarial robustness.
Our finding provides theoretical insight into the puzzling phenomenon observed in practice, where the true target function (e.g., a human) is robust against adversarial attack, while benignly overfitted neural networks yield models that are not robust.
arXiv Detail & Related papers (2024-01-19T15:40:46Z) - Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [53.27596811146316]
Diffusion models operate over a sequence of timesteps rather than the instantaneous input-output relationships of previous contexts.
We present Diffusion-TracIn, which incorporates these temporal dynamics, and observe that samples' loss gradient norms are highly dependent on the timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
arXiv Detail & Related papers (2024-01-17T07:58:18Z) - A Frequency Perspective of Adversarial Robustness [72.48178241090149]
We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples are confined neither to high-frequency nor to low-frequency components, but are simply dataset dependent; a minimal sketch of measuring a perturbation's frequency content is given after this list.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
arXiv Detail & Related papers (2021-10-26T19:12:34Z) - Classification and Adversarial examples in an Overparameterized Linear
Model: A Signal Processing Perspective [10.515544361834241]
State-of-the-art deep learning classifiers are highly susceptible to infinitesimal adversarial perturbations.
We find that the learned model is susceptible to adversaries in an intermediate regime where classification generalizes but regression does not.
Despite the adversarial susceptibility, we find that classification with these features can be easier than the more commonly studied "independent feature" models.
arXiv Detail & Related papers (2021-09-27T17:35:42Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Hard-label Manifolds: Unexpected Advantages of Query Efficiency for
Finding On-manifold Adversarial Examples [67.23103682776049]
Recent zeroth order hard-label attacks on image classification models have shown comparable performance to their first-order, gradient-level alternatives.
It was recently shown in the gradient-level setting that regular adversarial examples leave the data manifold, while their on-manifold counterparts are in fact generalization errors.
We propose an information-theoretic argument based on a noisy manifold distance oracle, which leaks manifold information through the adversary's gradient estimate.
arXiv Detail & Related papers (2021-03-04T20:53:06Z) - Robust and On-the-fly Dataset Denoising for Image Classification [72.10311040730815]
On-the-fly Data Denoising (ODD) is robust to mislabeled examples, while introducing almost zero computational overhead compared to standard training.
ODD is able to achieve state-of-the-art results on a wide range of datasets including real-world ones such as WebVision and Clothing1M.
arXiv Detail & Related papers (2020-03-24T03:59:26Z) - A Bayes-Optimal View on Adversarial Examples [9.51828574518325]
We argue for examining adversarial examples from the perspective of Bayes-optimal classification.
Our results show that even when these "gold standard" optimal classifiers are robust, CNNs trained on the same datasets consistently learn a vulnerable classifier.
arXiv Detail & Related papers (2020-02-20T16:43:47Z) - On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples [0.0]
Adversarial examples in the wild may inadvertently prove deleterious for accurate predictive modeling.
We show that representational similarity and performance vary according to the frequency of adversarial examples in the input space.
arXiv Detail & Related papers (2020-02-17T07:49:20Z)
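The frequency-perspective entry in the list above concerns whether adversarial perturbations live in high- or low-frequency bands. Below is a minimal sketch of one way such a measurement could be made, not the paper's method: the radial band split, the synthetic perturbations, and the function name `band_energy_fraction` are assumptions made for illustration.

```python
import numpy as np


def band_energy_fraction(perturbation: np.ndarray, n_bands: int = 4) -> np.ndarray:
    """Split a 2-D perturbation's spectral energy into radial frequency bands.

    Returns the fraction of total energy in each band, from the lowest
    spatial frequencies (band 0) to the highest (band n_bands - 1).
    """
    h, w = perturbation.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(perturbation))) ** 2

    # Radial distance of every frequency bin from the spectrum centre,
    # normalised to [0, 1].
    fy = np.arange(h) - h // 2
    fx = np.arange(w) - w // 2
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    radius /= radius.max()

    edges = np.linspace(0.0, 1.0, n_bands + 1)
    fractions = np.empty(n_bands)
    for i in range(n_bands):
        upper = edges[i + 1] if i < n_bands - 1 else 1.0 + 1e-9  # include outer edge
        mask = (radius >= edges[i]) & (radius < upper)
        fractions[i] = power[mask].sum()
    return fractions / fractions.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    size = 64
    yy, xx = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")

    # Two toy "perturbations" (assumptions for this sketch): white noise,
    # whose energy is spread over all bins so the outer, larger bands
    # dominate, and a smooth low-frequency pattern concentrated in band 0.
    noise = rng.normal(size=(size, size))
    smooth = np.sin(2 * np.pi * yy / size) + np.cos(2 * np.pi * xx / 32)

    print("white noise, low -> high bands:", np.round(band_energy_fraction(noise), 3))
    print("smooth pattern, low -> high   :", np.round(band_energy_fraction(smooth), 3))
```

Printed band fractions like these are one simple way to check where a given perturbation's energy sits, which is the kind of measurement the dataset-dependence claim above rests on.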