Towards Frequency-Based Explanation for Robust CNN
- URL: http://arxiv.org/abs/2005.03141v1
- Date: Wed, 6 May 2020 21:22:35 GMT
- Title: Towards Frequency-Based Explanation for Robust CNN
- Authors: Zifan Wang, Yilin Yang, Ankit Shrivastava, Varun Rawal and Zihao Ding
- Abstract summary: We present an analysis of the connection between the distribution of frequency components in the input dataset and the reasoning process the model learns from the data.
We show that the model's vulnerability to tiny distortions is a result of its reliance on high-frequency features.
- Score: 6.164771707307929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current explanation techniques for building a transparent Convolutional
Neural Network (CNN) mainly focus on connecting human-understandable input
features with the model's prediction, overlooking an alternative representation
of the input: its decomposition into frequency components. In this work, we
present an analysis of the connection between the distribution of frequency
components in the input dataset and the reasoning process the model learns from
the data. We further provide a quantitative analysis of the contribution of
different frequency components to the model's prediction. We show that the
model's vulnerability to tiny distortions is a result of its reliance on
high-frequency features, the features targeted by adversarial (black- and
white-box) attackers, to make its prediction. We further show that if the model
develops a stronger association between the low-frequency components and the
true labels, it is more robust, which explains why adversarially trained models
are more robust against tiny distortions.
Related papers
- Towards Building More Robust Models with Frequency Bias [8.510441741759758]
This paper presents a plug-and-play module that adaptively reconfigures the low- and high-frequency components of intermediate feature representations.
Empirical studies show that our proposed module can be easily incorporated into any adversarial training framework.
arXiv Detail & Related papers (2023-07-19T05:46:56Z)
- Causal Analysis for Robust Interpretability of Neural Networks [0.2519906683279152]
We develop a robust intervention-based method to capture cause-effect mechanisms in pre-trained neural networks.
We apply our method to vision models trained on classification tasks.
arXiv Detail & Related papers (2023-05-15T18:37:24Z)
- How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations? [27.865987936475797]
Recent studies have shown that data augmentation can result in models over-relying on features in the low-frequency domain.
We propose a Jacobian frequency regularization that encourages models' Jacobians to have a larger ratio of low-frequency components.
Our approach elucidates a more direct connection between the frequency bias and robustness of deep learning models.
arXiv Detail & Related papers (2022-05-09T20:09:31Z)
- From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
- Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
arXiv Detail & Related papers (2020-10-15T16:57:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.