Training on Foveated Images Improves Robustness to Adversarial Attacks
- URL: http://arxiv.org/abs/2308.00854v1
- Date: Tue, 1 Aug 2023 21:40:30 GMT
- Title: Training on Foveated Images Improves Robustness to Adversarial Attacks
- Authors: Muhammad A. Shah and Bhiksha Raj
- Abstract summary: Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks.
RBlur is an image transform that simulates the loss in fidelity of peripheral vision by blurring the image and reducing its color saturation.
DNNs trained on images transformed by RBlur are substantially more robust to adversarial attacks, as well as other, non-adversarial, corruptions, achieving up to 25% higher accuracy on perturbed data.
- Score: 26.472800216546233
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have been shown to be vulnerable to adversarial
attacks -- subtle, perceptually indistinguishable perturbations of inputs that
change the response of the model. In the context of vision, we hypothesize that
an important contributor to the robustness of human visual perception is
constant exposure to low-fidelity visual stimuli in our peripheral vision. To
investigate this hypothesis, we develop RBlur, an image transform that
simulates the loss in fidelity of peripheral vision by blurring the image and
reducing its color saturation based on the distance from a given fixation
point. We show that compared to DNNs trained on the original images, DNNs
trained on images transformed by RBlur are substantially more robust to
adversarial attacks, as well as other, non-adversarial, corruptions, achieving
up to 25% higher accuracy on perturbed data.
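The transform itself is not reproduced on this page, but the idea admits a compact illustration. The Python sketch below applies a foveation-like transform under simplifying assumptions: a single Gaussian-blurred, desaturated copy of the image is blended with the original according to each pixel's distance from the fixation point. The function name `foveate` and all parameters are illustrative; RBlur's exact blur and desaturation schedules are defined in the paper and differ from this sketch.

```python
# Minimal sketch of a foveated transform in the spirit of RBlur:
# blur strength and desaturation grow with distance from a fixation point.
# Names and parameters are illustrative assumptions, not the authors' code.
import numpy as np
from PIL import Image, ImageFilter

def foveate(img: Image.Image, fixation=(0.5, 0.5),
            max_blur_radius=8.0, max_desaturation=0.8) -> Image.Image:
    """Blur and desaturate `img` progressively with distance from `fixation`
    (given as fractions of image width and height)."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32)
    h, w, _ = rgb.shape

    # Normalized distance of every pixel from the fixation point.
    ys, xs = np.mgrid[0:h, 0:w]
    fx, fy = fixation[0] * w, fixation[1] * h
    dist = np.sqrt((xs - fx) ** 2 + (ys - fy) ** 2)
    weight = (dist / (dist.max() + 1e-8))[..., None]  # 0 at fixation, 1 far away

    # A heavily blurred copy stands in for low-fidelity peripheral vision.
    blurred = np.asarray(
        img.convert("RGB").filter(ImageFilter.GaussianBlur(max_blur_radius)),
        dtype=np.float32)

    # Desaturate the blurred copy toward its grayscale luminance.
    gray = blurred @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    desat = (1 - max_desaturation) * blurred + max_desaturation * gray[..., None]

    # Blend: sharp and colorful near the fixation, blurry and washed out far away.
    out = (1 - weight) * rgb + weight * desat
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))
```

In the paper's setup, DNNs are trained on images transformed in this manner, with the transform keyed to a given fixation point, rather than on the original images.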
Related papers
- Leveraging the Human Ventral Visual Stream to Improve Neural Network Robustness [8.419105840498917]
Human object recognition exhibits remarkable resilience in cluttered and dynamic visual environments.
Despite their unparalleled performance across numerous visual tasks, Deep Neural Networks (DNNs) remain far less robust than humans.
Here we show that DNNs, when guided by neural representations from a hierarchical sequence of regions in the human ventral visual stream, display increasing robustness to adversarial attacks.
arXiv Detail & Related papers (2024-05-04T04:33:20Z) - Towards Robust Image Stitching: An Adaptive Resistance Learning against
Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions that go unnoticed by the human visual system can disrupt the correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z) - Dual Adversarial Resilience for Collaborating Robust Underwater Image
Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method produces visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z) - On Adversarial Robustness of Deep Image Deblurring [15.66170693813815]
This paper introduces adversarial attacks against deep learning-based image deblurring methods.
We demonstrate that imperceptible distortion can significantly degrade the performance of state-of-the-art deblurring networks.
arXiv Detail & Related papers (2022-10-05T18:31:33Z) - Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises [7.689542442882423]
We designed a dual-stream vision model inspired by the human brain.
This model features retina-like input layers and includes two streams: one determines the next point of focus (the fixation), while the other interprets the visuals surrounding that fixation.
We evaluated this model against various benchmarks in terms of object recognition, gaze behavior and adversarial robustness.
arXiv Detail & Related papers (2022-06-15T03:44:42Z) - On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as Inception Score (IS) and Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
arXiv Detail & Related papers (2022-01-31T06:43:09Z) - Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
arXiv Detail & Related papers (2021-01-23T07:55:02Z) - Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z) - Towards Achieving Adversarial Robustness by Enforcing Feature
Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
arXiv Detail & Related papers (2020-04-01T09:31:10Z) - Adversarial Attacks on Convolutional Neural Networks in Facial
Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life represent a serious threat in autonomous vehicles, malware filters, or biometric authentication systems.
We apply Fast Gradient Sign Method to introduce perturbations to a facial image dataset and then test the output on a different classifier.
We craft a variety of different black-box attack algorithms on a facial image dataset assuming minimal adversarial knowledge.
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.