GraCIAS: Grassmannian of Corrupted Images for Adversarial Security
- URL: http://arxiv.org/abs/2005.02936v2
- Date: Thu, 7 May 2020 15:11:24 GMT
- Title: GraCIAS: Grassmannian of Corrupted Images for Adversarial Security
- Authors: Ankita Shukla, Pavan Turaga and Saket Anand
- Abstract summary: In this work, we propose a defense strategy that applies random image corruptions to the input image alone.
We develop proximity relationships between the projection operator of a clean image and that of its adversarially perturbed version, via bounds relating geodesic distance on the Grassmannian to matrix Frobenius norms.
Unlike state-of-the-art approaches, even without any retraining, the proposed strategy achieves an absolute improvement of 4.5% in defense accuracy on ImageNet.
- Score: 4.259219671110274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Input transformation based defense strategies fall short in defending against strong adversarial attacks. Some successful defenses adopt approaches that either increase the randomness within the applied transformations or make the defense computationally intensive, making it substantially more challenging for the attacker. However, this limits the applicability of such defenses as a pre-processing step, similar to computationally heavy approaches that use retraining and network modifications to achieve robustness to perturbations. In this work, we propose a defense strategy that applies random image corruptions to the input image alone, constructs a self-correlation based subspace, and then applies a projection operation to suppress the adversarial perturbation. Due to its simplicity, the proposed defense is computationally efficient compared to the state of the art, yet it can withstand large perturbations. Further, we develop proximity relationships between the projection operator of a clean image and that of its adversarially perturbed version, via bounds relating geodesic distance on the Grassmannian to matrix Frobenius norms. We empirically show that our strategy is complementary to other weak defenses like JPEG compression and can be seamlessly integrated with them to create a stronger defense. We present extensive experiments on the ImageNet dataset across four models, namely InceptionV3, ResNet50, VGG16 and MobileNet, with the perturbation magnitude set to ε = 16. Unlike state-of-the-art approaches, even without any retraining, the proposed strategy achieves an absolute improvement of ~4.5% in defense accuracy on ImageNet.
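The abstract describes the defense pipeline only at a high level: corrupt the input randomly, build a self-correlation based subspace, and project. Below is a minimal sketch, in Python, of what such a corruption-then-project pre-processing step could look like. The specific corruption types (random Gaussian blur plus mild noise), the number of corrupted copies, the subspace rank, and the function name gracias_like_defense are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a GraCIAS-style pre-processing defense, based only on the
# abstract: random corruptions of the input image alone, a self-correlation
# based subspace, and a projection to suppress adversarial perturbations.
# Corruption choices, K and rank are assumed values, not from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def gracias_like_defense(image, num_corruptions=8, rank=4, seed=None):
    """Project `image` (H x W x C float array in [0, 1]) onto a subspace
    spanned by randomly corrupted copies of itself."""
    rng = np.random.default_rng(seed)
    h, w, c = image.shape
    flat = image.reshape(-1)                      # (H*W*C,)

    # 1. Apply random corruptions to the input image alone.
    corrupted = []
    for _ in range(num_corruptions):
        sigma = rng.uniform(0.5, 2.0)             # assumed corruption: random blur
        noisy = gaussian_filter(image, sigma=(sigma, sigma, 0))
        noisy = noisy + rng.normal(0.0, 0.02, size=image.shape)  # assumed mild noise
        corrupted.append(noisy.reshape(-1))
    X = np.stack(corrupted, axis=1)               # (H*W*C, K)

    # 2. Self-correlation based subspace: top-r left singular vectors of the
    #    corrupted-image matrix (eigenvectors of X X^T).
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    U_r = U[:, :rank]                             # orthonormal basis, a point on the Grassmannian

    # 3. Projection P = U_r U_r^T to suppress the adversarial perturbation.
    projected = U_r @ (U_r.T @ flat)
    return np.clip(projected.reshape(h, w, c), 0.0, 1.0)

# Usage: defended = gracias_like_defense(x_adv); feed `defended` to the classifier.

Because the subspace is rebuilt from fresh random corruptions on every call, the defense is randomized at inference time without any retraining of the classifier, consistent with the pre-processing role described in the abstract.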
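The exact bounds relating Grassmannian geodesic distance to Frobenius norms are not reproduced in this summary. As background, a standard relation of this kind (stated here only as an illustration, not as the paper's result) connects the Frobenius distance between orthogonal projectors and the principal angles between the corresponding subspaces:

% U_1, U_2 span two r-dimensional subspaces with projectors P_i = U_i U_i^T
% and principal angles \theta_1, \dots, \theta_r \in [0, \pi/2].
\|P_1 - P_2\|_F^2 = 2\sum_{i=1}^{r} \sin^2\theta_i,
\qquad
d_g(U_1, U_2) = \Big(\sum_{i=1}^{r} \theta_i^2\Big)^{1/2}.
% Using (2/\pi)\,\theta \le \sin\theta \le \theta on [0, \pi/2]:
\frac{1}{\sqrt{2}}\,\|P_1 - P_2\|_F \;\le\; d_g(U_1, U_2) \;\le\; \frac{\pi}{2\sqrt{2}}\,\|P_1 - P_2\|_F.

Bounds of this form make precise the intuition that if the projectors for a clean image and its adversarially perturbed version are close in Frobenius norm, the corresponding subspaces are also close on the Grassmannian.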
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Anomaly Unveiled: Securing Image Classification against Adversarial Patch Attacks [3.6275442368775512]
Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems.
In this paper, we investigate the behavior of adversarial patches as anomalies within the distribution of image information.
Our proposed defense mechanism utilizes a clustering-based technique called DBSCAN to isolate anomalous image segments.
arXiv Detail & Related papers (2024-02-09T08:52:47Z)
- IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks [16.577595936609665]
We introduce a novel approach to counter adversarial attacks, namely, image resampling.
Image resampling transforms a discrete image into a new one, simulating the process of scene recapturing or rerendering as specified by a geometrical transformation.
We show that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
arXiv Detail & Related papers (2023-10-18T11:19:32Z)
- Scale-free Photo-realistic Adversarial Pattern Attack [20.818415741759512]
Generative Adversarial Networks (GANs) can partially address this problem by synthesizing a more semantically meaningful texture pattern.
In this paper, we propose a scale-free generation-based attack algorithm that synthesizes semantically meaningful adversarial patterns globally to images with arbitrary scales.
arXiv Detail & Related papers (2022-08-12T11:25:39Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- SAD: Saliency-based Defenses Against Adversarial Examples [0.9786690381850356]
Adversarial examples drift model predictions away from the original intent of the network.
In this work, we propose a visual saliency based approach to cleaning data affected by an adversarial attack.
arXiv Detail & Related papers (2020-03-10T15:55:23Z)