Essential Features: Reducing the Attack Surface of Adversarial
Perturbations with Robust Content-Aware Image Preprocessing
- URL: http://arxiv.org/abs/2012.01699v1
- Date: Thu, 3 Dec 2020 04:40:51 GMT
- Title: Essential Features: Reducing the Attack Surface of Adversarial
Perturbations with Robust Content-Aware Image Preprocessing
- Authors: Ryan Feng, Wu-chi Feng, Atul Prakash
- Abstract summary: Adversaries can fool machine learning models into making incorrect predictions by adding perturbations to an image.
One approach to defending against such perturbations is to apply image preprocessing functions to remove the effects of the perturbation.
We propose a novel image preprocessing technique called Essential Features that transforms the image into a robust feature space.
- Score: 5.831840281853604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversaries are capable of adding perturbations to an image to fool machine
learning models into making incorrect predictions. One approach to defending against
such perturbations is to apply image preprocessing functions to remove the
effects of the perturbation. Existing approaches tend to be designed
orthogonally to the content of the image and can be beaten by adaptive attacks.
We propose a novel image preprocessing technique called Essential Features that
transforms the image into a robust feature space that preserves the main
content of the image while significantly reducing the effects of the
perturbations. Specifically, an adaptive blurring strategy that preserves the
main edge features of the original object along with a k-means color reduction
approach is employed to simplify the image to its k most representative colors.
This approach significantly limits the attack surface for adversaries by
limiting the ability to adjust colors while preserving pertinent features of
the original image. We additionally design several adaptive attacks and find
that our approach remains more robust than previous baselines. We achieve 64%
robustness on CIFAR-10 and 58.13% on RESISC45 against adaptive white-box and
black-box attacks, over 10% higher than state-of-the-art adversarial training
techniques. The results suggest that strategies
that retain essential features in images by adaptive processing of the content
hold promise as a complement to adversarial training for boosting robustness
against adversarial inputs.
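To make the described pipeline concrete, here is a minimal sketch of how an edge-preserving adaptive blur followed by k-means color reduction could look. The function name, Canny thresholds, blur sigmas, and default cluster count are illustrative assumptions, not the authors' released implementation.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def essential_features_sketch(image_bgr, k=8, sigma_near_edge=1.0, sigma_elsewhere=3.0):
    # Locate the main edge structure of the object (Canny thresholds assumed).
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edge_mask = cv2.dilate(cv2.Canny(gray, 100, 200), np.ones((3, 3), np.uint8)) > 0

    # Blur lightly near edges to keep them sharp, and heavily elsewhere
    # to wash out fine-grained adversarial perturbations.
    light = cv2.GaussianBlur(image_bgr, (0, 0), sigma_near_edge)
    heavy = cv2.GaussianBlur(image_bgr, (0, 0), sigma_elsewhere)
    blurred = np.where(edge_mask[..., None], light, heavy)

    # Quantize to the k most representative colors with k-means, shrinking
    # the color space an adversary can manipulate.
    pixels = blurred.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    quantized = km.cluster_centers_[km.labels_].reshape(image_bgr.shape)
    return np.clip(quantized, 0, 255).astype(np.uint8)

Restricting the output to k representative colors is what limits the attack surface: any adversarial change must survive both the content-aware blur and the snap to the nearest cluster color before it reaches the classifier.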
Related papers
- Robust Network Learning via Inverse Scale Variational Sparsification [55.64935887249435]
We introduce an inverse scale variational sparsification framework within a time-continuous inverse scale space formulation.
Unlike frequency-based methods, our approach removes noise by smoothing small-scale features.
We show the efficacy of our approach through enhanced robustness against various noise types.
arXiv Detail & Related papers (2024-09-27T03:17:35Z)
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z)
- Towards Robust Image Stitching: An Adaptive Resistance Learning against Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions that go unnoticed by the human visual system can disrupt the correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z)
- IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks [16.577595936609665]
We introduce a novel approach to counter adversarial attacks, namely, image resampling.
Image resampling transforms a discrete image into a new one, simulating the process of scene recapturing or rerendering as specified by a geometrical transformation.
We show that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
arXiv Detail & Related papers (2023-10-18T11:19:32Z)
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with a gain of 15.3% mIoU compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
- Reverse Engineering of Imperceptible Adversarial Image Perturbations [43.87341855153572]
We formalize the RED problem and identify a set of principles crucial to the RED approach design.
We propose a new Class-Discriminative Denoising based RED framework, termed CDD-RED.
arXiv Detail & Related papers (2022-03-26T19:52:40Z)
- Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for adversarial examples.
We use the exact gradient direction with a scaling factor for generating adversarial perturbations.
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z)
- Self-Supervised Iterative Contextual Smoothing for Efficient Adversarial Defense against Gray- and Black-Box Attack [24.66829920826166]
We propose a novel input-transformation-based adversarial defense method against gray- and black-box attacks.
Our defense is free of computationally expensive adversarial training, yet it can approach its robust accuracy via input transformation.
arXiv Detail & Related papers (2021-06-22T09:51:51Z)
- Context-Aware Image Denoising with Auto-Threshold Canny Edge Detection to Suppress Adversarial Perturbation [0.8021197489470756]
This paper presents a novel context-aware image denoising algorithm.
It combines adaptive image smoothing and color reduction techniques to remove perturbations from adversarial images.
Our results show that the proposed approach reduces adversarial perturbation and increases the robustness of deep convolutional neural network models.
arXiv Detail & Related papers (2021-01-14T19:15:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.