Adversarial Image Color Transformations in Explicit Color Filter Space
- URL: http://arxiv.org/abs/2011.06690v3
- Date: Fri, 16 Jun 2023 10:19:13 GMT
- Title: Adversarial Image Color Transformations in Explicit Color Filter Space
- Authors: Zhengyu Zhao and Zhuoran Liu and Martha Larson
- Abstract summary: Adversarial Color Filter (AdvCF) is a novel color transformation attack that is optimized with gradient information in the parameter space of a simple color filter.
We show that AdvCF is superior to the state-of-the-art human-interpretable color transformation attack in both image acceptability and efficiency.
- Score: 5.682107851677069
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep Neural Networks have been shown to be vulnerable to adversarial images.
Conventional attacks strive for indistinguishable adversarial images with
strictly restricted perturbations. Recently, researchers have moved to explore
distinguishable yet non-suspicious adversarial images and demonstrated that
color transformation attacks are effective. In this work, we propose
Adversarial Color Filter (AdvCF), a novel color transformation attack that is
optimized with gradient information in the parameter space of a simple color
filter. In particular, our color filter space is explicitly specified so that
we are able to provide a systematic analysis of model robustness against
adversarial color transformations, from both the attack and defense
perspectives. In contrast, existing color transformation attacks do not offer
the opportunity for systematic analysis due to the lack of such an explicit
space. We further demonstrate the effectiveness of our AdvCF in fooling image
classifiers and also compare it with other color transformation attacks
regarding their robustness to defenses and image acceptability through an
extensive user study. We also highlight the human-interpretability of AdvCF and
show its superiority over the state-of-the-art human-interpretable color
transformation attack in both image acceptability and efficiency. Additional
results provide interesting new insights into model robustness against AdvCF on
three other visual tasks.
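To make the recipe concrete, the sketch below optimizes the parameters of a simple differentiable color filter, rather than individual pixels, by gradient ascent on the classification loss. The per-channel quadratic tone curve, optimizer, and step count are illustrative assumptions for exposition only; they are not the filter design used in the paper.

```python
# Hedged sketch of a gradient-based color-filter attack in the spirit of AdvCF.
# Assumption (not from the paper): the filter is a per-channel quadratic tone
# curve y = x + a * x * (1 - x), parameterized by one scalar per channel.
import torch
import torch.nn.functional as F

def apply_color_filter(x, theta):
    """Apply a smooth per-channel tone curve to images x in [0, 1] of shape (B, 3, H, W)."""
    a = theta.view(1, 3, 1, 1)
    return torch.clamp(x + a * x * (1.0 - x), 0.0, 1.0)

def color_filter_attack(model, x, y, steps=50, lr=0.05):
    """Optimize the three filter parameters (not pixels) to induce misclassification."""
    theta = torch.zeros(3, device=x.device, requires_grad=True)  # zeros = identity filter
    optimizer = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        logits = model(apply_color_filter(x, theta))
        loss = -F.cross_entropy(logits, y)  # maximize the classification loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return apply_color_filter(x, theta).detach(), theta.detach()
```

Because the search space is a handful of filter parameters rather than the pixel grid, the result is a global, smoothly varying color shift rather than local noise, which is the sense in which the filter space is explicit and open to systematic analysis.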
Related papers
- Transform-Dependent Adversarial Attacks [15.374381635334897]
We introduce transform-dependent adversarial attacks on deep networks.
Our perturbations exhibit metamorphic properties, enabling diverse adversarial effects as a function of transformation parameters.
We show that transform-dependent perturbations achieve high targeted attack success rates, outperforming state-of-the-art transfer attacks by 17-31% in black-box scenarios.
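A minimal sketch of that idea, under illustrative assumptions that are not taken from the paper (resizing as the transformation, one hypothetical target label per resize factor, a standard L-infinity budget):

```python
# Hedged sketch: a single perturbation whose adversarial effect depends on how the
# image is transformed, here steered toward a different target class per resize factor.
import torch
import torch.nn.functional as F

def transform_dependent_attack(model, x, scale_targets, eps=8 / 255, alpha=1 / 255, steps=40):
    """x: (1, 3, H, W) image in [0, 1]; scale_targets: {resize factor: target label tensor}."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for scale, target in scale_targets.items():
            view = F.interpolate(x + delta, scale_factor=scale,
                                 mode="bilinear", align_corners=False)
            loss = loss + F.cross_entropy(model(view), target)  # targeted loss per transform
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend toward all per-transform targets
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0, 1) - x)
        delta.grad.zero_()
    return (x + delta).detach()

# Usage with hypothetical ImageNet labels:
# x_adv = transform_dependent_attack(model, x, {0.5: torch.tensor([207]), 2.0: torch.tensor([980])})
```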
arXiv Detail & Related papers (2024-06-12T17:31:36Z)
- TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation [62.954089681629206]
We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation.
The proposed adversarial attack method can achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-12-03T00:48:33Z)
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle when there is a data imbalance across color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks [16.577595936609665]
We introduce a novel approach to counter adversarial attacks, namely, image resampling.
Image resampling transforms a discrete image into a new one, simulating the process of scene recapturing or rerendering as specified by a geometrical transformation.
We show that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
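As a rough illustration of that resampling idea (not of IRAD's implicit-representation machinery), the snippet below re-renders an image by bilinear sampling on a slightly jittered coordinate grid before classification; the jitter magnitude is an arbitrary choice:

```python
# Hedged sketch of resampling as a preprocessing defense: re-render the image on a
# slightly perturbed sampling grid, approximating a small geometric re-rendering.
import torch
import torch.nn.functional as F

def resample_defense(x, max_shift=0.01):
    """x: (B, C, H, W) images; max_shift is per-pixel jitter in normalized [-1, 1] coordinates."""
    b, _, h, w = x.shape
    ys = torch.linspace(-1.0, 1.0, h, device=x.device)
    xs = torch.linspace(-1.0, 1.0, w, device=x.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack((grid_x, grid_y), dim=-1).expand(b, h, w, 2)  # (B, H, W, 2)
    jitter = (torch.rand_like(grid) * 2.0 - 1.0) * max_shift
    return F.grid_sample(x, grid + jitter, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Usage: logits = model(resample_defense(possibly_adversarial_images))
```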
arXiv Detail & Related papers (2023-10-18T11:19:32Z)
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
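A generic, hedged sketch of that kind of manifold-constrained optimization, assuming a hypothetical pretrained generator G whose latent space stands in for the natural-image manifold (the summary does not specify the paper's actual manifold model):

```python
# Hedged sketch: optimize a latent code instead of pixels, so every candidate stays
# on the generator's image manifold while being pushed away from the true label.
import torch
import torch.nn.functional as F

def latent_space_attack(model, G, z_init, y_true, steps=100, lr=0.01):
    """z_init: latent code such that G(z_init) approximates the clean image (assumption)."""
    z = z_init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_adv = G(z)                                    # decoded image on the manifold
        loss = -F.cross_entropy(model(x_adv), y_true)   # untargeted adversarial objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return G(z).detach()
```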
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
- Cross-Modal Transferable Adversarial Attacks from Images to Videos [82.0745476838865]
Recent studies have shown that adversarial examples hand-crafted on one white-box model can be used to attack other black-box models.
We propose a simple yet effective cross-modal attack method, named the Image To Video (I2V) attack.
I2V generates adversarial frames by minimizing the cosine similarity between the features that pre-trained image models extract from adversarial and benign examples.
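A simplified sketch of that objective, assuming feature_extractor is a pre-trained image model that maps frames to pooled feature vectors; the L-infinity budget and step sizes are illustrative rather than the paper's settings:

```python
# Hedged sketch of the I2V objective: push each frame's image-model features away
# from the clean frame's features (minimize cosine similarity) under a small budget.
import torch
import torch.nn.functional as F

def i2v_like_attack(feature_extractor, frames, eps=8 / 255, alpha=1 / 255, steps=10):
    """frames: (T, 3, H, W) clean video frames in [0, 1]."""
    with torch.no_grad():
        clean_feats = feature_extractor(frames)          # (T, D) reference features
    delta = torch.zeros_like(frames, requires_grad=True)
    for _ in range(steps):
        adv_feats = feature_extractor(frames + delta)
        loss = F.cosine_similarity(adv_feats, clean_feats, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()           # gradient descent on the similarity
            delta.clamp_(-eps, eps)
            delta.copy_((frames + delta).clamp(0, 1) - frames)
        delta.grad.zero_()
    return (frames + delta).detach()
```

The resulting frames can then be fed to a black-box video model, which is the cross-modal transfer setting described above.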
arXiv Detail & Related papers (2021-12-10T08:19:03Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
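For reference, the snippet below shows classic Floyd-Steinberg error diffusion, the textbook instance of error-diffusion halftoning, applied per channel as an input transformation; the adversarial-training part of the defense is not shown here:

```python
# Floyd-Steinberg error diffusion: binarize each pixel and push the quantization
# error onto the yet-unvisited neighbors with the standard 7/16, 3/16, 5/16, 1/16 weights.
import numpy as np

def floyd_steinberg_halftone(channel):
    """channel: (H, W) array in [0, 1]; returns a binary (0/1) halftoned channel."""
    img = channel.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# Usage for an (H, W, 3) RGB array in [0, 1]:
# halftoned = np.stack([floyd_steinberg_halftone(rgb[..., c]) for c in range(3)], axis=-1)
```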
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
- Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification [27.23874129994179]
White-box adversarial attacks can fool neural networks with small perturbations, especially for large images.
Keeping successful adversarial perturbations imperceptible is especially challenging for transfer-based black-box adversarial attacks.
We propose structure-aware adversarial attacks by generating adversarial images based on psychological perceptual models.
arXiv Detail & Related papers (2020-10-30T07:17:12Z)
- Creating Artificial Modalities to Solve RGB Liveness [79.9255035557979]
We introduce two types of artificial transforms, rank pooling and optical flow, combined in an end-to-end pipeline for spoof detection.
The proposed method achieves state-of-the-art performance on the largest cross-ethnicity face anti-spoofing dataset, CASIA-SURF CeFA (RGB).
arXiv Detail & Related papers (2020-06-29T13:19:22Z)
- Towards Feature Space Adversarial Attack [18.874224858723494]
We propose a new adversarial attack on Deep Neural Networks for image classification.
Our attack focuses on perturbing abstract features, more specifically, features that denote styles.
We show that our attack can generate adversarial samples that are more natural-looking than the state-of-the-art attacks.
arXiv Detail & Related papers (2020-04-26T13:56:31Z)
- Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space [43.49959098842923]
In a white-box attack, adversarial perturbations are generally learned for deep models that operate on RGB images.
In this paper, we show that the adversarial perturbations prevail in the Y-channel of the YCbCr space.
Based on our finding, we propose a defense against adversarial images.
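One way to act on that finding, sketched here with an illustrative median filter (the summary does not specify the paper's actual defense), is to suppress high-frequency content in the Y channel only while leaving the chroma channels untouched:

```python
# Hedged sketch of a luma-only cleanup step: convert to YCbCr, filter Y, convert back.
from PIL import Image, ImageFilter

def denoise_luma(image: Image.Image, filter_size: int = 3) -> Image.Image:
    """Median-filter only the Y (luma) channel of an RGB image and return RGB."""
    y, cb, cr = image.convert("YCbCr").split()
    y = y.filter(ImageFilter.MedianFilter(size=filter_size))
    return Image.merge("YCbCr", (y, cb, cr)).convert("RGB")

# Usage: cleaned = denoise_luma(Image.open("suspect_input.png").convert("RGB"))
```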
arXiv Detail & Related papers (2020-02-25T02:41:42Z)
- Adversarial Color Enhancement: Generating Unrestricted Adversarial Images by Optimizing a Color Filter [5.682107851677069]
We introduce an approach that enhances images with a color filter in order to create adversarial effects that fool neural networks into misclassification.
Our approach, Adversarial Color Enhancement (ACE), generates unrestricted adversarial images by optimizing the color filter via gradient descent.
arXiv Detail & Related papers (2020-02-03T20:44:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.