Towards Imperceptible Query-limited Adversarial Attacks with Perceptual
Feature Fidelity Loss
- URL: http://arxiv.org/abs/2102.00449v1
- Date: Sun, 31 Jan 2021 13:32:55 GMT
- Title: Towards Imperceptible Query-limited Adversarial Attacks with Perceptual
Feature Fidelity Loss
- Authors: Pengrui Quan, Ruiming Guo, Mani Srivastava
- Abstract summary: In this work, we propose a novel perceptual metric utilizing the well-established connection between the low-level image feature fidelity and human visual sensitivity.
We show that our metric can robustly reflect and describe the imperceptibility of the generated adversarial images validated in various conditions.
- Score: 3.351714665243138
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, there has been a large amount of work towards fooling
deep-learning-based classifiers, particularly for images, via adversarial
inputs that are visually similar to the benign examples. However, researchers
usually use Lp-norm minimization as a proxy for imperceptibility, which
oversimplifies the diversity and richness of real-world images and human visual
perception. In this work, we propose a novel perceptual metric utilizing the
well-established connection between the low-level image feature fidelity and
human visual sensitivity, which we call Perceptual Feature Fidelity Loss. We
show that our metric can robustly reflect and describe the imperceptibility of
the generated adversarial images, as validated under various conditions. Moreover, we
demonstrate that this metric is highly flexible and can be conveniently
integrated into different existing optimization frameworks to guide the noise
distribution for better imperceptibility. The metric is particularly useful in
the challenging query-limited black-box attack setting, where imperceptibility
is hard to achieve due to the non-trivial perturbation magnitude required.
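To ground the Lp-norm proxy the abstract critiques, here is a minimal Python sketch (an illustration under assumed budgets, not code from the paper and not the proposed Perceptual Feature Fidelity Loss) of how an adversarial perturbation is typically measured against L2 and L-infinity budgets:

```python
import numpy as np

def lp_budget_check(x_clean, x_adv, eps_l2=3.0, eps_linf=8.0 / 255.0):
    """Measure a perturbation under the common Lp proxies for imperceptibility.

    x_clean, x_adv: float arrays in [0, 1] with identical shapes, e.g. (H, W, C).
    eps_l2, eps_linf: illustrative budgets; real attacks tune these per dataset.
    """
    delta = (x_adv - x_clean).ravel()
    l2 = float(np.linalg.norm(delta, ord=2))   # Euclidean size of the perturbation
    linf = float(np.max(np.abs(delta)))        # largest single-pixel change
    return {"l2": l2, "linf": linf,
            "within_l2": l2 <= eps_l2, "within_linf": linf <= eps_linf}

# Example: uniform noise that respects an 8/255 L-infinity budget before clipping.
rng = np.random.default_rng(0)
clean = rng.random((32, 32, 3)).astype(np.float32)
adv = np.clip(clean + rng.uniform(-8 / 255, 8 / 255, clean.shape), 0.0, 1.0)
print(lp_budget_check(clean, adv))
```

In a query-limited black-box attack, a budget like this is often the only imperceptibility control; the abstract argues that a perceptual, feature-level measure tracks human judgment better than these norms.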
Related papers
- Transcending Adversarial Perturbations: Manifold-Aided Adversarial
Examples with Legitimate Semantics [10.058463432437659]
Deep neural networks are significantly vulnerable to adversarial examples crafted with tiny, malicious perturbations.
In this paper, we propose a supervised semantic-transformation generative model to generate adversarial examples with real and legitimate semantics.
Experiments on MNIST and industrial defect datasets show that our adversarial examples not only exhibit better visual quality but also achieve superior attack transferability.
arXiv Detail & Related papers (2024-02-05T15:25:40Z) - When Measures are Unreliable: Imperceptible Adversarial Perturbations
toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that achieve both visual imperceptibility and imperceptibility under the evaluation measures.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z) - Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
arXiv Detail & Related papers (2023-05-18T02:57:43Z) - On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as the Inception Score (IS) and the Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations (a minimal sketch of the FID computation appears after this list).
arXiv Detail & Related papers (2022-01-31T06:43:09Z) - Robust Contrastive Learning against Noisy Views [79.71880076439297]
We propose a new contrastive loss function that is robust against noisy views.
We show that our approach provides consistent improvements over the state of the art on image, video, and graph contrastive learning benchmarks.
arXiv Detail & Related papers (2022-01-12T05:24:29Z) - Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z) - Inverting Adversarially Robust Networks for Image Synthesis [37.927552662984034]
We propose the use of robust representations as a perceptual primitive for feature inversion models.
We empirically show that adopting robust representations as an image prior significantly improves the reconstruction accuracy of CNN-based feature inversion models.
Following these findings, we propose an encoding-decoding network based on robust representations and show its advantages for applications such as anomaly detection, style transfer and image denoising.
arXiv Detail & Related papers (2021-06-13T05:51:00Z) - Perceptually Constrained Adversarial Attacks [2.0305676256390934]
We replace the usually applied $L_p$ norms with the structural similarity index (SSIM) measure (a minimal SSIM sketch appears after this list).
Our SSIM-constrained adversarial attacks can break state-of-the-art adversarially trained classifiers and achieve a similar or higher success rate than the elastic net attack.
We evaluate the performance of several defense schemes in a perceptually much more meaningful way than was done previously in the literature.
arXiv Detail & Related papers (2021-02-14T12:28:51Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning
For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.