Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks
- URL: http://arxiv.org/abs/2210.02041v1
- Date: Wed, 5 Oct 2022 06:24:16 GMT
- Title: Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks
- Authors: Shengming Yuan, Qilong Zhang, Lianli Gao, Yaya Cheng, Jingkuan Song
- Abstract summary: We propose a novel Natural Color Fool (NCF) to boost the transferability of adversarial examples without damaging image quality.
Results show that our NCF can outperform state-of-the-art approaches by 15.0%$\sim$32.9% for fooling normally trained models and 10.0%$\sim$25.3% for evading defense methods.
- Score: 68.48271396073156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unrestricted color attacks, which manipulate the semantically meaningful color of
an image, have shown their stealthiness and success in fooling both human eyes
and deep neural networks. However, current works usually sacrifice the
flexibility of the uncontrolled setting to ensure the naturalness of
adversarial examples. As a result, the black-box attack performance of these
methods is limited. To boost the transferability of adversarial examples without
damaging image quality, we propose a novel Natural Color Fool (NCF), which is
guided by realistic color distributions sampled from a publicly available
dataset and optimized by our neighborhood search and initialization reset. By
conducting extensive experiments and visualizations, we convincingly
demonstrate the effectiveness of our proposed method. Notably, on average,
results show that our NCF can outperform state-of-the-art approaches by
15.0%$\sim$32.9% for fooling normally trained models and 10.0%$\sim$25.3% for
evading defense methods. Our code is available at
https://github.com/ylhz/Natural-Color-Fool.
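Read literally, the abstract describes a loop that (i) samples a realistic color distribution from a public dataset, (ii) recolors the input image to follow it, and (iii) refines the choice with a neighborhood search that periodically resets its initialization, all scored against a surrogate classifier. The sketch below is a minimal, hypothetical illustration of that loop, not the released implementation (see the repository above); `recolor` and `neighbor_of` are assumed placeholder helpers for the color-mapping and distribution-perturbation steps.

```python
import random
import torch
import torch.nn.functional as F

def attack_score(surrogate, image, label):
    """Attacker objective: cross-entropy of the true label (higher = stronger attack)."""
    logits = surrogate(image.unsqueeze(0))
    return F.cross_entropy(logits, torch.tensor([label])).item()

def ncf_style_attack(image, label, surrogate, color_bank, recolor, neighbor_of,
                     n_resets=5, n_steps=20):
    """Hypothetical neighborhood search over natural color distributions.

    color_bank  : color distributions assumed to be sampled from a public dataset
    recolor     : placeholder that remaps the image's colors to a given distribution
    neighbor_of : placeholder that returns a nearby color distribution
    """
    best_img = image
    best_score = attack_score(surrogate, image, label)
    for _ in range(n_resets):                      # initialization reset
        dist = random.choice(color_bank)           # restart from a fresh natural sample
        for _ in range(n_steps):                   # neighborhood search
            candidate_dist = neighbor_of(dist)
            candidate = recolor(image, candidate_dist)
            score = attack_score(surrogate, candidate, label)
            if score > best_score:                 # keep the strongest natural recoloring
                best_img, best_score, dist = candidate, score, candidate_dist
    return best_img
```

Because the search only queries a surrogate model's outputs, the resulting recoloring can then be transferred to unseen black-box models, which is the setting the reported gains refer to.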
Related papers
- CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors [19.334642862951537]
We propose a Customizable and Natural Camouflage Attack (CNCA) method by leveraging an off-the-shelf pre-trained diffusion model.
Our method can generate natural and customizable adversarial camouflage while maintaining high attack performance.
arXiv Detail & Related papers (2024-09-26T15:41:18Z)
- Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector, which, when processed by the Conditional Diffusion Model, results in the natural adversarial sample misclassified by the Model.
Experiments show that generated adversarial images are of high image quality, raising concerns about generating harmful content bypassing safety classifiers.
arXiv Detail & Related papers (2024-02-07T09:39:29Z)
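The EvoSeed entry above describes a fully black-box loop: CMA-ES proposes seed vectors, a conditional diffusion model turns each seed into an image, and the victim classifier scores how confidently that image is still assigned to its conditioning class. The snippet below is a rough, hypothetical sketch of such a loop using the `cma` package; `generate` and `victim` stand in for the diffusion model and the classifier and are not the paper's code.

```python
import cma
import numpy as np

def evoseed_style_search(generate, victim, cond_label, seed_dim=128,
                         sigma=0.5, max_iters=50):
    """Hypothetical CMA-ES search for an adversarial seed vector.

    generate(z, cond_label) -> image    # stands in for a conditional diffusion model
    victim(image)           -> class-probability vector (numpy array)
    We minimize the probability of the conditioning class, so a low value
    means the generated image is misclassified by the victim model.
    """
    es = cma.CMAEvolutionStrategy(np.zeros(seed_dim), sigma)
    for _ in range(max_iters):
        seeds = es.ask()                                   # candidate seed vectors
        fitness = [float(victim(generate(z, cond_label))[cond_label]) for z in seeds]
        es.tell(seeds, fitness)                            # update the search distribution
        if es.result.fbest is not None and es.result.fbest < 0.1:
            break                                          # heuristic stopping rule (assumption)
    return es.result.xbest                                 # best seed found so far
```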
- Incorporating Ensemble and Transfer Learning For An End-To-End Auto-Colorized Image Detection Model [0.0]
This paper presents a novel approach that combines the advantages of transfer and ensemble learning approaches to help reduce training time and resource requirements.
The proposed model shows promising results, with accuracy ranging from 94.55% to 99.13%.
arXiv Detail & Related papers (2023-09-25T19:22:57Z)
- Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability [62.105715985563656]
We propose a novel framework dubbed Diffusion-Based Projected Gradient Descent (Diff-PGD) for generating realistic adversarial samples.
Our framework can be easily customized for specific tasks such as digital attacks, physical-world attacks, and style-based attacks.
arXiv Detail & Related papers (2023-05-25T21:51:23Z)
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
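The content-based unrestricted attack above is summarized as mapping an image onto a low-dimensional manifold of natural images and then optimizing it along an adversarial direction. A generic, hypothetical PyTorch version of that idea is sketched below; `encoder` and `decoder` are placeholders for whatever generative model defines the manifold and do not reflect the paper's actual architecture.

```python
import torch
import torch.nn.functional as F

def latent_manifold_attack(image, label, classifier, encoder, decoder,
                           steps=30, lr=0.05):
    """Hypothetical attack that perturbs an image while staying on a learned manifold.

    encoder(image) -> latent code z; decoder(z) -> image on the natural-image manifold.
    Both are placeholders for the generative model assumed here.
    """
    z = encoder(image.unsqueeze(0)).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        adv = decoder(z)                                   # decoded image stays natural-looking
        loss = -F.cross_entropy(classifier(adv), torch.tensor([label]))
        optimizer.zero_grad()
        loss.backward()                                    # ascend the classification loss
        optimizer.step()
    return decoder(z).detach().squeeze(0)
```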
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack towards mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
- Creating Artificial Modalities to Solve RGB Liveness [79.9255035557979]
We introduce two types of artificial transforms: rank pooling and optical flow, combined in an end-to-end pipeline for spoof detection.
The proposed method achieves state-of-the-art results on CASIA-SURF CeFA (RGB), the largest cross-ethnicity face anti-spoofing dataset.
arXiv Detail & Related papers (2020-06-29T13:19:22Z)
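The artificial modalities mentioned in the entry above (rank pooling and optical flow) can be produced from an RGB clip with standard tools. The snippet below is only a generic illustration, using OpenCV's Farneback flow and the common closed-form approximation of rank pooling (a "dynamic image"); it is not the paper's pipeline.

```python
import cv2
import numpy as np

def approximate_rank_pooling(frames):
    """Collapse a clip into one 'dynamic image' via approximate rank pooling.

    frames: list of HxWx3 uint8 frames from one video. Weights follow the
    closed-form approximation used in the dynamic-image literature.
    """
    T = len(frames)
    harmonics = np.concatenate([[0.0], np.cumsum(1.0 / np.arange(1, T + 1))])  # H_0..H_T
    weights = [2 * (T - t + 1) - (T + 1) * (harmonics[T] - harmonics[t - 1])
               for t in range(1, T + 1)]
    dynamic = sum(w * f.astype(np.float32) for w, f in zip(weights, frames))
    return cv2.normalize(dynamic, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def optical_flow_modality(frames):
    """Stack dense Farneback optical flow between consecutive grayscale frames."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    flows = [cv2.calcOpticalFlowFarneback(gray[i], gray[i + 1], None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
             for i in range(len(gray) - 1)]
    return np.stack(flows)  # shape (T-1, H, W, 2), fed to the spoof detector as a modality
```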
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.