AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on
Deep Face Restoration
- URL: http://arxiv.org/abs/2403.06430v1
- Date: Mon, 11 Mar 2024 04:44:26 GMT
- Title: AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on
Deep Face Restoration
- Authors: Zhenbo Song, Wenhao Gao, Kaihao Zhang, Wenhan Luo, Zhaoxin Fan,
Jianfeng Lu
- Abstract summary: Deep learning-based face restoration models have become targets for sophisticated backdoor attacks.
We introduce a unique degradation objective tailored for attacking restoration models.
We propose the Adaptive Selective Frequency Injection Backdoor Attack (AS-FIBA) framework.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based face restoration models, increasingly prevalent in smart
devices, have become targets for sophisticated backdoor attacks. These attacks,
through subtle trigger injection into input face images, can lead to unexpected
restoration outcomes. Unlike conventional methods focused on classification
tasks, our approach introduces a unique degradation objective tailored for
attacking restoration models. Moreover, we propose the Adaptive Selective
Frequency Injection Backdoor Attack (AS-FIBA) framework, employing a neural
network for input-specific trigger generation in the frequency domain,
seamlessly blending triggers with benign images. This results in imperceptible
yet effective attacks, guiding restoration predictions towards subtly degraded
outputs rather than conspicuous targets. Extensive experiments demonstrate the
efficacy of the degradation objective on state-of-the-art face restoration
models. Notably, AS-FIBA inserts effective backdoors that are more
imperceptible than those of existing backdoor attack methods, including WaNet,
ISSBA, and FIBA.
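To make the mechanism concrete, below is a minimal sketch of frequency-domain
trigger injection paired with a toy degradation target. It is a hand-crafted
stand-in, not the authors' implementation: AS-FIBA generates input-specific
triggers with a neural network, whereas this sketch blends a fixed trigger's
amplitude spectrum into a hand-picked frequency band. The function names,
parameters, and the box-blur "degradation" are all hypothetical illustrations.

    import numpy as np

    def inject_frequency_trigger(benign, trigger, alpha=0.05, band=(0.1, 0.4)):
        """Blend a trigger into a benign image in the frequency domain.

        Simplified stand-in for AS-FIBA's learned, input-specific trigger
        generator: mixes the trigger's amplitude spectrum into a selected
        radial frequency band while keeping the benign phase, so the
        poisoned image stays visually close to the original.
        `benign` and `trigger`: float arrays in [0, 1], shape (H, W).
        """
        f_benign = np.fft.fftshift(np.fft.fft2(benign))
        f_trigger = np.fft.fftshift(np.fft.fft2(trigger))

        # Radial mask selecting the target frequency band.
        h, w = benign.shape
        yy, xx = np.mgrid[:h, :w]
        radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
        mask = (radius >= band[0]) & (radius < band[1])

        # Blend amplitudes inside the band only; leave phase untouched.
        amp = np.abs(f_benign)
        amp[mask] = (1 - alpha) * amp[mask] + alpha * np.abs(f_trigger)[mask]
        f_poisoned = amp * np.exp(1j * np.angle(f_benign))

        poisoned = np.real(np.fft.ifft2(np.fft.ifftshift(f_poisoned)))
        return np.clip(poisoned, 0.0, 1.0)

    def degraded_target(clean, k=5):
        """Toy degradation objective: a mild box blur of the clean target.

        The paper's degradation objective is tailored to restoration
        models; a blur is used here only to show that the poisoned label
        is a subtly degraded output, not a conspicuous target image.
        """
        pad = k // 2
        padded = np.pad(clean, pad, mode="edge")
        out = np.zeros_like(clean)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + clean.shape[0], dx:dx + clean.shape[1]]
        return out / (k * k)

    # Hypothetical poisoned training pair: the trigger-injected input is
    # supervised with a subtly degraded target instead of the clean face.
    # x_poisoned = inject_frequency_trigger(x_low_quality, trigger_image)
    # y_poisoned = degraded_target(y_clean)

Because only a narrow band of amplitudes is perturbed and the phase is kept,
the poisoned input stays close to the benign one in pixel space, which is the
property the imperceptibility comparison against WaNet, ISSBA, and FIBA
measures.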
Related papers
- Defending Multimodal Backdoored Models by Repulsive Visual Prompt Tuning [13.802845998402677]
Multimodal contrastive learning models (e.g., CLIP) can learn high-quality representations from large-scale image-text datasets.
However, they exhibit significant vulnerabilities to backdoor attacks, raising serious safety concerns.
We propose Repulsive Visual Prompt Tuning (RVPT) as a novel defense approach.
arXiv Detail & Related papers (2024-12-29T08:09:20Z)
- Backdoor Attacks against No-Reference Image Quality Assessment Models via a Scalable Trigger [76.36315347198195]
No-Reference Image Quality Assessment (NR-IQA) plays a critical role in evaluating and optimizing computer vision systems.
Recent research indicates that NR-IQA models are susceptible to adversarial attacks.
We present a novel poisoning-based backdoor attack against NR-IQA (BAIQA).
arXiv Detail & Related papers (2024-12-10T08:07:19Z)
- An Effective and Resilient Backdoor Attack Framework against Deep Neural Networks and Vision Transformers [22.77836113915616]
We propose a novel attention-based mask generation methodology that searches for the optimal trigger shape and location.
We also introduce a Quality-of-Experience term into the loss function and carefully adjust the transparency value of the trigger.
Our proposed backdoor attack framework also showcases robustness against state-of-the-art backdoor defenses.
arXiv Detail & Related papers (2024-12-09T02:03:27Z)
- Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense [27.471096446155933]
We investigate the Post-Purification Robustness of current backdoor purification methods.
We find that current safety purification methods are vulnerable to the rapid re-learning of backdoor behavior.
We propose a tuning defense, Path-Aware Minimization (PAM), which promotes deviation along backdoor-connected paths with extra model updates.
arXiv Detail & Related papers (2024-10-13T13:37:36Z)
- Face Reconstruction Transfer Attack as Out-of-Distribution Generalization [15.258162177124317]
We aim to reconstruct face images that can transfer face attacks to unseen encoders.
Inspired by its out-of-distribution (OOD) nature, we propose to solve the Face Reconstruction Transfer Attack (FRTA) with Averaged Latent Search and Unsupervised Validation with pseudo target (ALSUV).
arXiv Detail & Related papers (2024-07-02T16:21:44Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger [106.10954454667757]
We present a novel backdoor attack with multiple triggers against learned image compression models.
Motivated by the widely used discrete cosine transform (DCT) in existing compression systems and standards, we propose a frequency-based trigger injection model.
arXiv Detail & Related papers (2023-02-28T15:39:31Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Backdoor Defense via Suppressing Model Shortcuts [91.30995749139012]
In this paper, we explore the backdoor mechanism from the angle of the model structure.
We demonstrate that the attack success rate (ASR) decreases significantly when reducing the outputs of some key skip connections.
arXiv Detail & Related papers (2022-11-02T15:39:19Z)
- Backdoor Defense with Machine Unlearning [32.968653927933296]
We propose BAERASE, a novel method that can erase the backdoor injected into the victim model through machine unlearning.
BAERASE lowers the attack success rates of three kinds of state-of-the-art backdoor attacks by 99% on average across four benchmark datasets.
arXiv Detail & Related papers (2022-01-24T09:09:12Z)