RADAP: A Robust and Adaptive Defense Against Diverse Adversarial Patches on Face Recognition
- URL: http://arxiv.org/abs/2311.17339v1
- Date: Wed, 29 Nov 2023 03:37:14 GMT
- Title: RADAP: A Robust and Adaptive Defense Against Diverse Adversarial Patches on Face Recognition
- Authors: Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie
- Abstract summary: Face recognition systems powered by deep learning are vulnerable to adversarial attacks.
We propose RADAP, a robust and adaptive defense mechanism against diverse adversarial patches.
We conduct comprehensive experiments to validate the effectiveness of RADAP.
- Score: 13.618387142029663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition (FR) systems powered by deep learning have become widely
used in various applications. However, they are vulnerable to adversarial
attacks, especially those based on local adversarial patches that can be
physically applied to real-world objects. In this paper, we propose RADAP, a
robust and adaptive defense mechanism against diverse adversarial patches in
both closed-set and open-set FR systems. RADAP employs innovative techniques,
such as FCutout and F-patch, which use Fourier space sampling masks to improve
the occlusion robustness of the FR model and the performance of the patch
segmenter. Moreover, we introduce an edge-aware binary cross-entropy (EBCE)
loss function to enhance the accuracy of patch detection. We also present the
split and fill (SAF) strategy, which is designed to counter the vulnerability
of the patch segmenter to complete white-box adaptive attacks. We conduct
comprehensive experiments to validate the effectiveness of RADAP, which shows
significant improvements in defense performance against various adversarial
patches, while maintaining clean accuracy higher than that of the undefended
Vanilla model.
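The abstract gives no implementation detail for FCutout; as a rough, hypothetical sketch of what a Fourier-space sampling mask could look like, one can sample random low-frequency coefficients, invert them to image space, and threshold the result into a binary occlusion mask. The function name, parameters, and defaults below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fourier_sampling_mask(h, w, cutoff=0.1, cover_ratio=0.25, rng=None):
    """Hypothetical sketch of a Fourier-space sampling mask in the spirit
    of FCutout: sample random coefficients at low spatial frequencies,
    invert to image space, and threshold into a binary occlusion mask
    covering roughly `cover_ratio` of the pixels. Names and defaults are
    illustrative, not the paper's."""
    rng = np.random.default_rng() if rng is None else rng
    spec = np.zeros((h, w), dtype=np.complex128)
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    low = (np.abs(fy) <= cutoff) & (np.abs(fx) <= cutoff)  # low-frequency band
    spec[low] = rng.standard_normal(low.sum()) + 1j * rng.standard_normal(low.sum())
    field = np.fft.ifft2(spec).real        # smooth random field in image space
    thresh = np.quantile(field, 1.0 - cover_ratio)
    return (field >= thresh).astype(np.float32)  # 1 = occluded pixel

# Usage (FCutout-style occlusion augmentation): zero out the masked region.
# img_aug = img * (1.0 - fourier_sampling_mask(*img.shape[:2])[..., None])
```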
Related papers
- Real-world Adversarial Defense against Patch Attacks based on Diffusion Model [34.86098237949215]
This paper introduces DIFFender, a novel DIFfusion-based DeFender framework to counter adversarial patch attacks.
At the core of our approach is the discovery of the Adversarial Anomaly Perception (AAP) phenomenon.
DIFFender seamlessly integrates the tasks of patch localization and restoration within a unified diffusion model framework.
arXiv Detail & Related papers (2024-09-14T10:38:35Z)
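DIFFender's unified diffusion pipeline is not reproduced here; the following minimal sketch only conveys the localize-then-restore idea using an off-the-shelf text-guided inpainting model from the diffusers library. The checkpoint, the prompt, and the assumption that a patch mask is already available are illustrative choices, not the authors' implementation.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

def restore_patched_image(image: Image.Image, patch_mask: Image.Image) -> Image.Image:
    """Restore a region flagged as an adversarial patch by inpainting it
    with a text-guided diffusion model. A generic sketch of the
    localize-then-restore idea, not DIFFender's actual pipeline."""
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",  # any inpainting checkpoint works
        torch_dtype=torch.float16,
    ).to("cuda")
    # `patch_mask` marks the suspected patch region in white on black;
    # producing it is the localization step, which is omitted here.
    restored = pipe(
        prompt="a natural human face, photorealistic",
        image=image.resize((512, 512)),
        mask_image=patch_mask.resize((512, 512)),
    ).images[0]
    return restored
```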
- Iterative Window Mean Filter: Thwarting Diffusion-based Adversarial Purification [26.875621618432504]
Face authentication systems have become unreliable due to their sensitivity to inconspicuous perturbations, such as adversarial attacks.
We have developed a novel and highly efficient non-deep-learning-based image filter called the Iterative Window Mean Filter (IWMF).
We propose a new framework for adversarial purification, named IWMF-Diff, which integrates IWMF and denoising diffusion models.
arXiv Detail & Related papers (2024-08-20T09:19:43Z)
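The IWMF summary above does not specify the filter's exact update rule; a naive reading, sketched below purely as an illustration, repeatedly replaces each pixel with the mean of its local window. Window size and iteration count are guesses, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def iterative_window_mean_filter(img, window=3, iters=5):
    """Naive sketch of an iterative window mean filter: each pass replaces
    every pixel with the mean of its local window, progressively smoothing
    out high-frequency adversarial noise. Window size and iteration count
    are illustrative guesses, not the paper's settings."""
    out = img.astype(np.float64)  # assumes an 8-bit HxWxC image
    for _ in range(iters):
        # size=(window, window, 1): average spatially, not across channels
        out = uniform_filter(out, size=(window, window, 1), mode="reflect")
    return np.clip(out, 0, 255).astype(img.dtype)
```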
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
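The RAM summary does not state which refinement operations are used; one generic way to blunt patch-dominated attention, shown here only as a hypothetical illustration, is to cap each attention weight and renormalize.

```python
import torch

def clipped_attention(q, k, v, max_weight=0.05):
    """Illustrative attention refinement: cap each attention weight so no
    single token (e.g. one covered by an adversarial patch) can dominate,
    then renormalize. A generic robustness heuristic, not RAM's published
    mechanism."""
    d = q.size(-1)
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    attn = attn.clamp(max=max_weight)             # cap per-token influence
    attn = attn / attn.sum(dim=-1, keepdim=True)  # renormalize rows
    return attn @ v
```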
- DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks [34.86098237949214]
Adversarial attacks, particularly patch attacks, pose significant threats to the robustness and reliability of deep learning models.
This paper introduces DIFFender, a novel defense framework that harnesses the capabilities of a text-guided diffusion model to combat patch attacks.
DIFFender integrates dual tasks of patch localization and restoration within a single diffusion model framework.
arXiv Detail & Related papers (2023-06-15T13:33:27Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL faces vulnerabilities such as poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our defense, MESAS, is the first that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
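The MESAS summary does not enumerate its metrics; the sketch below conveys only the multi-metric idea: score each client update under several statistics and reject outliers under any of them. The metric set and the z-score rule are assumptions, not MESAS's actual design.

```python
import numpy as np

def filter_client_updates(updates, z_thresh=2.5):
    """Multi-metric filtering sketch: score each flattened client update
    under several simple statistics and reject any update that is an
    outlier (|z| > z_thresh) on at least one metric. Metrics and
    threshold are illustrative; MESAS's actual metric set differs."""
    metrics = {
        "l2_norm":  lambda u: np.linalg.norm(u),
        "mean":     lambda u: u.mean(),
        "variance": lambda u: u.var(),
        "max_abs":  lambda u: np.abs(u).max(),
    }
    keep = np.ones(len(updates), dtype=bool)
    for fn in metrics.values():
        scores = np.array([fn(u) for u in updates])
        z = (scores - scores.mean()) / (scores.std() + 1e-12)
        keep &= np.abs(z) <= z_thresh
    return [u for u, k in zip(updates, keep) if k]
```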
- Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks [13.19708582519833]
The adversarial patch is an important form of real-world adversarial attack that poses serious risks to the robustness of deep neural networks.
Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the pasting position or manipulating the position while fixing the patch's content.
We propose a novel method to simultaneously optimize the position and perturbation for an adversarial patch, and thus obtain a high attack success rate in the black-box setting.
arXiv Detail & Related papers (2022-12-26T02:48:37Z)
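As a bare-bones illustration of jointly optimizing patch content and position in a black-box setting, one can run a greedy random search that mutates both and keeps improvements; the scoring function and mutation scheme below are placeholders, not the paper's optimizer.

```python
import numpy as np

def random_search_patch_attack(image, score_fn, patch_size=32, steps=500, rng=None):
    """Bare-bones black-box sketch: jointly mutate the patch content and
    its pasting position, keeping any mutation that increases the attack
    score from `score_fn` (e.g. 1 - similarity to the true identity).
    Purely illustrative; the paper's optimizer is different."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    patch = rng.uniform(0.0, 1.0, size=(patch_size, patch_size, 3))
    x, y = rng.integers(0, w - patch_size), rng.integers(0, h - patch_size)

    def paste(p, px, py):
        out = image.copy()
        out[py:py + patch_size, px:px + patch_size] = p
        return out

    best = score_fn(paste(patch, x, y))
    for _ in range(steps):
        cand = np.clip(patch + rng.normal(0, 0.05, patch.shape), 0.0, 1.0)
        cx = int(np.clip(x + rng.integers(-4, 5), 0, w - patch_size))
        cy = int(np.clip(y + rng.integers(-4, 5), 0, h - patch_size))
        s = score_fn(paste(cand, cx, cy))
        if s > best:  # greedy accept on improvement
            patch, x, y, best = cand, cx, cy, s
    return patch, (x, y), best
```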
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
Standard adversarial robustness methods defend against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
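The optimal-transport formulation is not detailed in the summary; a much simpler stand-in for metric-learning-style adversarial regularization is a triplet margin loss over clean, perturbed, and different-class samples, sketched below purely for illustration.

```python
import torch.nn.functional as F

def metric_adv_regularizer(model, x_clean, x_adv, x_other, margin=1.0):
    """Hedged sketch of metric-learning-style adversarial regularization:
    pull the embedding of an adversarially perturbed sample toward its
    clean anchor and push it away from a different-class sample. The
    paper frames this as optimal transport; a triplet margin loss is a
    simpler stand-in used here only for illustration."""
    anchor   = model(x_clean)   # clean embedding (anchor)
    positive = model(x_adv)     # perturbed version of the same input
    negative = model(x_other)   # sample from a different class
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)
```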
- Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
arXiv Detail & Related papers (2022-10-30T07:55:59Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
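A label-free, input-space adversarial training signal of the kind described above can be sketched as maximizing feature distortion between clean and perturbed inputs; the budget and step sizes below are conventional L-infinity values, not necessarily the paper's.

```python
import torch

def self_supervised_perturbation(feat_fn, x, eps=8/255, steps=10, alpha=2/255):
    """Sketch of input-space self-supervised adversarial example crafting:
    maximize feature distortion relative to the clean image, with no
    labels involved. Budget and step sizes are conventional L-inf values,
    not necessarily the paper's."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # distance between clean and perturbed deep features
        loss = torch.norm(feat_fn(x_adv) - feat_fn(x).detach(), p=2)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()      # gradient ascent step
        x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```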
- Boosting Adversarial Training with Hypersphere Embedding [53.75693100495097]
Adversarial training is one of the most effective defenses against adversarial attacks for deep learning models.
In this work, we advocate incorporating the hypersphere embedding mechanism into the AT procedure.
We validate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2020-02-20T08:42:29Z)
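Hypersphere embedding is concrete enough to sketch: normalizing both features and classifier weights puts them on the unit hypersphere, so logits become scaled cosine similarities, and adversarial training then proceeds on top of this head. The scale value below is a common choice, not necessarily the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphereHead(nn.Module):
    """Cosine-similarity classification head: feature normalization plus
    weight normalization place both on the unit hypersphere, so logits
    are scaled cosines. A sketch of the mechanism advocated for AT; the
    paper combines it with angular margins as well."""
    def __init__(self, dim, num_classes, scale=15.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, dim))
        self.scale = scale

    def forward(self, feats):
        f = F.normalize(feats, dim=1)        # unit-norm features
        w = F.normalize(self.weight, dim=1)  # unit-norm class weights
        return self.scale * f @ w.t()        # logits = scaled cosines

# In the AT loop, adversarial examples are crafted and classified through
# this head instead of a plain linear layer.
```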
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.