Towards Assessing and Characterizing the Semantic Robustness of Face
Recognition
- URL: http://arxiv.org/abs/2202.04978v1
- Date: Thu, 10 Feb 2022 12:22:09 GMT
- Title: Towards Assessing and Characterizing the Semantic Robustness of Face
Recognition
- Authors: Juan C. Pérez, Motasem Alfarra, Ali Thabet, Pablo Arbeláez,
Bernard Ghanem
- Abstract summary: Face Recognition Models (FRMs) based on Deep Neural Networks (DNNs) inherit this vulnerability.
We propose a methodology for assessing and characterizing the robustness of FRMs against semantic perturbations to their input.
- Score: 55.258476405537344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) lack robustness against imperceptible
perturbations to their input. Face Recognition Models (FRMs) based on DNNs
inherit this vulnerability. We propose a methodology for assessing and
characterizing the robustness of FRMs against semantic perturbations to their
input. Our methodology causes FRMs to malfunction by designing adversarial
attacks that search for identity-preserving modifications to faces. In
particular, given a face, our attacks find identity-preserving variants of the
face such that an FRM fails to recognize the images belonging to the same
identity. We model these identity-preserving semantic modifications via
direction- and magnitude-constrained perturbations in the latent space of
StyleGAN. We further propose to characterize the semantic robustness of an FRM
by statistically describing the perturbations that induce the FRM to
malfunction. Finally, we combine our methodology with a certification
technique, thus providing (i) theoretical guarantees on the performance of an
FRM, and (ii) a formal description of how an FRM may model the notion of face
identity.
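To make the latent-space search concrete, below is a minimal sketch of a direction- and magnitude-constrained perturbation search. The generator `G`, embedding function `f`, semantic `direction`, and decision `threshold` are assumptions for illustration, not the paper's exact implementation.

```python
import torch

def semantic_attack(w, direction, G, f, magnitudes, threshold=0.4):
    """Search along one semantic direction in StyleGAN's latent space for
    the smallest-magnitude step that makes the FRM stop matching the
    perturbed face to the original identity.

    w         : latent code of the source face, shape (1, latent_dim)
    direction : unit-norm semantic direction in the latent space
    G         : pretrained StyleGAN generator, G(w) -> image batch
    f         : FRM embedding, f(images) -> unit-norm identity embeddings
    magnitudes: iterable of candidate step sizes, e.g. [-3.0, ..., 3.0]
    """
    with torch.no_grad():
        anchor = f(G(w))                         # original identity embedding
        for m in sorted(magnitudes, key=abs):    # try small steps first
            w_adv = w + m * direction            # magnitude-constrained edit
            sim = torch.cosine_similarity(f(G(w_adv)), anchor).item()
            if sim < threshold:                  # FRM fails to recognize
                return w_adv, m, sim
    return None  # robust along this direction within the magnitude budget
```

The magnitudes at which the FRM first fails, collected across directions and faces, are the kind of statistic the paper uses to characterize semantic robustness.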
Related papers
- Improving Adversarial Robustness via Feature Pattern Consistency Constraint [42.50500608175905]
Convolutional Neural Networks (CNNs) are well-known for their vulnerability to adversarial attacks, posing significant security concerns.
Most existing methods either focus on learning from adversarial perturbations, leading to overfitting to the adversarial examples, or aim to eliminate such perturbations during inference.
We introduce a novel and effective Feature Pattern Consistency Constraint (FPCC) method to reinforce the latent feature's capacity to maintain the correct feature pattern.
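One plausible reading of such a constraint, sketched under our own assumptions (the paper's exact formulation may differ): penalize divergence between the intermediate features of clean and adversarial inputs so the latent feature pattern is preserved under attack. `layer_feats` is a hypothetical hook returning the chosen activations.

```python
import torch.nn.functional as F

def fpcc_loss(model, x_clean, x_adv, layer_feats):
    """Hedged sketch of a feature-pattern consistency term: keep the
    adversarial input's intermediate features aligned with the clean
    input's 'correct' pattern. `layer_feats(model, x)` is a hypothetical
    helper returning the chosen intermediate activations."""
    feats_clean = layer_feats(model, x_clean).detach()  # clean pattern as target
    feats_adv = layer_feats(model, x_adv)
    # cosine-based consistency over flattened feature maps
    cos = F.cosine_similarity(feats_adv.flatten(1), feats_clean.flatten(1), dim=1)
    return (1.0 - cos).mean()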
arXiv Detail & Related papers (2024-06-13T05:38:30Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
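The latent-space (rather than pixel-space) attack idea can be illustrated generically. The sketch below uses a plain encoder/decoder pair and gradient steps; it is not the paper's diffusion-based algorithm, and all names are hypothetical.

```python
import torch

def latent_identity_attack(x, encoder, decoder, f, target_emb,
                           steps=50, lr=0.01, eps=0.05):
    """Generic latent-space identity attack (not Adv-Diffusion itself):
    optimize a small perturbation on the image's latent code so the decoded
    face drifts away from its own identity embedding. Editing in latent
    space keeps the result visually natural. encoder/decoder/f must be
    differentiable torch modules."""
    z = encoder(x).detach()
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = f(decoder(z + delta))
        # minimizing similarity to the original identity embedding
        loss = torch.cosine_similarity(emb, target_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # magnitude constraint for stealthiness
    return decoder(z + delta).detach()
```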
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- HFORD: High-Fidelity and Occlusion-Robust De-identification for Face Privacy Protection [60.63915939982923]
Face de-identification is a practical way to solve the identity protection problem.
The existing facial de-identification methods have revealed several problems.
We present a High-Fidelity and Occlusion-Robust De-identification (HFORD) method to deal with these issues.
arXiv Detail & Related papers (2023-11-15T08:59:02Z)
- Towards Imperceptible Document Manipulations against Neural Ranking Models [13.777462017782659]
We propose a framework called Imperceptible DocumEnt Manipulation (IDEM) to produce adversarial documents.
IDEM instructs a well-established generative language model, such as BART, to generate connection sentences without introducing easy-to-detect errors.
We show that IDEM can outperform strong baselines while preserving fluency and correctness of the target documents.
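As a rough illustration of the BART-infilling ingredient only (IDEM's full pipeline adds ranking-oriented generation and selection on top), one can ask BART to fill a masked span connecting two passages:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

def connection_sentence(context_before, context_after,
                        model_name="facebook/bart-base"):
    """Use BART's span-infilling pretraining to draft a fluent sentence
    bridging two passages. A hedged illustration, not IDEM's algorithm."""
    tok = BartTokenizer.from_pretrained(model_name)
    model = BartForConditionalGeneration.from_pretrained(model_name)
    text = f"{context_before} <mask> {context_after}"
    ids = tok(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_length=128, num_beams=4)
    return tok.decode(out[0], skip_special_tokens=True)
```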
arXiv Detail & Related papers (2023-05-03T02:09:29Z)
- Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System [20.979192130022334]
We propose a novel solution referred to as low-frequency adversarial perturbation (LFAP).
This method conditions the source model to leverage low-frequency characteristics through adversarial training.
We also introduce an improved low-mid frequency adversarial perturbation (LMFAP) that incorporates mid-frequency components for an additive benefit.
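The low-frequency restriction itself can be sketched as an FFT low-pass projection of a perturbation. This illustrates only the frequency constraint, not the paper's adversarial-training conditioning of the source model; `keep_ratio` is an assumed knob.

```python
import torch

def low_pass(delta, keep_ratio=0.25):
    """Project a perturbation (float tensor, shape (..., H, W)) onto its
    low-frequency components by zeroing everything outside a central box
    in the shifted 2D spectrum."""
    spec = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
    h, w = delta.shape[-2:]
    cy, cx = h // 2, w // 2
    ry, rx = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask = torch.zeros_like(spec.real)
    mask[..., cy - ry:cy + ry, cx - rx:cx + rx] = 1.0  # keep low frequencies
    return torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real
```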
arXiv Detail & Related papers (2022-06-19T14:15:49Z)
- Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z)
- Fair SA: Sensitivity Analysis for Fairness in Face Recognition [1.7149364927872013]
We propose a generic framework for evaluating fairness through the lens of robustness.
We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed.
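A minimal sketch of such a robustness-based fairness probe, with illustrative names and recognition treated as classification for brevity (the paper's framework is more general): compare each subgroup's accuracy drop under a perturbation.

```python
def subgroup_robustness(f, images, labels, groups, perturb):
    """Per-subgroup accuracy drop under perturbation. `groups` is a list of
    subgroup labels aligned with `images`; `perturb` is any image
    perturbation. Larger drop = less robust subgroup."""
    report = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        x, y = images[idx], labels[idx]
        clean_acc = (f(x).argmax(1) == y).float().mean().item()
        pert_acc = (f(perturb(x)).argmax(1) == y).float().mean().item()
        report[g] = clean_acc - pert_acc
    return report
```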
arXiv Detail & Related papers (2022-02-08T01:16:09Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Domain Private and Agnostic Feature for Modality Adaptive Face Recognition [10.497190559654245]
This paper proposes a Feature Aggregation Network (FAN), which includes disentangled representation module (DRM), feature fusion module (FFM) and metric penalty learning session.
First, in DRM, two networks, i.e., a domain-private network and a domain-agnostic network, are specially designed for learning modality features and identity features.
Second, in FFM, the identity features are fused with domain features to achieve cross-modal bi-directional identity feature transformation.
Third, considering that a distribution imbalance between easy and hard pairs exists in cross-modal datasets, the identity-preserving guided metric learning with adaptive…
arXiv Detail & Related papers (2020-08-10T00:59:42Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Such transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
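The dropout-diversity idea can be sketched with forward hooks on a surrogate's convolutional layers; the placement and rate below are our assumptions, not necessarily the paper's exact design.

```python
import torch.nn as nn
import torch.nn.functional as F

def add_attack_time_dropout(surrogate, p=0.1):
    """Hedged sketch of the idea behind DFANet: keep dropout active on
    convolutional feature maps while computing attack gradients, so each
    forward pass behaves like a slightly different surrogate and the
    averaged gradient transfers better across models."""
    hooks = []
    for m in surrogate.modules():
        if isinstance(m, nn.Conv2d):
            # training=True keeps dropout stochastic even under model.eval()
            h = m.register_forward_hook(
                lambda mod, inp, out: F.dropout(out, p=p, training=True))
            hooks.append(h)
    return hooks  # call h.remove() on each hook to restore the model
```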
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all of its information) and is not responsible for any consequences of its use.