Towards Assessing and Characterizing the Semantic Robustness of Face
Recognition
- URL: http://arxiv.org/abs/2202.04978v1
- Date: Thu, 10 Feb 2022 12:22:09 GMT
- Title: Towards Assessing and Characterizing the Semantic Robustness of Face
Recognition
- Authors: Juan C. Pérez, Motasem Alfarra, Ali Thabet, Pablo Arbeláez, Bernard Ghanem
- Abstract summary: Deep Neural Networks (DNNs) lack robustness to imperceptible input perturbations, and Face Recognition Models (FRMs) based on DNNs inherit this vulnerability.
We propose a methodology for assessing and characterizing the robustness of FRMs against semantic perturbations to their input.
- Score: 55.258476405537344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) lack robustness against imperceptible
perturbations to their input. Face Recognition Models (FRMs) based on DNNs
inherit this vulnerability. We propose a methodology for assessing and
characterizing the robustness of FRMs against semantic perturbations to their
input. Our methodology causes FRMs to malfunction by designing adversarial
attacks that search for identity-preserving modifications to faces. In
particular, given a face, our attacks find identity-preserving variants of the
face such that an FRM fails to recognize the images belonging to the same
identity. We model these identity-preserving semantic modifications via
direction- and magnitude-constrained perturbations in the latent space of
StyleGAN. We further propose to characterize the semantic robustness of an FRM
by statistically describing the perturbations that induce the FRM to
malfunction. Finally, we combine our methodology with a certification
technique, thus providing (i) theoretical guarantees on the performance of an
FRM, and (ii) a formal description of how an FRM may model the notion of face
identity.
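As a concrete illustration of the attack the abstract describes, the sketch below searches for a magnitude-constrained perturbation along a semantic direction in StyleGAN's latent space until the FRM no longer matches the original identity. `G` (generator), `frm` (embedding-based FRM), `w` (latent code), `d` (unit-norm semantic direction), and the match threshold are all assumed stand-ins for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def semantic_attack(G, frm, w, d, t_max=3.0, steps=100, threshold=0.4):
    """Search a magnitude-constrained step t*d in latent space that makes
    the FRM stop matching the original face (all components assumed)."""
    with torch.no_grad():
        e_orig = frm(G(w))  # reference identity embedding

    t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([t], lr=0.05)
    for _ in range(steps):
        # match score between the perturbed face and the original identity
        sim = F.cosine_similarity(frm(G(w + t * d)), e_orig, dim=-1).mean()
        opt.zero_grad()
        sim.backward()  # minimize the match score
        opt.step()
        with torch.no_grad():
            t.clamp_(-t_max, t_max)  # magnitude constraint: keep the edit identity-preserving
        if sim.item() < threshold:   # FRM no longer recognizes the same identity
            break
    return (w + t * d).detach(), float(t)
```

Collecting the magnitudes at which such searches first succeed, over many faces and directions, yields the statistical characterization of semantic robustness the abstract refers to; the certification step then bounds the FRM's behavior over the whole constrained perturbation set rather than only the sampled points.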
Related papers
- Improving the Robustness of Representation Misdirection for Large Language Model Unlearning [6.745464488913924]
Representation Misdirection (RM) and variants are established large language model (LLM) unlearning methods with state-of-the-art performance.
We show that RM methods inherently reduce models' robustness, causing them to misbehave even when a single non-adversarial forget-token is in the retain-query.
We propose Random Noise Augmentation -- a model- and method-agnostic approach with theoretical guarantees for improving the robustness of RM methods.
arXiv Detail & Related papers (2025-01-31T15:12:20Z)
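The RNA summary above gives no implementation detail; the following is a speculative sketch of one natural reading -- Gaussian noise injected into the representations that the RM loss steers -- with all names (`hidden`, `rand_dir`, `sigma`) hypothetical.

```python
import torch

def rm_loss_with_rna(hidden, rand_dir, sigma=0.1):
    """Representation-misdirection-style loss with Random Noise Augmentation
    (one plausible reading, not the paper's code): perturb forget-set hidden
    states with Gaussian noise before steering them toward a direction, so
    the learned misdirection does not hinge on an exact, brittle activation.

    hidden:   (batch, dim) hidden states on forget-set tokens
    rand_dir: (dim,) direction the representations are pushed toward
    """
    noisy = hidden + sigma * torch.randn_like(hidden)  # the augmentation
    target = rand_dir / rand_dir.norm()
    return (noisy - target).pow(2).sum(dim=-1).mean()
```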
- ErasableMask: A Robust and Erasable Privacy Protection Scheme against Black-box Face Recognition Models [14.144010156851273]
We propose ErasableMask, a robust and erasable privacy protection scheme against black-box FR models.
Specifically, ErasableMask introduces a novel meta-auxiliary attack, which boosts black-box transferability.
It also offers a perturbation erasure mechanism that supports erasing the semantic perturbations from a protected face without degrading image quality.
arXiv Detail & Related papers (2024-12-22T14:30:26Z)
- Local Features Meet Stochastic Anonymization: Revolutionizing Privacy-Preserving Face Recognition for Black-Box Models [54.88064975480573]
The task of privacy-preserving face recognition (PPFR) currently faces two major unsolved challenges.
By disrupting global features while enhancing local features, we achieve effective recognition even in black-box environments.
Our method achieves an average recognition accuracy of 94.21% on black-box models, outperforming existing methods in both privacy protection and anti-reconstruction capabilities.
arXiv Detail & Related papers (2024-12-11T10:49:15Z)
- Improving Adversarial Robustness via Feature Pattern Consistency Constraint [42.50500608175905]
Convolutional Neural Networks (CNNs) are well-known for their vulnerability to adversarial attacks, posing significant security concerns.
Most existing methods either focus on learning from adversarial perturbations, leading to overfitting to the adversarial examples, or aim to eliminate such perturbations during inference.
We introduce a novel and effective Feature Pattern Consistency Constraint (FPCC) method to reinforce the latent feature's capacity to maintain the correct feature pattern.
arXiv Detail & Related papers (2024-06-13T05:38:30Z)
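The FPCC abstract above names the constraint but not its form; below is a hedged sketch of one plausible feature-pattern consistency term -- matching the distribution of activations between clean and perturbed features -- offered purely as an illustration, not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def fpcc_loss(feat_clean, feat_pert):
    """One plausible feature-pattern consistency term: encourage the
    perturbed sample's latent features to keep the clean sample's
    activation pattern (which units fire, and in what proportion)."""
    pattern_clean = F.softmax(feat_clean.detach(), dim=-1)  # clean pattern, fixed
    return F.kl_div(F.log_softmax(feat_pert, dim=-1), pattern_clean,
                    reduction="batchmean")
```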
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
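As a rough illustration of attacking in latent rather than raw pixel space, as Adv-Diffusion does, here is a hedged sketch that optimizes a bounded perturbation on a generative model's latent so the decoded face matches a target identity for the FRM. `encode`, `decode`, and `frm` are assumed callables; the actual paper uses an identity-sensitive conditioned diffusion model, not this generic autoencoder stand-in.

```python
import torch
import torch.nn.functional as F

def latent_identity_attack(encode, decode, frm, x_src, x_tgt,
                           steps=50, lr=0.01, eps=0.05):
    """Optimize a small latent perturbation so the decoded face is
    matched to the target identity (conceptual sketch only)."""
    z = encode(x_src).detach()    # latent of the source face
    e_tgt = frm(x_tgt).detach()   # target identity embedding
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = decode(z + delta)  # perturb in latent, not pixel, space
        loss = 1 - F.cosine_similarity(frm(x_adv), e_tgt, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the semantic edit small
    return decode(z + delta).detach()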
- HFORD: High-Fidelity and Occlusion-Robust De-identification for Face Privacy Protection [60.63915939982923]
Face de-identification is a practical way to solve the identity protection problem.
Existing face de-identification methods suffer from several problems.
We present a High-Fidelity and Occlusion-Robust De-identification (HFORD) method to deal with these issues.
arXiv Detail & Related papers (2023-11-15T08:59:02Z)
- Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System [20.979192130022334]
We propose a novel solution referred to as low-frequency adversarial perturbation (LFAP).
This method conditions the source model to leverage low-frequency characteristics through adversarial training.
We also introduce an improved low-mid frequency adversarial perturbation (LMFAP) that incorporates mid-frequency components for an additive benefit.
arXiv Detail & Related papers (2022-06-19T14:15:49Z)
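The core idea behind the LFAP entry above -- concentrating adversarial energy in low frequencies -- can be illustrated with a DCT projection. This sketch is an assumption-laden illustration; the `keep` cutoff and the per-channel 2D DCT are choices of this example, not the paper's.

```python
import numpy as np
from scipy.fft import dctn, idctn

def low_freq_project(delta, keep=8):
    """Project a (H, W, C) perturbation onto its lowest DCT frequencies,
    so the adversarial energy sits in low-frequency image content."""
    out = np.zeros_like(delta)
    for c in range(delta.shape[2]):
        coeffs = dctn(delta[:, :, c], norm="ortho")
        mask = np.zeros_like(coeffs)
        mask[:keep, :keep] = 1.0  # keep only the low-frequency coefficients
        out[:, :, c] = idctn(coeffs * mask, norm="ortho")
    return out
```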
- Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the termination Markov decision process (TerMDP) and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Domain Private and Agnostic Feature for Modality Adaptive Face Recognition [10.497190559654245]
This paper proposes a Feature Aggregation Network (FAN), which includes a disentangled representation module (DRM), a feature fusion module (FFM), and a metric penalty learning session.
First, in the DRM, two networks, i.e. a domain-private network and a domain-agnostic network, are specially designed for learning modality features and identity features, respectively.
Second, in FFM, the identity features are fused with domain features to achieve cross-modal bi-directional identity feature transformation.
Third, considering that a distribution imbalance between easy and hard pairs exists in cross-modal datasets, the identity preserving guided metric learning with adaptive
arXiv Detail & Related papers (2020-08-10T00:59:42Z)
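The FAN entry above describes an architecture in three parts; the skeleton below mirrors that structure with placeholder layer sizes. The real module internals are not specified in the summary, so every layer here is illustrative.

```python
import torch
import torch.nn as nn

class FAN(nn.Module):
    """Skeleton of the described Feature Aggregation Network: a DRM with a
    domain-private branch (modality features) and a domain-agnostic branch
    (identity features), plus an FFM that fuses them for cross-modal
    identity transformation. All sizes are placeholders."""
    def __init__(self, in_dim=512, feat_dim=256):
        super().__init__()
        self.domain_private = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.domain_agnostic = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.ffm = nn.Linear(2 * feat_dim, feat_dim)  # feature fusion module

    def forward(self, x):
        dom = self.domain_private(x)   # modality (domain) features
        idt = self.domain_agnostic(x)  # identity features
        fused = self.ffm(torch.cat([dom, idt], dim=-1))
        return idt, dom, fused
```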
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.