FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face Obfuscation Methods
- URL: http://arxiv.org/abs/2503.08731v1
- Date: Tue, 11 Mar 2025 01:49:43 GMT
- Title: FairDeFace: Evaluating the Fairness and Adversarial Robustness of Face Obfuscation Methods
- Authors: Seyyed Mohammad Sadegh Moosavi Khorzooghi, Poojitha Thota, Mohit Singhal, Abolfazl Asudeh, Gautam Das, Shirin Nilizadeh
- Abstract summary: This paper introduces a comprehensive framework, named FairDeFace, designed to assess the adversarial robustness and fairness of face obfuscation methods. The framework introduces a set of modules encompassing data benchmarks, face detection and recognition algorithms, adversarial models, utility detection models, and fairness metrics. In its current implementation, FairDeFace incorporates six attacks and several privacy, utility, and fairness metrics.
- Score: 11.0796763268436
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The lack of a common platform and benchmark datasets for evaluating face obfuscation methods has been a challenge, with every method being tested using arbitrary experiments, datasets, and metrics. While prior work has demonstrated that face recognition systems exhibit bias against some demographic groups, there is a substantial gap in our understanding of the fairness of face obfuscation methods. Fair face obfuscation methods can ensure equitable protection across diverse demographic groups, especially since they can be used to preserve the privacy of vulnerable populations. To address these gaps, this paper introduces a comprehensive framework, named FairDeFace, designed to assess the adversarial robustness and fairness of face obfuscation methods. The framework comprises a set of modules encompassing data benchmarks, face detection and recognition algorithms, adversarial models, utility detection models, and fairness metrics. FairDeFace serves as a versatile platform into which any face obfuscation method can be integrated, allowing for rigorous testing and comparison against other state-of-the-art methods. In its current implementation, FairDeFace incorporates six attacks and several privacy, utility, and fairness metrics. Using FairDeFace, and by conducting more than 500 experiments, we evaluated and compared the adversarial robustness of seven face obfuscation methods. This extensive analysis yielded findings on both the degree of robustness of existing methods and their biases against some gender or racial groups. FairDeFace also visualizes the image regions that obfuscation and verification attacks focus on, showing not only which areas are changed most during obfuscation for some demographics, but also why obfuscation fails, by comparing the focus areas of obfuscation and verification.
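The modular, plug-in design described in the abstract is easy to picture as code. The sketch below is a hypothetical harness, not the authors' implementation: every name in it (`evaluate`, `fairness_gap`, the callback types) is invented here, and a real attack would be a trained re-identification model rather than the stand-ins shown.

```python
# Hypothetical harness in the spirit of FairDeFace's modular design.
# Nothing here is the authors' code; all names are invented for illustration.
from typing import Callable, Dict, List
import numpy as np

ObfuscationFn = Callable[[np.ndarray], np.ndarray]   # face image -> obfuscated image
AttackFn = Callable[[np.ndarray, np.ndarray], bool]  # (original, obfuscated) -> re-identified?

def evaluate(obfuscate: ObfuscationFn,
             attacks: Dict[str, AttackFn],
             faces: List[np.ndarray],
             groups: List[str]) -> Dict[str, Dict[str, float]]:
    """Run every attack on every obfuscated face; report the
    re-identification rate per demographic group (lower = more private)."""
    results: Dict[str, Dict[str, float]] = {}
    for name, attack in attacks.items():
        per_group: Dict[str, List[bool]] = {}
        for face, group in zip(faces, groups):
            per_group.setdefault(group, []).append(attack(face, obfuscate(face)))
        results[name] = {g: float(np.mean(hits)) for g, hits in per_group.items()}
    return results

def fairness_gap(rates: Dict[str, float]) -> float:
    """Demographic-parity-style gap: worst- minus best-protected group rate."""
    return max(rates.values()) - min(rates.values())

# Toy usage with stand-in components (a real attack is a trained model):
faces = [np.random.rand(112, 112) for _ in range(4)]
rates = evaluate(lambda x: x * 0.0, {"toy": lambda o, b: False},
                 faces, ["a", "a", "b", "b"])
print(rates["toy"], fairness_gap(rates["toy"]))
```

The per-group re-identification rates make the abstract's fairness comparison concrete: a large gap between groups means unequal protection.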
Related papers
- Local Features Meet Stochastic Anonymization: Revolutionizing Privacy-Preserving Face Recognition for Black-Box Models [54.88064975480573]
The task of privacy-preserving face recognition (PPFR) currently faces two major unsolved challenges. By disrupting global features while enhancing local features, we achieve effective recognition even in black-box environments. Our method achieves an average recognition accuracy of 94.21% on black-box models, outperforming existing methods in both privacy protection and anti-reconstruction capabilities.
arXiv Detail & Related papers (2024-12-11T10:49:15Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
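As a rough illustration of the masking idea in this summary (the ASMA models themselves are generative and far more involved), a perturbation can be constrained to a semantic region with a binary mask; everything below is a toy stand-in.

```python
# Toy illustration of mask-constrained perturbations (not the ASMA model):
# the perturbation is zeroed outside a semantic region before being added.
import numpy as np

def masked_perturb(img: np.ndarray, perturbation: np.ndarray,
                   mask: np.ndarray, eps: float = 8 / 255) -> np.ndarray:
    """Apply an L-infinity-bounded perturbation only inside the masked region."""
    delta = np.clip(perturbation, -eps, eps) * mask  # zero outside the mask
    return np.clip(img + delta, 0.0, 1.0)

img = np.random.rand(112, 112, 3)
mask = np.zeros((112, 112, 1))
mask[30:80, 30:80] = 1.0                             # e.g., an eye/nose region
adv = masked_perturb(img, np.random.uniform(-1, 1, img.shape), mask)
assert np.allclose(adv[0, 0], img[0, 0])             # untouched outside the mask
```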
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies produce vivid fake faces, raising public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Watch Out for the Confusing Faces: Detecting Face Swapping with the Probability Distribution of Face Identification Models [37.49012763328351]
We propose a novel face swapping detection approach based on face identification probability distributions.
IdP_FSD is specially designed for detecting swapped faces whose identities belong to a finite set.
IdP_FSD exploits a common property of face swapping: the identity of a swapped face combines the identities of the two faces involved in the swap.
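The finite-identity-set assumption makes this idea checkable in a few lines. The sketch below is not the IdP_FSD implementation; the threshold and the softmax vectors are illustrative only.

```python
# Hedged sketch of the two-peak intuition: a swapped face blends two
# identities, so an identification model's softmax over a known, finite
# identity set should put substantial mass on two classes instead of one.
import numpy as np

def looks_swapped(identity_probs: np.ndarray, second_peak_thresh: float = 0.2) -> bool:
    """Flag a face whose identity distribution has two strong peaks."""
    top2 = np.sort(identity_probs)[-2:]               # two largest probabilities
    return bool(top2[0] >= second_peak_thresh)        # second peak unusually high

# Example: a genuine face concentrates on one identity, a swap on two.
genuine = np.array([0.92, 0.03, 0.03, 0.02])
swapped = np.array([0.48, 0.41, 0.06, 0.05])
print(looks_swapped(genuine))  # False
print(looks_swapped(swapped))  # True
```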
arXiv Detail & Related papers (2023-03-23T09:33:10Z)
- Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP).
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z)
- FedForgery: Generalized Face Forgery Detection with Residual Federated Learning [87.746829550726]
Existing face forgery detection methods directly train on publicly shared or centralized data.
The paper proposes a novel generalized residual federated learning method for face forgery detection (FedForgery).
Experiments conducted on publicly available face forgery detection datasets prove the superior performance of the proposed FedForgery.
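For readers unfamiliar with the federated setup this entry relies on, here is a bare-bones federated-averaging loop on synthetic data; it is a generic FedAvg sketch, not FedForgery's residual variant.

```python
# Minimal FedAvg sketch: clients train on local forgery data and only model
# weights, never images, are aggregated at the server. Data is synthetic.
import numpy as np

def local_step(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
               lr: float = 0.1) -> np.ndarray:
    """One gradient step of logistic regression on a client's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    return weights - lr * X.T @ (preds - y) / len(y)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 16)), rng.integers(0, 2, 50)) for _ in range(4)]
global_w = np.zeros(16)
for _ in range(10):                                   # communication rounds
    updates = [local_step(global_w.copy(), X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)               # server averages weights
```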
arXiv Detail & Related papers (2022-10-18T03:32:18Z)
- DuetFace: Collaborative Privacy-Preserving Face Recognition via Channel Splitting in the Frequency Domain [23.4606547767188]
DuetFace is a privacy-preserving face recognition method that employs collaborative inference in the frequency domain.
The proposed method achieves recognition accuracy and computational cost comparable to the unprotected ArcFace and outperforms state-of-the-art privacy-preserving methods.
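Channel splitting in the frequency domain can be sketched with a block DCT: each of the 64 coefficient positions of an 8x8 block becomes one channel, and the client can withhold the low-frequency channels that carry most of the visual content. The code below is a rough illustration under those assumptions, not DuetFace's pipeline.

```python
# Rough sketch of frequency-domain channel splitting (not DuetFace's code):
# block-DCT an image, keep the visually revealing low-frequency channels on
# the client, and hand the remaining channels to the server-side model.
import numpy as np
from scipy.fft import dctn

def block_dct_channels(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Rearrange an HxW grayscale image into block*block DCT channels,
    one channel per frequency, each of shape (H//block, W//block)."""
    h, w = (d - d % block for d in img.shape)
    img = img[:h, :w].astype(np.float64)
    blocks = img.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    coeffs = dctn(blocks, axes=(2, 3), norm="ortho")   # per-block 2D DCT
    return coeffs.transpose(2, 3, 0, 1).reshape(block * block, h // block, w // block)

channels = block_dct_channels(np.random.rand(112, 112))
low_freq = channels[:3]    # kept private on the client (most visual content)
served = channels[3:]      # channels shared with the server-side model
print(low_freq.shape, served.shape)  # (3, 14, 14) (61, 14, 14)
```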
arXiv Detail & Related papers (2022-07-15T08:35:44Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model for face reconstruction; adversarial examples crafted on the substitute transfer directly to inaccessible black-box DeepFake models.
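The transfer step is the standard substitute-model pattern: craft an adversarial example with gradients from a model you control, then feed it to the black box unchanged. The FGSM sketch below uses a stand-in linear substitute; the paper's substitute is a face-reconstruction network.

```python
# Generic substitute-model transfer sketch (FGSM); the linear "substitute"
# below is a stand-in, not the paper's face-reconstruction network.
import numpy as np

def fgsm(x: np.ndarray, grad_fn, eps: float = 0.03) -> np.ndarray:
    """One FGSM step: move x along the sign of the substitute's gradient."""
    return np.clip(x + eps * np.sign(grad_fn(x)), 0.0, 1.0)

w = np.random.default_rng(1).normal(size=112 * 112)   # stand-in: loss = w . x
x = np.random.rand(112 * 112)
x_adv = fgsm(x, grad_fn=lambda _: w)
# x_adv would then be fed to the inaccessible black-box model unchanged.
```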
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Fairness Properties of Face Recognition and Obfuscation Systems [19.195705814819306]
Face obfuscation systems generate imperceptible perturbations that, when added to an image, cause the facial recognition system to misidentify the user.
This dependence of face obfuscation on metric embedding networks, which are known to be unfair in the context of facial recognition, surfaces the question of demographic fairness.
We find that metric embedding networks are demographically aware; they cluster faces in the embedding space based on their demographic attributes.
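A claim like "demographically aware embeddings" can be probed on any embedding network by scoring how well embeddings cluster under demographic labels. The sketch below does this with a silhouette score on synthetic embeddings; swap in real embeddings and labels to run the actual test.

```python
# Probe for demographic clustering in an embedding space: if faces cluster
# by a demographic attribute, the silhouette score over those labels will be
# clearly positive. Embeddings here are synthetic, for illustration only.
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic "embeddings": two demographic groups with offset centroids.
group_a = rng.normal(loc=0.0, scale=1.0, size=(200, 128))
group_b = rng.normal(loc=1.5, scale=1.0, size=(200, 128))
embeddings = np.vstack([group_a, group_b])
labels = np.array([0] * 200 + [1] * 200)

# Near 0 => demographically blind; clearly positive => demographically aware.
print(f"silhouette by demographic label: {silhouette_score(embeddings, labels):.3f}")
```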
arXiv Detail & Related papers (2021-08-05T16:18:15Z)
- Robust Face-Swap Detection Based on 3D Facial Shape Information [59.32489266682952]
Face-swap images and videos are increasingly used by malicious attackers to discredit key figures.
Previous detection techniques based on pixel-level artifacts focus on unclear low-level patterns while ignoring available semantic clues.
We propose a biometric-information-based method that fully exploits appearance and shape features for face-swap detection of key figures.
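One plausible, speculative reading of combining appearance and shape features is a consistency check: for a known key figure, a swapped face tends to match on one cue but not the other. The sketch below encodes that check with placeholder embeddings and an illustrative threshold; it is not the paper's pipeline.

```python
# Speculative consistency check between appearance and 3D-shape identity
# cues for a known key figure; all embeddings and the threshold are
# placeholders, not the paper's actual features.
import numpy as np

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_swap(appearance: np.ndarray, shape: np.ndarray,
            ref_appearance: np.ndarray, ref_shape: np.ndarray,
            t: float = 0.5) -> bool:
    """Flag a face whose appearance and shape identity cues disagree."""
    return (cos(appearance, ref_appearance) > t) != (cos(shape, ref_shape) > t)
```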
arXiv Detail & Related papers (2021-04-28T09:35:48Z)