RobFR: Benchmarking Adversarial Robustness on Face Recognition
- URL: http://arxiv.org/abs/2007.04118v2
- Date: Wed, 29 Sep 2021 08:01:13 GMT
- Title: RobFR: Benchmarking Adversarial Robustness on Face Recognition
- Authors: Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, Jun Zhu
- Abstract summary: Face recognition (FR) has recently made substantial progress and achieved high accuracy on standard benchmarks.
To facilitate a better understanding of the adversarial vulnerability of FR, we develop an adversarial robustness evaluation library on FR named RobFR.
RobFR involves 15 popular naturally trained FR models, 9 models with representative defense mechanisms, and 2 commercial FR API services.
- Score: 41.296221656624716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition (FR) has recently made substantial progress and achieved
high accuracy on standard benchmarks. However, security concerns have been raised
in numerous FR applications because deep CNNs are unusually vulnerable to
adversarial examples, and a comprehensive robustness evaluation is still lacking
before an FR model is deployed in safety-critical scenarios. To
facilitate a better understanding of the adversarial vulnerability of FR, we
develop an adversarial robustness evaluation library on FR named
\textbf{RobFR}, which serves as a reference for evaluating the robustness of
downstream tasks. Specifically, RobFR involves 15 popular naturally trained FR
models, 9 models with representative defense mechanisms and 2 commercial FR API
services, to perform the robustness evaluation by using various adversarial
attacks as an important surrogate. The evaluations are conducted under diverse
adversarial settings in terms of dodging and impersonation, $\ell_2$ and
$\ell_\infty$, as well as white-box and black-box attacks. We further propose a
landmark-guided cutout (LGC) attack method to improve the transferability of
adversarial examples for black-box attacks by considering the special
characteristics of FR. Based on large-scale evaluations, the commercial FR API
services fail to exhibit acceptable performance on robustness evaluation, and
we also draw several important conclusions for understanding the adversarial
robustness of FR models and providing insights for the design of robust FR
models. RobFR is open-source and maintains all extendable modules, i.e.,
\emph{Datasets}, \emph{FR Models}, \emph{Attacks\&Defenses}, and
\emph{Evaluations} at
\url{https://github.com/ShawnXYang/Face-Robustness-Benchmark}, which will be
continuously updated to promote future research on robust FR.
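For a concrete picture of the evaluation protocol above, the following is a minimal, illustrative sketch rather than RobFR's actual implementation; the model, threshold, and hyperparameters are placeholders. It shows how a dodging or impersonation attack under an $\ell_\infty$ bound can be run against a face embedding model: dodging perturbs an image so that its cosine similarity to an embedding of the same identity drops below the verification threshold, while impersonation pushes the similarity to a victim identity above it.

```python
import torch
import torch.nn.functional as F

def fr_pgd_attack(model, x, target_emb, eps=8 / 255, alpha=2 / 255, steps=20,
                  impersonation=False):
    """PGD-style l_inf attack on a face embedding model (illustrative sketch).

    model       -- maps face images in [0, 1] to identity embeddings
    x           -- clean face image batch
    target_emb  -- L2-normalized embedding of the same identity (dodging)
                   or of the victim identity (impersonation)
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        emb = F.normalize(model(x_adv), dim=-1)
        cos = (emb * target_emb).sum(dim=-1).mean()
        # Dodging minimizes the similarity; impersonation maximizes it.
        loss = -cos if impersonation else cos
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # descend on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the l_inf ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv.detach()

# Hypothetical success check for dodging: the similarity to the genuine
# identity falls below the model's verification threshold.
# x_adv = fr_pgd_attack(model, x, target_emb)
# success = (F.normalize(model(x_adv), dim=-1) * target_emb).sum(-1) < threshold
```

The landmark-guided cutout (LGC) attack proposed in the paper builds on this kind of iterative attack, roughly by using facial landmarks to guide cutout-style occlusions during the iterations so that the resulting adversarial examples transfer better to black-box models; the sketch above omits that step.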
Related papers
- MISLEADER: Defending against Model Extraction with Ensembles of Distilled Models [56.09354775405601]
Model extraction attacks aim to replicate the functionality of a black-box model through query access.
Most existing defenses presume that attacker queries contain out-of-distribution (OOD) samples, enabling them to detect and disrupt suspicious inputs.
We propose MISLEADER, a novel defense strategy that does not rely on OOD assumptions.
arXiv Detail & Related papers (2025-06-03T01:37:09Z)
- Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective [65.65471972217814]
Federated recommendation (FR), built on federated learning (FL), keeps personal data on the local client and updates the model collaboratively.
FR has a unique sparse aggregation mechanism, where the embedding of each item is updated by only a subset of clients, rather than by all clients as in the dense aggregation of general FL.
In this paper, we reformulate the Byzantine robustness under sparse aggregation by defining the aggregation for a single item as the smallest execution unit.
We propose a family of effective attack strategies, named Spattack, which exploit the vulnerability in sparse aggregation and are categorized along the adversary's knowledge and capability.
arXiv Detail & Related papers (2025-01-06T15:19:26Z)
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z)
- Certifiably Byzantine-Robust Federated Conformal Prediction [49.23374238798428]
We introduce Rob-FCP, a novel framework that performs robust federated conformal prediction while effectively countering malicious clients.
We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks.
arXiv Detail & Related papers (2024-06-04T04:43:30Z)
- Adversarial Attacks on Both Face Recognition and Face Anti-spoofing Models [14.821326139376266]
We introduce a novel attack setting that targets both Face Recognition (FR) and Face Anti-Spoofing (FAS) models simultaneously.
Specifically, we propose a new attack method, termed Reference-free Multi-level Alignment (RMA), designed to improve the capacity of black-box attacks on both FR and FAS models.
arXiv Detail & Related papers (2024-05-27T08:30:29Z)
- FLIRT: Feedback Loop In-context Red Teaming [79.63896510559357]
We propose an automatic red teaming framework that evaluates a given black-box model and exposes its vulnerabilities.
Our framework uses in-context learning in a feedback loop to red team models and trigger them into unsafe content generation.
arXiv Detail & Related papers (2023-08-08T14:03:08Z)
- SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection [29.491675102478798]
We introduce SureFED, a novel framework for robust federated learning.
SureFED establishes trust using the local information of benign clients.
We theoretically prove the robustness of our algorithm against data and model poisoning attacks.
arXiv Detail & Related papers (2023-08-04T23:51:05Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework [91.94389491920309]
Textual adversarial attacks can discover models' weaknesses by adding semantics-preserving but misleading perturbations to the inputs.
The existing practice of robustness evaluation may suffer from incomplete evaluation, impractical evaluation protocols, and invalid adversarial samples.
We set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to exploit the advantages of adversarial attacks.
arXiv Detail & Related papers (2023-05-29T14:55:20Z)
- FROB: Few-shot ROBust Model for Classification and Out-of-Distribution Detection [0.0]
Few-shot ROBust (FROB) is a model for classification and few-shot OoD detection.
We propose a self-supervised learning few-shot confidence boundary methodology.
FROB achieves competitive performance and outperforms benchmarks in terms of robustness to the few-shot sample population and variability.
arXiv Detail & Related papers (2021-11-30T15:20:44Z)
- Enhanced countering adversarial attacks via input denoising and feature restoring [15.787838084050957]
Deep neural networks (DNNs) are vulnerable to adversarial examples/samples (AEs) with imperceptible perturbations in clean/original samples.
This paper presents IDFR, an enhanced method for countering adversarial attacks via Input Denoising and Feature Restoring.
The proposed IDFR is made up of an enhanced input denoiser (ID) and a hidden lossy feature restorer (FR) based on convex hull optimization.
arXiv Detail & Related papers (2021-11-19T07:34:09Z)
- RobustBench: a standardized adversarial robustness benchmark [84.50044645539305]
A key challenge in benchmarking robustness is that its evaluation is often error-prone, leading to robustness overestimation.
We evaluate adversarial robustness with AutoAttack, an ensemble of white- and black-box attacks.
We analyze the impact of robustness on the performance on distribution shifts, calibration, out-of-distribution detection, fairness, privacy leakage, smoothness, and transferability.
arXiv Detail & Related papers (2020-10-19T17:06:18Z)
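For comparison with RobFR's protocol, RobustBench's standardized evaluation is driven by the AutoAttack ensemble mentioned in the entry above. The snippet below is a minimal usage sketch, assuming the open-source autoattack package from https://github.com/fra31/auto-attack is installed; the toy classifier, data, and $\ell_\infty$ budget are placeholders rather than the benchmark's actual models.

```python
import torch
import torch.nn as nn
from autoattack import AutoAttack  # https://github.com/fra31/auto-attack

# Placeholder classifier and data; in practice these come from the benchmark's model zoo.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
x_test = torch.rand(64, 3, 32, 32)            # images in [0, 1]
y_test = torch.randint(0, 10, (64,))

# Standard AutoAttack: an ensemble of white-box and black-box attacks
# run under a fixed l_inf budget.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=64)

# Robust accuracy = accuracy on the adversarially perturbed test batch.
with torch.no_grad():
    robust_acc = (model(x_adv).argmax(dim=1) == y_test).float().mean().item()
print(f"robust accuracy: {robust_acc:.3f}")
```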
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.