RobFR: Benchmarking Adversarial Robustness on Face Recognition
- URL: http://arxiv.org/abs/2007.04118v2
- Date: Wed, 29 Sep 2021 08:01:13 GMT
- Title: RobFR: Benchmarking Adversarial Robustness on Face Recognition
- Authors: Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, Jun Zhu
- Abstract summary: Face recognition (FR) has recently made substantial progress and achieved high accuracy on standard benchmarks.
To facilitate a better understanding of the adversarial vulnerability of FR, we develop an adversarial robustness evaluation library for FR named RobFR.
RobFR involves 15 popular naturally trained FR models, 9 models with representative defense mechanisms and 2 commercial FR API services.
- Score: 41.296221656624716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition (FR) has recently made substantial progress and achieved
high accuracy on standard benchmarks. However, security concerns remain in
many FR applications because deep CNNs are unusually vulnerable to
adversarial examples, and a comprehensive robustness evaluation is still
lacking before an FR model is deployed in safety-critical scenarios. To
facilitate a better understanding of the adversarial vulnerability on FR, we
develop an adversarial robustness evaluation library on FR named
\textbf{RobFR}, which serves as a reference for evaluating the robustness of
downstream tasks. Specifically, RobFR involves 15 popular naturally trained FR
models, 9 models with representative defense mechanisms, and 2 commercial FR
API services, and performs robustness evaluation using various adversarial
attacks as an important surrogate. The evaluations are conducted under diverse
adversarial settings in terms of dodging and impersonation, $\ell_2$ and
$\ell_\infty$, as well as white-box and black-box attacks. We further propose a
landmark-guided cutout (LGC) attack method to improve the transferability of
adversarial examples for black-box attacks by considering the special
characteristics of FR. Based on large-scale evaluations, the commercial FR API
services fail to exhibit acceptable robustness, and
we also draw several important conclusions for understanding the adversarial
robustness of FR models and providing insights for the design of robust FR
models. RobFR is open-source and maintains all extendable modules, i.e.,
\emph{Datasets}, \emph{FR Models}, \emph{Attacks\&Defenses}, and
\emph{Evaluations} at
\url{https://github.com/ShawnXYang/Face-Robustness-Benchmark}, which will be
continuously updated to promote future research on robust FR.
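The dodging and impersonation settings above can be illustrated with a minimal sketch: an $\ell_\infty$-bounded PGD attack that pushes a probe image's embedding away from (dodging) or toward (impersonation) an enrolled embedding. This is a toy numpy illustration under stated assumptions, not RobFR's implementation: the linear `embed` model, the finite-difference gradient, and all names and hyperparameters here are stand-ins for a real FR network and its analytic gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # cosine similarity, the usual FR matching score
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pgd_dodging(x, enrolled_emb, embed, eps=0.05, step=0.01, iters=30):
    # L_inf-bounded PGD that lowers the cosine similarity between the probe's
    # embedding and the enrolled embedding (dodging). Flipping the sign of the
    # update step yields an impersonation attack instead.
    x_adv = x.copy()
    h = 1e-4
    for _ in range(iters):
        # finite-difference gradient of the similarity (toy scale only)
        f0 = cosine(embed(x_adv), enrolled_emb)
        g = np.zeros_like(x_adv)
        for i in range(x_adv.size):
            xp = x_adv.copy()
            xp.flat[i] += h
            g.flat[i] = (cosine(embed(xp), enrolled_emb) - f0) / h
        x_adv = x_adv - step * np.sign(g)          # descend the similarity
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project onto the L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay in a valid pixel range
    return x_adv

# toy "face": 8 pixels, linear 4-D embedding standing in for a real FR network
W = rng.normal(size=(4, 8))
embed = lambda v: W @ v
x = rng.uniform(0.2, 0.8, size=8)
enrolled = embed(x)  # embedding of the enrolled identity

x_adv = pgd_dodging(x, enrolled, embed)
```

The same loop with `+ step * np.sign(g)` and a *target* identity's embedding gives the impersonation variant; RobFR's LGC attack additionally restricts and guides perturbations using facial landmarks, which this sketch omits.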
Related papers
- Certifiably Byzantine-Robust Federated Conformal Prediction [49.23374238798428]
We introduce Rob-FCP, a novel framework that executes robust federated conformal prediction, effectively countering malicious clients.
We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks.
arXiv Detail & Related papers (2024-06-04T04:43:30Z)
- The Effectiveness of Random Forgetting for Robust Generalization [21.163070161951868]
We introduce a novel learning paradigm called "Forget to Mitigate Overfitting" (FOMO).
FOMO alternates between the forgetting phase, which randomly forgets a subset of weights, and the relearning phase, which emphasizes learning generalizable features.
Our experiments show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy.
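The forget/relearn alternation described above can be sketched on a toy linear model: periodically re-initialize a random subset of weights, then let ordinary gradient descent recover them. This is a minimal numpy illustration, not the paper's method: the linear least-squares task, the schedule, and all hyperparameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_with_forgetting(X, y, steps=250, lr=0.5, forget_every=50, forget_frac=0.3):
    # FOMO-style schedule on a toy linear model: every `forget_every` steps a
    # random subset of weights is re-initialized (the "forgetting" phase),
    # after which plain gradient descent resumes (the "relearning" phase).
    w = rng.normal(scale=0.1, size=X.shape[1])
    for t in range(steps):
        grad = X.T @ (X @ w - y) / len(y)        # mean-squared-error gradient
        w -= lr * grad
        if (t + 1) % forget_every == 0 and t + 1 < steps:
            mask = rng.random(w.size) < forget_frac
            w[mask] = rng.normal(scale=0.1, size=int(mask.sum()))  # forget a subset
    return w

# synthetic regression task the model can relearn after each forgetting phase
X = rng.normal(size=(64, 6))
w_true = rng.normal(size=6)
y = X @ w_true

w = train_with_forgetting(X, y)
loss = float(np.mean((X @ w - y) ** 2))
```

Despite repeatedly destroying a third of the weights, the relearning phases recover a low-loss solution, which is the intuition the paper exploits against robust overfitting.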
arXiv Detail & Related papers (2024-02-18T23:14:40Z)
- FLIRT: Feedback Loop In-context Red Teaming [71.38594755628581]
We propose an automatic red teaming framework that evaluates a given model and exposes its vulnerabilities.
Our framework uses in-context learning in a feedback loop to red team models and trigger them into unsafe content generation.
arXiv Detail & Related papers (2023-08-08T14:03:08Z)
- SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection [29.491675102478798]
We introduce SureFED, a novel framework for robust federated learning.
SureFED establishes trust using the local information of benign clients.
We theoretically prove the robustness of our algorithm against data and model poisoning attacks.
arXiv Detail & Related papers (2023-08-04T23:51:05Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework [91.94389491920309]
Textual adversarial attacks can discover models' weaknesses by adding semantic-preserved but misleading perturbations to the inputs.
The existing practice of robustness evaluation may exhibit issues of incomprehensive evaluation, impractical evaluation protocol, and invalid adversarial samples.
We set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to exploit the advantages of adversarial attacks.
arXiv Detail & Related papers (2023-05-29T14:55:20Z)
- FROB: Few-shot ROBust Model for Classification and Out-of-Distribution Detection [0.0]
Few-shot ROBust (FROB) is a model for classification and few-shot OoD detection.
We propose a self-supervised learning few-shot confidence boundary methodology.
FROB achieves competitive performance and outperforms benchmarks in terms of robustness to the few-shot sample population and variability.
arXiv Detail & Related papers (2021-11-30T15:20:44Z)
- Enhanced countering adversarial attacks via input denoising and feature restoring [15.787838084050957]
Deep neural networks (DNNs) are vulnerable to adversarial examples/samples (AEs) with imperceptible perturbations in clean/original samples.
This paper presents IDFR, an enhanced method for countering adversarial attacks via Input Denoising and Feature Restoring.
The proposed IDFR consists of an enhanced input denoiser (ID) and a hidden lossy-feature restorer (FR) based on convex hull optimization.
arXiv Detail & Related papers (2021-11-19T07:34:09Z)
- RobustBench: a standardized adversarial robustness benchmark [84.50044645539305]
A key challenge in benchmarking robustness is that its evaluation is often error-prone, leading to robustness overestimation.
We evaluate adversarial robustness with AutoAttack, an ensemble of white- and black-box attacks.
We analyze the impact of robustness on the performance on distribution shifts, calibration, out-of-distribution detection, fairness, privacy leakage, smoothness, and transferability.
arXiv Detail & Related papers (2020-10-19T17:06:18Z)
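The black-box side of ensembles like AutoAttack can be illustrated with a minimal score-based random-search attack in the spirit of Square Attack (one AutoAttack component). This numpy sketch is an assumption-laden stand-in, not the AutoAttack implementation: the linear `predict` model and the single-coordinate proposals are simplifications for illustration; the real attack uses square-shaped perturbations and a decaying schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_search_attack(x, y, predict, eps=0.2, iters=300):
    # Score-based black-box attack: propose random L_inf perturbations of
    # single coordinates and greedily keep any proposal that lowers the score
    # of the true class y. Only model *outputs* are queried, never gradients.
    x_adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=x.shape), 0.0, 1.0)
    best = predict(x_adv)[y]
    for _ in range(iters):
        cand = x_adv.copy()
        i = rng.integers(x.size)
        cand.flat[i] = np.clip(x.flat[i] + eps * rng.choice([-1.0, 1.0]), 0.0, 1.0)
        score = predict(cand)[y]
        if score < best:                      # greedy accept
            best, x_adv = score, cand
    return x_adv

# stand-in "model": linear logits over a 10-pixel input, 3 classes
W = rng.normal(size=(3, 10))
predict = lambda v: W @ v
x = rng.uniform(0.3, 0.7, size=10)
y = int(np.argmax(predict(x)))

x_adv = random_search_attack(x, y, predict)
```

Because each accepted proposal strictly lowers the true-class score, the loop steadily erodes the model's confidence within the $\ell_\infty$ budget, which is why such query-only attacks are a useful check against gradient-masking defenses.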
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.