Simulated Adversarial Testing of Face Recognition Models
- URL: http://arxiv.org/abs/2106.04569v1
- Date: Tue, 8 Jun 2021 17:58:10 GMT
- Title: Simulated Adversarial Testing of Face Recognition Models
- Authors: Nataniel Ruiz, Adam Kortylewski, Weichao Qiu, Cihang Xie, Sarah Adel
Bargal, Alan Yuille, Stan Sclaroff
- Abstract summary: We propose a framework for learning how to test machine learning algorithms using simulators in an adversarial manner.
We are the first to show that weaknesses of models trained on real data can be discovered using simulated samples.
- Score: 53.10078734154151
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most machine learning models are validated and tested on fixed datasets. This
can give an incomplete picture of the capabilities and weaknesses of the model.
Such weaknesses can be revealed at test time in the real world. The risks
involved in such failures can be loss of profits, loss of time or even loss of
life in certain critical applications. In order to alleviate this issue,
simulators can be controlled in a fine-grained manner using interpretable
parameters to explore the semantic image manifold. In this work, we propose a
framework for learning how to test machine learning algorithms using simulators
in an adversarial manner in order to find weaknesses in the model before
deploying it in critical scenarios. We apply this model in a face recognition
scenario. We are the first to show that weaknesses of models trained on real
data can be discovered using simulated samples. Using our proposed method, we
can find adversarial synthetic faces that fool contemporary face recognition
models. This demonstrates that these models have weaknesses that are not
measured by commonly used validation datasets. We hypothesize that this type
of adversarial example is not isolated, but usually lies in connected
components in the latent space of the simulator. We present a method to find
these adversarial regions as opposed to the typical adversarial points found in
the adversarial example literature.
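As a rough illustration of the testing loop the abstract describes, the sketch below probes a face verification model with renderings from a controllable simulator and searches its interpretable parameters (pose, illumination, and so on) for settings under which the model no longer matches a subject to their own reference image. render_face, embed, the parameter bounds, and the match threshold are hypothetical placeholders, and the simple mutation search stands in for the learned adversary trained in the paper.

import numpy as np

def render_face(identity_id, params):
    # Hypothetical simulator call: params = [yaw, pitch, light_azimuth, ...].
    raise NotImplementedError  # plug in a controllable face simulator here

def embed(image):
    # Hypothetical face recognition embedding network (e.g. an ArcFace-style model).
    raise NotImplementedError  # plug in the model under test here

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def find_adversarial_params(identity_id, reference_image, param_low, param_high,
                            steps=200, pop=16, sigma=0.1, match_threshold=0.3):
    # Search for simulator parameters whose rendering of the *same* identity is
    # no longer matched to the reference image (a false non-match).
    ref = embed(reference_image)
    rng = np.random.default_rng(0)
    best = rng.uniform(param_low, param_high)
    best_score = cosine(embed(render_face(identity_id, best)), ref)
    for _ in range(steps):
        # Propose local perturbations of the current best parameters.
        cand = best + sigma * (param_high - param_low) * rng.standard_normal((pop, best.size))
        cand = np.clip(cand, param_low, param_high)
        scores = [cosine(embed(render_face(identity_id, c)), ref) for c in cand]
        i = int(np.argmin(scores))
        if scores[i] < best_score:
            best, best_score = cand[i], scores[i]
        if best_score < match_threshold:  # verification fails: adversarial parameters found
            break
    return best, best_score

Connected adversarial regions, as hypothesized in the abstract, could then be explored by perturbing the returned parameters and checking whether the match score stays below the threshold.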
Related papers
- A Training Rate and Survival Heuristic for Inference and Robustness Evaluation (TRASHFIRE) [1.622320874892682]
This work addresses the problem of understanding and predicting how particular model hyperparameters influence the performance of a model in the presence of an adversary.
The proposed approach uses survival models, worst-case examples, and a cost-aware analysis to precisely and accurately reject a particular model change.
Using the proposed methodology, we show that ResNet is hopelessly insecure against even the simplest of white box attacks.
arXiv Detail & Related papers (2024-01-24T19:12:37Z)
- Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z)
- Learning Hybrid Dynamics Models With Simulator-Informed Latent States [7.801959219897031]
We propose a new approach to hybrid modeling, where we inform the latent states of a learned model via a simulator.
This allows us to control the predictions via the simulator, preventing them from accumulating errors.
In our learning-based setting, we jointly learn the dynamics and an observer that infers the latent states via the simulator.
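A minimal sketch of the simulator-in-the-loop idea summarized above, assuming a learned latent dynamics model whose latent state is re-inferred at every step by an observer conditioned on the simulator's output; the architecture and dimensions are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class HybridDynamics(nn.Module):
    def __init__(self, obs_dim, latent_dim=16):
        super().__init__()
        # Learned residual dynamics in latent space.
        self.dynamics = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                      nn.Linear(64, latent_dim))
        # Observer: infers the latent state from the simulator's (imperfect) output.
        self.observer = nn.Sequential(nn.Linear(obs_dim + latent_dim, 64), nn.Tanh(),
                                      nn.Linear(64, latent_dim))
        self.decoder = nn.Linear(latent_dim, obs_dim)

    def rollout(self, z0, sim_traj):
        # sim_traj: tensor of shape (T, obs_dim) produced by the simulator.
        z, preds = z0, []
        for sim_obs in sim_traj:
            z = self.observer(torch.cat([sim_obs, z]))  # simulator-informed correction
            z = z + self.dynamics(z)                    # learned refinement step
            preds.append(self.decoder(z))
        return torch.stack(preds)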
arXiv Detail & Related papers (2023-09-06T09:57:58Z)
- Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors [2.0649235321315285]
There is a dire need for deepfake detection technology to help spot deepfake media.
Current deepfake detection models can achieve outstanding accuracy (>90%).
This study identifies makeup application as an adversarial attack that could fool deepfake detectors.
arXiv Detail & Related papers (2022-04-19T02:24:30Z)
- Smoothed Embeddings for Certified Few-Shot Learning [63.68667303948808]
We extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings.
Our results are confirmed by experiments on different datasets.
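A minimal sketch of the smoothed-embedding idea, assuming a generic embed_fn: average the normalized embeddings of Gaussian-noised copies of the input and classify by the nearest few-shot class prototype. The noise level, sample count, and prototype-based classifier are assumptions; the paper's certification analysis is not reproduced here.

import numpy as np

def smoothed_embedding(embed_fn, x, sigma=0.25, n_samples=100, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    embs = []
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)  # Gaussian input noise
        e = embed_fn(noisy)
        embs.append(e / (np.linalg.norm(e) + 1e-8))       # normalized embedding
    mean = np.mean(embs, axis=0)
    return mean / (np.linalg.norm(mean) + 1e-8)           # re-normalize the average

def few_shot_predict(embed_fn, x, prototypes):
    # prototypes: dict mapping class name -> unit-norm prototype embedding.
    z = smoothed_embedding(embed_fn, x)
    return max(prototypes, key=lambda c: float(np.dot(z, prototypes[c])))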
arXiv Detail & Related papers (2022-02-02T18:19:04Z)
- Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
- Detecting Anomalies in Semantic Segmentation with Prototypes [23.999211737485812]
We propose to address anomaly segmentation through prototype learning.
Our approach achieves the new state of the art, with a significant margin over previous works.
arXiv Detail & Related papers (2021-06-01T13:22:33Z)
- On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
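A minimal sketch of a genetic search over model subsets in the spirit of the summary above: candidate ensembles are binary membership vectors and fitness is a hypothetical transfer_rate callback scoring how well adversarial examples crafted against the ensemble fool held-out victim models; the paper's actual attack and fitness definitions differ.

import numpy as np

def genetic_ensemble_search(n_models, transfer_rate, pop_size=20,
                            generations=30, mutate_p=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_models))        # random initial ensembles
    for _ in range(generations):
        fitness = np.array([transfer_rate(ind) for ind in pop])
        parents = pop[np.argsort(-fitness)[: pop_size // 2]]    # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_models)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_models) < mutate_p              # random bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents] + children)
    fitness = np.array([transfer_rate(ind) for ind in pop])
    return pop[int(np.argmax(fitness))]                         # best ensemble mask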
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Trade-offs between membership privacy & adversarially robust learning [13.37805637358556]
We identify settings where standard models will overfit to a larger extent in comparison to robust models.
The degree of overfitting naturally depends on the amount of data available for training.
arXiv Detail & Related papers (2020-06-08T14:20:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.