Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web
APIs under Deepfake Impersonation Attack
- URL: http://arxiv.org/abs/2103.00847v2
- Date: Tue, 2 Mar 2021 07:56:46 GMT
- Title: Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web
APIs under Deepfake Impersonation Attack
- Authors: Shahroz Tariq, Sowon Jeon, Simon S. Woo
- Abstract summary: We demonstrate how vulnerable face recognition technologies from popular companies are to Deepfake Impersonation (DI) attacks.
We achieve maximum success rates of 78.0% and 99.9% for targeted (i.e., precise match) and non-targeted (i.e., match with any celebrity) attacks.
- Score: 17.97648576135166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, significant advancements have been made in face recognition
technologies using Deep Neural Networks. As a result, companies such as
Microsoft, Amazon, and Naver offer highly accurate commercial face recognition
web services for diverse applications to meet end-user needs. However, such
technologies are persistently threatened, as virtually anyone can quickly
implement impersonation attacks. In particular, these
attacks can be a significant threat for authentication and identification
services, which heavily rely on their underlying face recognition technologies'
accuracy and robustness. Despite the gravity of this threat, deepfake abuse of
commercial web APIs and the robustness of those APIs have not yet been
thoroughly investigated. This work provides a measurement study on the
robustness of black-box commercial face recognition APIs against Deepfake
Impersonation (DI) attacks using celebrity recognition APIs as an example case
study. We use five deepfake datasets, two of which we created and plan to
release. More specifically, we measure attack performance based
on two scenarios (targeted and non-targeted) and further analyze the differing
system behaviors using fidelity, confidence, and similarity metrics.
Accordingly, we demonstrate how vulnerable face recognition technologies from
popular companies are to DI attacks, achieving maximum success rates of 78.0%
and 99.9% for targeted (i.e., precise match) and non-targeted (i.e., match with
any celebrity) attacks, respectively. Moreover, we propose practical defense
strategies to mitigate DI attacks, reducing the attack success rates to as low
as 0% and 0.02% for targeted and non-targeted attacks, respectively.
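The measurement protocol described above can be summarized in a short sketch. Below is a minimal, illustrative Python version: a targeted hit requires the API to return exactly the impersonated identity, while a non-targeted hit only requires a match with any celebrity. The `recognize_celebrity` client, its return format, and the 0.5 confidence threshold are hypothetical stand-ins for the black-box commercial APIs, not the authors' code.

```python
# Sketch of the DI-attack measurement protocol, not the authors' implementation.
# `recognize_celebrity` is a hypothetical stand-in for a commercial celebrity
# recognition web API (e.g., Microsoft, Amazon, or Naver endpoints).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Recognition:
    celebrity: Optional[str]   # predicted identity, or None if no match
    confidence: float          # API confidence score in [0, 1]

def recognize_celebrity(image_path: str) -> Recognition:
    """Placeholder for a black-box commercial face recognition API call."""
    raise NotImplementedError("wire this to a real web API client")

def measure_di_attack(deepfakes: list[tuple[str, str]],
                      threshold: float = 0.5) -> tuple[float, float]:
    """Compute targeted and non-targeted DI attack success rates.

    `deepfakes` pairs each deepfake image path with the celebrity identity
    the attacker impersonates. Targeted success requires the API to return
    exactly that identity; non-targeted success only requires a confident
    match with *any* celebrity.
    """
    targeted = non_targeted = 0
    for image_path, impersonated in deepfakes:
        result = recognize_celebrity(image_path)
        if result.celebrity is not None and result.confidence >= threshold:
            non_targeted += 1                 # matched some celebrity
            if result.celebrity == impersonated:
                targeted += 1                 # matched the intended target
    n = len(deepfakes)
    return targeted / n, non_targeted / n
```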
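The abstract reports that the proposed defenses cut attack success to 0% and 0.02% but does not detail them here. As one plausible strategy (an assumption, not necessarily the authors' method), a deepfake detector can gate queries before they reach the recognition API; `deepfake_score` is a hypothetical detector, and `recognize_celebrity` reuses the stub above.

```python
# Hedged sketch of one possible defense: reject inputs that look synthetic
# before querying the face recognition API. `deepfake_score` is a hypothetical
# off-the-shelf deepfake detector returning P(input is synthetic).

def deepfake_score(image_path: str) -> float:
    """Placeholder for any off-the-shelf deepfake detector."""
    raise NotImplementedError

def guarded_recognize(image_path: str, reject_above: float = 0.5):
    """Refuse to answer when the input is likely a deepfake."""
    if deepfake_score(image_path) > reject_above:
        return None  # rejected (defended) query
    return recognize_celebrity(image_path)
```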
Related papers
- Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems [13.830575255066773]
Face recognition pipelines have been widely deployed in mission-critical systems for trustworthy, equitable, and responsible AI applications.
The emergence of adversarial attacks has threatened the security of the entire recognition pipeline.
We propose an effective yet easy-to-launch physical adversarial attack, named AdvColor, against black-box face recognition pipelines.
arXiv Detail & Related papers (2024-07-11T13:58:09Z) - SENet: Visual Detection of Online Social Engineering Attack Campaigns [3.858859576352153]
Social engineering (SE) aims at deceiving users into performing actions that may compromise their security and privacy.
SEShield is a framework for in-browser detection of social engineering attacks.
arXiv Detail & Related papers (2024-01-10T22:25:44Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery technologies generate strikingly realistic faces, raising public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via
Filter Pruning [55.84746218227712]
We develop SqueezerFaceNet, a lightweight face recognition network with fewer than 1M parameters.
We show that it can be further reduced (up to 40%) without an appreciable loss in performance.
arXiv Detail & Related papers (2023-07-20T08:38:50Z) - Digital and Physical Face Attacks: Reviewing and One Step Further [31.780516471483985]
Face presentation attacks (FPA) have raised pressing concerns about the trustworthiness of face recognition systems.
Besides physical face attacks, face videos/images are vulnerable to a wide variety of digital attack techniques launched by malicious hackers.
This survey aims to build the integrity of face forensics by providing thorough analyses of existing literature and highlighting the issues requiring further attention.
arXiv Detail & Related papers (2022-09-29T11:25:52Z) - Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z) - LowKey: Leveraging Adversarial Attacks to Protect Social Media Users
from Facial Recognition [46.610361721000444]
We develop our own adversarial filter that accounts for the entire image processing pipeline.
We release an easy-to-use webtool that significantly degrades the accuracy of Amazon Rekognition and the Microsoft Azure Face Recognition API.
arXiv Detail & Related papers (2021-01-20T01:40:06Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed MixNet, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models (a rough sketch of such an iterative mask appears after this list).
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
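The TIP-IM blurb above gives only the high-level idea of an iterative identity mask. As a rough illustration in that spirit (not the authors' TIP-IM algorithm), the PyTorch sketch below perturbs a face image so its embedding moves toward a decoy identity while staying inside an L-infinity budget; the encoder `embed`, the cosine objective, and all hyperparameters are assumptions.

```python
# PGD-style identity-mask sketch *in the spirit of* TIP-IM, not the authors'
# method. `embed` is an assumed differentiable face-embedding network.

import torch
import torch.nn.functional as F

def identity_mask(image: torch.Tensor,       # (1, 3, H, W), values in [0, 1]
                  target_emb: torch.Tensor,  # embedding of a decoy identity
                  embed,                     # differentiable face encoder
                  eps: float = 8 / 255,
                  step: float = 1 / 255,
                  iters: int = 40) -> torch.Tensor:
    """Return an additive mask `delta` with ||delta||_inf <= eps."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        emb = embed(image + delta)
        # maximize cosine similarity to the decoy identity's embedding
        loss = -F.cosine_similarity(emb, target_emb).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()           # descend the loss
            delta.clamp_(-eps, eps)                     # project to L_inf ball
            delta.add_(image).clamp_(0, 1).sub_(image)  # keep image in [0, 1]
        delta.grad.zero_()
    return delta.detach()
```

A protected image would then be `image + identity_mask(image, target_emb, embed)`, which an honest viewer still recognizes while a face recognition model is steered toward the decoy identity.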