LowKey: Leveraging Adversarial Attacks to Protect Social Media Users
from Facial Recognition
- URL: http://arxiv.org/abs/2101.07922v2
- Date: Mon, 25 Jan 2021 04:23:22 GMT
- Authors: Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan,
John Dickerson, Gavin Taylor, Tom Goldstein
- Abstract summary: We develop our own adversarial filter that accounts for the entire image processing pipeline.
We release an easy-to-use webtool that significantly degrades the accuracy of Amazon Rekognition and the Microsoft Azure Face Recognition API.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial recognition systems are increasingly deployed by private corporations,
government agencies, and contractors for consumer services and mass
surveillance programs alike. These systems are typically built by scraping
social media profiles for user images. Adversarial perturbations have been
proposed for bypassing facial recognition systems. However, existing methods
fail on full-scale systems and commercial APIs. We develop our own adversarial
filter that accounts for the entire image processing pipeline and is
demonstrably effective against industrial-grade pipelines that include face
detection and large scale databases. Additionally, we release an easy-to-use
webtool that significantly degrades the accuracy of Amazon Rekognition and the
Microsoft Azure Face Recognition API, reducing the accuracy of each to below
1%.
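The perturbation idea behind filters like LowKey's can be sketched as a projected-gradient-descent (PGD) evasion attack: nudge the image, within an imperceptibility budget, so that its feature embedding moves away from that of the clean image. The sketch below is a minimal toy illustration, not the paper's method: a random linear map stands in for the deep face-embedding network, and all names (`extract`, `pgd_evasion`, `epsilon`) are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a face-embedding network: a fixed random linear map.
# A real attack (as in LowKey) would backpropagate through a deep model
# and the full image-processing pipeline, including face detection.
rng = np.random.default_rng(0)
D, K = 64, 16                       # flattened image size, embedding size
W = rng.standard_normal((K, D)) / np.sqrt(D)

def extract(x):
    """Map a flattened image to its feature embedding."""
    return W @ x

def pgd_evasion(x, epsilon=0.05, alpha=0.01, steps=40):
    """Push the embedding of x + delta away from extract(x) while keeping
    the perturbation within an L-infinity ball of radius epsilon."""
    target = extract(x)
    delta = rng.uniform(-epsilon, epsilon, size=x.shape) * 0.1  # small random start
    for _ in range(steps):
        # Gradient of ||extract(x + delta) - target||^2 with respect to delta;
        # analytic here because the toy extractor is linear.
        grad = 2.0 * W.T @ (extract(x + delta) - target)
        # Signed ascent step, then projection back onto the epsilon-ball.
        delta = np.clip(delta + alpha * np.sign(grad), -epsilon, epsilon)
    return x + delta

x = rng.uniform(0.0, 1.0, size=D)   # a "clean" image
x_adv = pgd_evasion(x)
shift = np.linalg.norm(extract(x_adv) - extract(x))
print(f"max pixel change: {np.abs(x_adv - x).max():.3f}, embedding shift: {shift:.3f}")
```

The key design point the abstract highlights is that attacking the embedding alone is not enough: commercial pipelines include face detection and large gallery databases, so an effective filter must keep its effect intact through the entire pipeline.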
Related papers
- A Machine Learning-Based Secure Face Verification Scheme and Its Applications to Digital Surveillance [0.9208007322096533]
Most real-world recognition systems ignore the importance of protecting the identity-sensitive facial images that are used for verification.
We use the DeepID2 convolutional neural network to extract the features of a facial image and an EM algorithm to solve the facial verification problem.
We develop three face verification systems for surveillance (or entrance) control of a local community based on three levels of privacy concerns.
arXiv Detail & Related papers (2024-10-29T12:25:00Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Face Presentation Attack Detection [59.05779913403134]
Face recognition technology has been widely used in daily interactive applications such as checking-in and mobile payment.
However, its vulnerability to presentation attacks (PAs) limits its reliable use in ultra-secure applicational scenarios.
arXiv Detail & Related papers (2022-12-07T14:51:17Z)
- Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain [77.8858706250075]
This paper proposes a privacy-preserving face recognition method using differential privacy in the frequency domain.
Our method performs well on several classical face recognition test sets.
arXiv Detail & Related papers (2022-07-15T07:15:36Z)
- Fairness Properties of Face Recognition and Obfuscation Systems [19.195705814819306]
Face obfuscation systems generate imperceptible perturbations that, when added to an image, cause facial recognition systems to misidentify the user.
This dependence of face obfuscation on metric embedding networks, which are known to be unfair in the context of facial recognition, surfaces the question of demographic fairness.
We find that metric embedding networks are demographically aware; they cluster faces in the embedding space based on their demographic attributes.
arXiv Detail & Related papers (2021-08-05T16:18:15Z)
- Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web APIs under Deepfake Impersonation Attack [17.97648576135166]
We demonstrate how vulnerable face recognition technologies from popular companies are to Deepfake Impersonation (DI) attacks.
We achieve maximum success rates of 78.0% and 99.9% for targeted (i.e., precise match) and non-targeted (i.e., match with any celebrity) attacks.
arXiv Detail & Related papers (2021-03-01T08:40:10Z)
- FoggySight: A Scheme for Facial Lookup Privacy [8.19666118455293]
We propose and evaluate a solution that applies lessons learned from the adversarial examples literature to modify facial photos in a privacy-preserving manner before they are uploaded to social media.
FoggySight's core feature is a community protection strategy in which users, acting as protectors of others' privacy, upload decoy photos generated by adversarial machine learning algorithms.
We explore different settings for this scheme and find that it does enable protection of facial privacy -- including against a facial recognition service with unknown internals.
arXiv Detail & Related papers (2020-12-15T19:57:18Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.