Assessing Privacy Risks from Feature Vector Reconstruction Attacks
- URL: http://arxiv.org/abs/2202.05760v1
- Date: Fri, 11 Feb 2022 16:52:02 GMT
- Title: Assessing Privacy Risks from Feature Vector Reconstruction Attacks
- Authors: Emily Wenger, Francesca Falzon, Josephine Passananti, Haitao Zheng,
Ben Y. Zhao
- Abstract summary: We develop metrics that meaningfully capture the threat of reconstructed face images.
We show that reconstructed face images enable re-identification by both commercial facial recognition systems and humans.
Our results confirm that feature vectors should be recognized as Personally Identifiable Information.
- Score: 24.262351521060676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In deep neural networks for facial recognition, feature vectors are numerical
representations that capture the unique features of a given face. While it is
known that a version of the original face can be recovered via "feature
reconstruction," we lack an understanding of the end-to-end privacy risks
produced by these attacks. In this work, we address this shortcoming by
developing metrics that meaningfully capture the threat of reconstructed face
images. Using end-to-end experiments and user studies, we show that
reconstructed face images enable re-identification by both commercial facial
recognition systems and humans, at a rate that is, at worst, four times higher
than randomized baselines. Our results confirm that feature vectors should be
recognized as Personally Identifiable Information (PII) in order to protect
user privacy.
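The re-identification threat described in the abstract can be illustrated with a minimal sketch (an assumed setup for illustration, not the paper's actual pipeline): a feature vector extracted from a reconstructed face is matched against a gallery of enrolled vectors by cosine similarity, and the nearest enrolled identity is reported. The gallery names and 4-D embeddings below are hypothetical toy data.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reidentify(probe, gallery):
    # Return the gallery identity whose enrolled feature vector is most
    # similar to the probe (e.g. a vector extracted from a reconstructed face).
    scores = {name: cosine_similarity(probe, vec) for name, vec in gallery.items()}
    return max(scores, key=scores.get)

# Toy gallery of enrolled identities (hypothetical 4-D embeddings).
rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=4) for name in ["alice", "bob", "carol"]}

# A "reconstructed" probe: the enrolled vector plus small noise, standing in
# for the information a feature-reconstruction attack recovers.
probe = gallery["bob"] + 0.1 * rng.normal(size=4)
print(reidentify(probe, gallery))
```

Because the probe stays close in angle to the enrolled vector, nearest-neighbor matching recovers the identity, which is why leaked feature vectors carry PII-level risk.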
Related papers
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Privacy-Preserving Face Recognition Using Trainable Feature Subtraction [40.47645421424354]
Face recognition has led to increasing privacy concerns.
This paper explores face image protection against viewing and recovery attacks.
We distill our methodologies into a novel privacy-preserving face recognition method, MinusFace.
arXiv Detail & Related papers (2024-03-19T05:27:52Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Privacy-Preserving Face Recognition Using Random Frequency Components [46.95003101593304]
Face recognition has sparked increasing privacy concerns.
We propose to conceal visual information by pruning human-perceivable low-frequency components.
We distill our findings into a novel privacy-preserving face recognition method, PartialFace.
arXiv Detail & Related papers (2023-08-21T04:31:02Z)
- CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search [10.16904417057085]
Deep learning based face recognition systems can enable unauthorized tracking of users in the digital world.
Existing methods for enhancing privacy fail to generate naturalistic images that can protect facial privacy without compromising user experience.
We propose a novel two-step approach for facial privacy protection that relies on finding adversarial latent codes in the low-dimensional manifold of a pretrained generative model.
arXiv Detail & Related papers (2023-06-16T17:58:15Z)
- Privacy-preserving Adversarial Facial Features [31.885215405010687]
We propose AdvFace, an adversarial-features-based face privacy protection approach that generates privacy-preserving adversarial features.
We show that AdvFace outperforms the state-of-the-art face privacy-preserving methods in defending against reconstruction attacks.
arXiv Detail & Related papers (2023-05-08T08:52:08Z)
- FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders [81.21440457805932]
We propose a novel framework FaceMAE, where the face privacy and recognition performance are considered simultaneously.
Randomly masked face images are used to train the reconstruction module in FaceMAE.
We also conduct extensive privacy-preserving face recognition experiments on several public face datasets.
arXiv Detail & Related papers (2022-05-23T07:19:42Z)
- Vulnerability of Face Recognition Systems Against Composite Face Reconstruction Attack [3.3707422585608953]
Rounding the confidence score is considered a trivial yet simple and effective countermeasure against gradient-descent-based image reconstruction attacks.
In this paper, we show that face reconstruction attacks based on composite faces reveal the ineffectiveness of the rounding policy as a countermeasure.
arXiv Detail & Related papers (2020-08-23T03:37:51Z)
- Black-Box Face Recovery from Identity Features [61.950765357647605]
We attack the state-of-the-art face recognition system (ArcFace) to test our algorithm.
Our algorithm requires significantly fewer queries than the state-of-the-art solution.
arXiv Detail & Related papers (2020-07-27T15:25:38Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.