Privacy protection based on mask template
- URL: http://arxiv.org/abs/2202.06250v1
- Date: Sun, 13 Feb 2022 08:11:04 GMT
- Title: Privacy protection based on mask template
- Authors: Hao Wang (1), Yu Bai (2), Guangmin Sun (1), Jie Liu (1) ((1) Beijing University of Technology, (2) Beijing Friendship Hospital)
- Abstract summary: Human biometrics generally exist in images.
In order to avoid disclosure of personal privacy, we should prevent unauthorized recognition algorithms from acquiring real features of the original image.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Powerful recognition algorithms are widely used on the Internet and in
important medical systems, which poses a serious threat to personal privacy.
Although the law provides diverse protections, e.g., the General Data Protection
Regulation (GDPR) in Europe and Articles 1032 to 1039 of the Civil Code in
China, biometric data disclosure, an important privacy event, is often hidden
and thus difficult for the owner to detect and trace to its source. Human
biometrics generally exist in images. To avoid the disclosure of personal
privacy, we should prevent unauthorized recognition algorithms from acquiring
the real features of the original image.
Related papers
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Fairly Private: Investigating The Fairness of Visual Privacy Preservation Algorithms [1.5293427903448025]
This paper investigates the fairness of commonly used visual privacy preservation algorithms.
Experiments on the PubFig dataset clearly show that the privacy protection provided is unequal across groups.
arXiv Detail & Related papers (2023-01-12T13:40:38Z)
- Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain [77.8858706250075]
This paper proposes a privacy-preserving face recognition method using differential privacy in the frequency domain.
Our method performs very well with several classical face recognition test sets.
arXiv Detail & Related papers (2022-07-15T07:15:36Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate the face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Does a Face Mask Protect my Privacy?: Deep Learning to Predict Protected Attributes from Masked Face Images [0.6562256987706128]
We train and apply a CNN based on the ResNet-50 architecture with 20,003 synthetic masked images.
We show that there is no significant difference in privacy invasiveness when a mask is worn.
Our proposed approach can serve as a baseline utility to evaluate the privacy-invasiveness of artificial intelligence systems.
arXiv Detail & Related papers (2021-12-15T04:46:19Z)
- FoggySight: A Scheme for Facial Lookup Privacy [8.19666118455293]
We propose and evaluate a solution that applies lessons learned from the adversarial examples literature to modify facial photos in a privacy-preserving manner before they are uploaded to social media.
FoggySight's core feature is a community protection strategy in which users acting as protectors of others' privacy upload decoy photos generated by adversarial machine learning algorithms.
We explore different settings for this scheme and find that it does enable protection of facial privacy -- including against a facial recognition service with unknown internals.
arXiv Detail & Related papers (2020-12-15T19:57:18Z)
- "Healthy surveillance": Designing a concept for privacy-preserving mask recognition AI in the age of pandemics [1.1470070927586016]
During the COVID-19 pandemic in 2020, many governments recommended or even required their citizens to wear masks.
Large-scale monitoring of mask recognition requires well-performing Artificial Intelligence.
Our conceptual deep-learning based Artificial Intelligence is able to achieve detection performances between 95% and 99% in a privacy-friendly setting.
arXiv Detail & Related papers (2020-10-20T14:00:04Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
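The adversarial identity masks above are built by iterative gradient-based perturbation: repeatedly nudging the image, within a small bound, so that its embedding moves away from the original identity's embedding. The sketch below illustrates only this general iterative technique, not the actual TIP-IM method; the linear `embed` model, step sizes, and bound are hypothetical stand-ins for a real face recognizer.

```python
import numpy as np

# Toy stand-in for a face-embedding model: a fixed random linear map.
# A real system would use a deep face recognizer here (hypothetical).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 64))  # maps a 64-pixel "image" to an 8-dim embedding


def embed(x):
    return W @ x


def identity_mask(x, eps=0.1, step=0.02, iters=50):
    """Iteratively build an additive mask that pushes the embedding of
    x + mask away from the embedding of x, under an L-infinity bound eps."""
    e0 = embed(x)
    mask = np.zeros_like(x)
    for _ in range(iters):
        # Gradient of ||embed(x + mask) - e0||^2 w.r.t. mask (linear model).
        diff = embed(x + mask) - e0
        grad = 2 * W.T @ diff
        # On the first step the gradient is exactly zero; nudge randomly.
        if np.allclose(grad, 0):
            grad = rng.standard_normal(x.shape)
        # Signed gradient ascent step, clipped to keep the mask small.
        mask = np.clip(mask + step * np.sign(grad), -eps, eps)
    return mask


x = rng.standard_normal(64)
m = identity_mask(x)
shift = np.linalg.norm(embed(x + m) - embed(x))
print(shift > 0, np.max(np.abs(m)) <= 0.1)
```

The clipping keeps the mask visually subtle while the signed-gradient updates maximize the embedding shift, which is the trade-off these protection methods balance.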
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.