Unsupervised Enhancement of Soft-biometric Privacy with Negative Face
Recognition
- URL: http://arxiv.org/abs/2002.09181v1
- Date: Fri, 21 Feb 2020 08:37:16 GMT
- Title: Unsupervised Enhancement of Soft-biometric Privacy with Negative Face
Recognition
- Authors: Philipp Terhörst, Marco Huber, Naser Damer, Florian Kirchbuchner,
Arjan Kuijper
- Abstract summary: We present Negative Face Recognition (NFR), a novel face recognition approach that enhances soft-biometric privacy at the template level.
Our approach does not require privacy-sensitive labels and offers more comprehensive privacy protection that is not limited to pre-defined attributes.
- Score: 13.555831336280407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current research on soft-biometrics has shown that privacy-sensitive
information can be deduced from an individual's biometric templates. Since in many
applications these templates are expected to be used for recognition purposes
only, this raises major privacy issues. Previous works focused on supervised
privacy-enhancing solutions that require privacy-sensitive information about
individuals and are limited to suppressing single, pre-defined attributes.
Consequently, they cannot account for attributes that were not considered during
training. In this work, we present Negative Face Recognition (NFR), a novel face
recognition approach that enhances soft-biometric privacy at the template level
by representing face templates in a complementary (negative) domain. While
ordinary templates characterize facial properties of an individual, negative
templates describe facial properties that do not exist for this individual. This
suppresses privacy-sensitive information in the stored templates. Experiments are
conducted on two publicly available datasets, captured under controlled and
uncontrolled conditions, covering three privacy-sensitive attributes. The
experiments demonstrate that our proposed approach achieves higher suppression
rates than previous work while also maintaining higher recognition performance.
Unlike previous works, our approach does not require privacy-sensitive labels and
offers more comprehensive privacy protection that is not limited to pre-defined
attributes.
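
To make the abstract's core idea concrete, the minimal sketch below compares a (positive) probe template against a stored complementary (negative) reference, where a genuine pair is expected to yield a low score. It only illustrates the negative-domain intuition; it is not the authors' NFR pipeline, and the property `dictionary`, the `to_negative` construction, and the parameter `k` are hypothetical stand-ins introduced purely for illustration.

```python
# Toy illustration of comparing in a complementary ("negative") domain.
# NOTE: conceptual sketch only, NOT the NFR method from the paper. The property
# dictionary, to_negative(), and k are hypothetical stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)

def to_negative(positive: np.ndarray, dictionary: np.ndarray, k: int = 8) -> np.ndarray:
    """Build an illustrative negative template from the k candidate-property
    directions the positive template is LEAST aligned with, i.e. properties
    the face does not exhibit."""
    alignment = dictionary @ positive                  # alignment with each candidate property
    absent = dictionary[np.argsort(alignment)[:k]]     # the k least-present properties
    negative = absent.mean(axis=0)
    return negative / np.linalg.norm(negative)

def comparison_score(probe_positive: np.ndarray, reference_negative: np.ndarray) -> float:
    """A genuine probe should NOT exhibit the properties its own negative
    reference marks as absent, so a LOW score indicates a match."""
    return float(probe_positive @ reference_negative)

# Hypothetical 64-d embeddings and a random candidate-property dictionary.
dictionary = rng.normal(size=(256, 64))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

enrolled = rng.normal(size=64)
enrolled /= np.linalg.norm(enrolled)
probe_same = enrolled + 0.1 * rng.normal(size=64)
probe_same /= np.linalg.norm(probe_same)
probe_other = rng.normal(size=64)
probe_other /= np.linalg.norm(probe_other)

reference = to_negative(enrolled, dictionary)   # only the negative template is stored
print("genuine :", comparison_score(probe_same, reference))    # expected low
print("impostor:", comparison_score(probe_other, reference))   # typically higher
```

The point of the sketch is the inverted decision logic: only the negative template is stored, so the template describes which properties are absent rather than exposing the facial properties themselves.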
Related papers
- PAC Privacy Preserving Diffusion Models [6.299952353968428]
Diffusion models can produce images with both high privacy and visual quality.
However, challenges remain, such as ensuring robust protection when privatizing specific data attributes.
We introduce the PAC Privacy Preserving Diffusion Model, which leverages diffusion principles and ensures Probably Approximately Correct (PAC) privacy.
arXiv Detail & Related papers (2023-12-02T18:42:52Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - TeD-SPAD: Temporal Distinctiveness for Self-supervised
Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject gradient norm in differentially private (DP) neural network training and individual privacy loss.
We introduce a novel metric, termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion a subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - PrivacyProber: Assessment and Detection of Soft-Biometric
Privacy-Enhancing Techniques [1.790445868185437]
We study the robustness of several state-of-the-art soft-biometric privacy-enhancing techniques to attribute recovery attempts.
We propose PrivacyProber, a high-level framework for restoring soft-biometric information from privacy-enhanced facial images.
arXiv Detail & Related papers (2022-11-16T12:20:18Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z) - SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments in self-supervised learning (SSL) have unleashed the untapped potential of unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z) - An Attack on Feature Level-based Facial Soft-biometric Privacy
Enhancement [13.780253190395715]
We introduce an attack on feature level-based facial soft-biometric privacy-enhancement techniques.
It circumvents the privacy enhancement to a considerable degree and correctly classifies gender with an accuracy of up to approximately 90%.
arXiv Detail & Related papers (2021-11-24T10:41:15Z) - Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images [13.690485523871855]
State-of-the-art approaches use privacy-preserving generative adversarial networks (PP-GANs) to enable reliable facial expression recognition without leaking users' identity.
We show that it is possible to hide the sensitive identification data in the sanitized output images of such PP-GANs for later extraction.
arXiv Detail & Related papers (2020-09-19T19:02:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.