Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images
- URL: http://arxiv.org/abs/2009.09283v1
- Date: Sat, 19 Sep 2020 19:02:17 GMT
- Title: Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images
- Authors: Kang Liu, Benjamin Tan, Siddharth Garg
- Abstract summary: State-of-the-art approaches use privacy-preserving generative adversarial networks (PP-GANs) to enable reliable facial expression recognition without leaking users' identity.
We show that it is possible to hide the sensitive identification data in the sanitized output images of such PP-GANs for later extraction.
- Score: 13.690485523871855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unprecedented data collection and sharing have exacerbated privacy concerns
and led to increasing interest in privacy-preserving tools that remove
sensitive attributes from images while maintaining useful information for other
tasks. Currently, state-of-the-art approaches use privacy-preserving generative
adversarial networks (PP-GANs) for this purpose, for instance, to enable
reliable facial expression recognition without leaking users' identity.
However, PP-GANs do not offer formal proofs of privacy and instead rely on
experimentally measuring information leakage using classification accuracy on
the sensitive attributes of deep learning (DL)-based discriminators. In this
work, we question the rigor of such checks by subverting existing
privacy-preserving GANs for facial expression recognition. We show that it is
possible to hide the sensitive identification data in the sanitized output
images of such PP-GANs for later extraction, which can even allow for
reconstruction of the entire input images, while satisfying privacy checks. We
demonstrate our approach via a PP-GAN-based architecture and provide
qualitative and quantitative evaluations using two public datasets. Our
experimental results underscore the need for more rigorous privacy checks of
PP-GANs, and we provide insights into the social impact of such attacks.
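The hide-and-extract attack described in the abstract can be loosely illustrated with classic least-significant-bit steganography. This is only a minimal sketch of the general idea of smuggling a secret payload through an innocuous-looking image; the paper's actual attack embeds the secret through the GAN itself rather than by direct bit manipulation, and the function names below are illustrative, not from the paper.

```python
import numpy as np

def embed_lsb(sanitized: np.ndarray, secret_bits: np.ndarray) -> np.ndarray:
    """Hide secret bits in the least-significant bits of a uint8 image."""
    flat = sanitized.flatten()  # flatten() returns a copy
    flat[: secret_bits.size] = (flat[: secret_bits.size] & 0xFE) | secret_bits
    return flat.reshape(sanitized.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the hidden bits from the image's LSB plane."""
    return stego.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in "sanitized" image
secret = rng.integers(0, 2, size=32, dtype=np.uint8)     # stand-in identity payload

stego = embed_lsb(img, secret)
# The payload survives, and no pixel changes by more than 1,
# so a visual or accuracy-based privacy check would not notice.
assert np.array_equal(extract_lsb(stego, secret.size), secret)
assert np.max(np.abs(stego.astype(int) - img.astype(int))) <= 1
```

The point of the sketch is the same as the paper's: a checker that only measures classifier accuracy on the visible image content cannot rule out a hidden channel in the pixels themselves.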
Related papers
- SHAN: Object-Level Privacy Detection via Inference on Scene Heterogeneous Graph [5.050631286347773]
Privacy object detection aims to accurately locate private objects in images.
Existing methods suffer from serious deficiencies in accuracy, generalization, and interpretability.
We propose SHAN (Scene Heterogeneous graph Attention Network), a model that constructs a scene heterogeneous graph from an image.
arXiv Detail & Related papers (2024-03-14T08:32:14Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images while, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- PrivacyProber: Assessment and Detection of Soft-Biometric Privacy-Enhancing Techniques [1.790445868185437]
We study the robustness of several state-of-the-art soft-biometric privacy-enhancing techniques to attribute recovery attempts.
We propose PrivacyProber, a high-level framework for restoring soft-biometric information from privacy-enhanced facial images.
arXiv Detail & Related papers (2022-11-16T12:20:18Z)
- Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain [77.8858706250075]
This paper proposes a privacy-preserving face recognition method using differential privacy in the frequency domain.
Our method performs very well with several classical face recognition test sets.
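The general idea behind differential privacy in the frequency domain can be sketched as follows. This is not the paper's learnable-budget method; it is a generic illustration, under the assumption that Laplace noise calibrated to a sensitivity/epsilon ratio is added to 2D DCT coefficients before inverting back to pixels.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0] /= np.sqrt(2)
    return m

def dp_frequency_perturb(img: np.ndarray, eps: float,
                         sensitivity: float = 1.0,
                         seed: int = 0) -> np.ndarray:
    """Add Laplace noise (scale = sensitivity / eps) to the 2D DCT
    coefficients of an image, then invert back to the pixel domain."""
    rng = np.random.default_rng(seed)
    d = dct_matrix(img.shape[0])
    e = dct_matrix(img.shape[1])
    coeffs = d @ img @ e.T                                  # forward 2D DCT
    noisy = coeffs + rng.laplace(scale=sensitivity / eps,
                                 size=coeffs.shape)         # Laplace mechanism
    return d.T @ noisy @ e                                  # inverse transform
```

Because the DCT basis here is orthonormal, the transform is exactly invertible, so all distortion comes from the injected noise; a smaller epsilon (stronger privacy) means a larger noise scale and a more degraded image.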
arXiv Detail & Related papers (2022-07-15T07:15:36Z)
- Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
- Unsupervised Enhancement of Soft-biometric Privacy with Negative Face Recognition [13.555831336280407]
We present Negative Face Recognition (NFR), a novel face recognition approach that enhances the soft-biometric privacy on the template-level.
Our approach does not require privacy-sensitive labels and offers more comprehensive privacy protection that is not limited to pre-defined attributes.
arXiv Detail & Related papers (2020-02-21T08:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.