Toward Privacy and Utility Preserving Image Representation
- URL: http://arxiv.org/abs/2009.14376v2
- Date: Sat, 17 Oct 2020 16:27:59 GMT
- Title: Toward Privacy and Utility Preserving Image Representation
- Authors: Ahmadreza Mosallanezhad and Yasin N. Silva and Michelle V. Mancenido
and Huan Liu
- Abstract summary: We study the novel problem of creating privacy-preserving image representations with respect to a given utility task.
We propose a principled framework called the Adversarial Image Anonymizer (AIA).
AIA first creates an image representation using a generative model, then enhances the learned image representations using adversarial learning to preserve privacy and utility for a given task.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face images are rich data items that are useful and can easily be collected
in many applications, such as in 1-to-1 face verification tasks in the domain
of security and surveillance systems. Multiple methods have been proposed to
protect an individual's privacy by perturbing the images to remove traces of
identifiable information, such as gender or race. However, significantly less
attention has been given to the problem of protecting images while maintaining
optimal task utility. In this paper, we study the novel problem of creating
privacy-preserving image representations with respect to a given utility task
by proposing a principled framework called the Adversarial Image Anonymizer
(AIA). AIA first creates an image representation using a generative model, then
enhances the learned image representations using adversarial learning to
preserve privacy and utility for a given task. Experiments were conducted on a
publicly available data set to demonstrate the effectiveness of AIA as a
privacy-preserving mechanism for face images.
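The abstract does not spell out AIA's training objective, but adversarial privacy-utility representation learning of this kind is commonly implemented with a gradient reversal layer: a utility head trains normally on the learned representation, while a privacy adversary's gradient is negated before it reaches the encoder, so the representation stays useful for the task yet uninformative about the private attribute. A minimal PyTorch sketch under that assumption (the layer sizes, heads, and labels are illustrative, not AIA's actual architecture):

```python
import torch
import torch.nn as nn

# Gradient reversal: identity in the forward pass, negated gradient in the
# backward pass, so the encoder learns to *hurt* the privacy adversary
# while still helping the utility head.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

# Hypothetical sizes: 64-dim input features, 16-dim learned representation.
encoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU())
utility_head = nn.Linear(16, 2)   # utility task (e.g. expression)
privacy_head = nn.Linear(16, 2)   # private attribute (e.g. identity)

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(utility_head.parameters())
    + list(privacy_head.parameters()), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 64)               # toy batch of image features
y_util = torch.randint(0, 2, (32,))   # utility labels
y_priv = torch.randint(0, 2, (32,))   # private-attribute labels

for _ in range(10):
    z = encoder(x)
    util_loss = loss_fn(utility_head(z), y_util)
    # The adversary sees the representation through the reversal layer.
    priv_loss = loss_fn(privacy_head(GradReverse.apply(z, 1.0)), y_priv)
    opt.zero_grad()
    (util_loss + priv_loss).backward()
    opt.step()
```

The `lam` factor trades utility against privacy; annealing it upward during training is a common way to stabilize the adversarial game.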
Related papers
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form and to generate the privacy-protected image, which visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate the face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- IdentityDP: Differential Private Identification Protection for Face Images [17.33916392050051]
Face de-identification, also known as face anonymization, refers to generating another image with a similar appearance and the same background while hiding the real identity.
We propose IdentityDP, a face anonymization framework that combines a data-driven deep neural network with a differential privacy mechanism.
Our model can effectively obfuscate the identity-related information of faces, preserve significant visual similarity, and generate high-quality images.
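The summary says IdentityDP combines a deep network with a differential privacy mechanism but does not give the mechanism itself; the standard way to make a numeric embedding epsilon-differentially private is the Laplace mechanism. The sketch below applies it to a hypothetical identity embedding (the function name and the choice of where the noise is injected are assumptions, not IdentityDP's actual design):

```python
import numpy as np

def laplace_mechanism(embedding, sensitivity, epsilon, rng=None):
    """Add Laplace(sensitivity / epsilon) noise to each coordinate.

    This is the generic epsilon-DP Laplace mechanism; how IdentityDP
    injects noise into its identity representation is an assumption here.
    """
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon
    return embedding + rng.laplace(loc=0.0, scale=scale, size=embedding.shape)

# Smaller epsilon -> larger noise scale -> stronger identity obfuscation.
emb = np.zeros(8)
noisy = laplace_mechanism(emb, sensitivity=1.0, epsilon=0.5,
                          rng=np.random.default_rng(0))
```

The privacy budget epsilon controls the same utility trade-off the abstract mentions: smaller values hide identity more aggressively at the cost of visual quality.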
arXiv Detail & Related papers (2021-03-02T14:26:00Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
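TIP-IM's exact objective (targeted protection under naturalness constraints) is not detailed in this summary; iterative adversarial masks of this kind are typically built with PGD-style updates under an L-infinity budget. A hedged, untargeted PyTorch sketch (the toy embedder, step sizes, and loss below are illustrative only, not TIP-IM itself):

```python
import torch
import torch.nn as nn

def iterative_identity_mask(face, embed_model, steps=10, eps=8/255, alpha=2/255):
    """PGD-style sketch of an adversarial identity mask.

    Maximizes the embedding distance from the original identity while
    keeping the mask within an L-infinity budget `eps`, so the protected
    image stays visually close to the original.
    """
    with torch.no_grad():
        target = embed_model(face)          # embedding of the true identity
    # Random start inside the budget (gradients vanish at the exact optimum).
    mask = torch.empty_like(face).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        emb = embed_model(face + mask)
        loss = 1 - nn.functional.cosine_similarity(emb, target).mean()
        loss.backward()
        with torch.no_grad():
            mask += alpha * mask.grad.sign()    # ascend the distance
            mask.clamp_(-eps, eps)              # keep the mask imperceptible
            mask.grad.zero_()
    return mask.detach()

# Toy stand-in for a face recognition embedder, for illustration only.
embedder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 16))
face = torch.rand(1, 3, 8, 8)
mask = iterative_identity_mask(face, embedder)
```

In a real protection setting the embedder would be one or more pretrained face recognition models, and the loss would target a specific decoy identity rather than simply repelling the true one.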
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
- Privacy-Preserving Image Classification in the Local Setting [17.375582978294105]
Local Differential Privacy (LDP) offers a promising solution: it allows data owners to randomly perturb their input before release, providing plausible deniability.
In this paper, we consider a two-party image classification problem in which data owners hold the images and an untrusted data user would like to fit a machine learning model with these images as input.
We propose a supervised image feature extractor, DCAConv, which produces an image representation with scalable domain size.
arXiv Detail & Related papers (2020-02-09T01:25:52Z)
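The LDP perturbation that DCAConv relies on is not specified in the summary; the classic local mechanism for a single private bit is randomized response, which illustrates the "perturb before release" idea: report the true bit with probability e^eps / (e^eps + 1), otherwise flip it, and let the collector debias the aggregate. A small sketch (the function names are illustrative):

```python
import math
import random

def randomized_response(bit, epsilon, rng=random):
    """eps-LDP randomized response for a single private bit.

    The data owner perturbs locally, before release, so the collector
    never observes the raw value yet can still estimate aggregates.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p else 1 - bit

def debias_mean(reported, epsilon):
    """Unbiased estimate of the true frequency of 1s from noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reported) / len(reported)
    # E[observed] = f*(2p - 1) + (1 - p), solved for the true frequency f.
    return (observed - (1 - p)) / (2 * p - 1)

rng = random.Random(0)
true_bits = [1] * 300 + [0] * 700        # true frequency of 1s: 0.30
reports = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
estimate = debias_mean(reports, epsilon=1.0)
```

Each individual report is deniable (any single bit could have been flipped), while the debiased mean converges to the true frequency as the number of owners grows.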
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.