Deep Variational Privacy Funnel: General Modeling with Applications in
Face Recognition
- URL: http://arxiv.org/abs/2401.14792v1
- Date: Fri, 26 Jan 2024 11:32:53 GMT
- Title: Deep Variational Privacy Funnel: General Modeling with Applications in
Face Recognition
- Authors: Behrooz Razeghi, Parsa Rahimi, Sébastien Marcel
- Abstract summary: We develop a method for privacy-preserving representation learning using an end-to-end training framework.
We apply our model to state-of-the-art face recognition systems.
- Score: 3.351714665243138
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we harness the information-theoretic Privacy Funnel (PF) model
to develop a method for privacy-preserving representation learning using an
end-to-end training framework. We rigorously address the trade-off between
obfuscation and utility. Both are quantified through the logarithmic loss, a
measure also recognized as self-information loss. This exploration deepens the
interplay between information-theoretic privacy and representation learning,
offering substantive insights into data protection mechanisms for both
discriminative and generative models. Importantly, we apply our model to
state-of-the-art face recognition systems. The model demonstrates adaptability
across diverse inputs, from raw facial images to derived or refined
embeddings, and is competent in tasks such as classification, reconstruction,
and generation.
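As a rough illustration of this end-to-end framework, the sketch below pairs a utility log-loss with an adversarial log-loss on a sensitive attribute. The module shapes, the trade-off weight beta, and the alternating update scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal PyTorch sketch of a Privacy Funnel-style objective: utility and
# obfuscation are both measured with logarithmic (cross-entropy) loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PFModel(nn.Module):
    def __init__(self, dim_x, dim_z, n_utility, n_sensitive):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim_x, 128), nn.ReLU(), nn.Linear(128, dim_z))
        self.utility_head = nn.Linear(dim_z, n_utility)   # models p(y|z)
        self.adversary = nn.Linear(dim_z, n_sensitive)    # models p(s|z)

    def encoder_loss(self, x, y, s, beta=1.0):
        z = self.encoder(x)
        utility = F.cross_entropy(self.utility_head(z), y)  # keep Z useful for y
        leakage = F.cross_entropy(self.adversary(z), s)     # adversary's log-loss on s
        # Preserve utility while *raising* the adversary's loss,
        # i.e. funnelling information about s out of Z.
        return utility - beta * leakage

    def adversary_loss(self, x, s):
        z = self.encoder(x).detach()  # adversary updates leave the encoder fixed
        return F.cross_entropy(self.adversary(z), s)
```

In practice the two losses are minimized in alternation (a min-max game), so the adversary tracks the best attainable inference of the sensitive attribute while the encoder learns to withhold it.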
Related papers
- Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models [23.09033991200197]
New personalization techniques have been proposed to customize the pre-trained base models for crafting images with specific themes or styles.
Such a lightweight solution raises a new concern: whether personalized models are trained on unauthorized data.
We introduce SIREN, a novel methodology to proactively trace unauthorized data usage in black-box personalized text-to-image diffusion models.
arXiv Detail & Related papers (2024-10-14T12:29:23Z) - Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning [54.30994558765057]
The study pioneers a comprehensive privacy protection framework that safeguards image data privacy during both data sharing and model publication.
We propose an interactive image privacy protection framework that utilizes generative machine learning models to modify image information at the attribute level.
Within this framework, we instantiate two modules: a differential privacy diffusion model for protecting attribute information in images and a feature unlearning algorithm for efficient updates of the trained model on the revised image dataset.
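The summary does not spell out the differential privacy mechanism used by the diffusion module; as one generic building block for protecting attribute features, the standard Gaussian mechanism is sketched below (the function and its parameters are assumptions, not the paper's module).

```python
# Standard (epsilon, delta)-DP Gaussian mechanism on a feature vector,
# with the classical noise calibration (valid for epsilon < 1):
#   sigma >= sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon
import numpy as np

def gaussian_mechanism(features, sensitivity, epsilon, delta):
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return features + np.random.normal(0.0, sigma, size=features.shape)
```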
arXiv Detail & Related papers (2024-09-05T07:55:55Z) - Extracting Training Data from Document-Based VQA Models [67.1470112451617]
Vision-Language Models (VLMs) have made remarkable progress in document-based Visual Question Answering (i.e., responding to queries about the contents of an input document provided as an image).
We show these models can memorise responses for training samples and regurgitate them even when the relevant visual information has been removed.
This includes Personally Identifiable Information repeated only once in the training set, indicating that these models could divulge sensitive information and therefore pose a privacy risk.
arXiv Detail & Related papers (2024-07-11T17:44:41Z) - Deep Privacy Funnel Model: From a Discriminative to a Generative Approach with an Application to Face Recognition [20.562833106966405]
We apply the information-theoretic Privacy Funnel (PF) model to the domain of face recognition.
We develop a novel method for privacy-preserving representation learning within an end-to-end training framework.
arXiv Detail & Related papers (2024-04-03T12:50:45Z) - Revisiting Self-supervised Learning of Speech Representation from a
Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech representation methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
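Concretely, a probe's held-out log-loss yields a lower bound on the mutual information, I(Y; Z) >= H(Y) - CE(probe). A minimal sketch of that estimate, assuming integer labels 0..K-1 that appear in both splits (the probe choice and names are illustrative):

```python
# Lower-bound I(Y; Z) with a linear probe: I(Y; Z) >= H(Y) - CE, where CE
# is the probe's held-out cross-entropy (natural log, so units are nats).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def probe_mi_lower_bound(z_train, y_train, z_test, y_test):
    probe = LogisticRegression(max_iter=1000).fit(z_train, y_train)
    ce = log_loss(y_test, probe.predict_proba(z_test))   # CE in nats
    p = np.bincount(y_test) / len(y_test)                # empirical p(y)
    h_y = -np.sum(p[p > 0] * np.log(p[p > 0]))           # H(Y) in nats
    return h_y - ce                                      # lower bound on I(Y; Z)
```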
arXiv Detail & Related papers (2024-01-16T21:13:22Z) - Masked Modeling for Self-supervised Representation Learning on Vision
and Beyond [69.64364187449773]
Masked modeling has emerged as a distinctive approach that involves predicting parts of the original data that are proportionally masked during training.
We elaborate on the details of techniques within masked modeling, including diverse masking strategies, recovering targets, network architectures, and more.
We conclude by discussing the limitations of current techniques and pointing out several potential avenues for advancing masked modeling research.
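The core objective is compact in code. A minimal sketch, assuming tokenized inputs, zero-masking, and a mean-squared reconstruction error (one common choice among the strategies surveyed):

```python
# Minimal masked-modeling objective: hide a random proportion of the
# input tokens and train the model to predict the hidden part.
import torch

def masked_modeling_loss(model, x, mask_ratio=0.75):
    # x: (batch, num_tokens, dim); `model` maps that shape to itself
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
    x_masked = x.masked_fill(mask.unsqueeze(-1), 0.0)  # simple zero-masking
    recon = model(x_masked)
    return ((recon - x) ** 2)[mask].mean()             # loss on masked positions only
```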
arXiv Detail & Related papers (2023-12-31T12:03:21Z) - Segue: Side-information Guided Generative Unlearnable Examples for
Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
The resulting examples resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM-format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
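The embedding scheduling strategies and energy functions are specific to the paper, but the general pattern of steering a denoising step with an energy gradient can be sketched as follows (the `denoiser` and `energy_fn` callables are assumptions for illustration):

```python
# Generic energy-guided denoising step: nudge the sample toward lower
# energy (e.g., weaker similarity to the original identity) while denoising.
import torch

def guided_denoise_step(x_t, denoiser, energy_fn, guidance_scale=1.0):
    x_t = x_t.detach().requires_grad_(True)
    grad = torch.autograd.grad(energy_fn(x_t).sum(), x_t)[0]
    with torch.no_grad():
        x_next = denoiser(x_t)                 # one plain denoising step
        return x_next - guidance_scale * grad  # push toward lower energy
```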
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - Training face verification models from generated face identity data [2.557825816851682]
We consider an approach to increasing the privacy protection of datasets, applied here to face recognition.
We build on the StyleGAN generative adversarial network and feed it with latent codes combining two distinct sub-codes.
We find that the addition of a small amount of private data greatly improves the performance of our model.
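A minimal sketch of the two-sub-code idea, where one sub-code fixes a synthetic identity and the other varies appearance (the real StyleGAN interface differs; names and shapes here are assumptions):

```python
# Generate several views of one synthetic identity by holding the
# identity sub-code fixed and resampling the variation sub-code.
import torch

def sample_identity_batch(generator, id_code, n_views, var_dim):
    views = []
    for _ in range(n_views):
        var_code = torch.randn(1, var_dim)         # per-image variation
        z = torch.cat([id_code, var_code], dim=1)  # combined latent code
        views.append(generator(z))
    return torch.cat(views)
```

Holding id_code fixed while resampling var_code yields many images of the same invented identity, which is what makes the generated set usable for training verification models.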
arXiv Detail & Related papers (2021-08-02T12:00:01Z) - Privacy-Preserving Eye-tracking Using Deep Learning [1.5484595752241124]
In this study, we focus on a deep network model trained on images of individual faces.
We show that this model preserves the integrity of its training data with reasonable confidence.
arXiv Detail & Related papers (2021-06-17T15:58:01Z) - Why Should I Trust a Model is Private? Using Shifts in Model Explanation
for Evaluating Privacy-Preserving Emotion Recognition Model [35.016050900061]
We focus on using interpretable methods to evaluate a model's efficacy in preserving privacy with respect to sensitive variables.
We show how certain commonly-used methods that seek to preserve privacy might not align with human perception of privacy preservation.
We conduct crowdsourcing experiments to evaluate the inclination of the evaluators to choose a particular model for a given task.
arXiv Detail & Related papers (2021-04-18T09:56:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.