Privacy Safe Representation Learning via Frequency Filtering Encoder
- URL: http://arxiv.org/abs/2208.02482v1
- Date: Thu, 4 Aug 2022 06:16:13 GMT
- Title: Privacy Safe Representation Learning via Frequency Filtering Encoder
- Authors: Jonghu Jeong, Minyong Cho, Philipp Benz, Jinwoo Hwang, Jeewook Kim,
Seungkwan Lee, Tae-hoon Kim
- Abstract summary: Adversarial Representation Learning (ARL) is a common approach to train an encoder that runs on the client side and obfuscates an image.
It is assumed that the obfuscated image can safely be transmitted and used for the task on the server without privacy concerns.
We introduce a novel ARL method enhanced through low-pass filtering, limiting the amount of information that can be encoded in the frequency domain.
- Score: 7.792424517008007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models are increasingly deployed in real-world applications.
These models are often deployed on the server-side and receive user data in an
information-rich representation to solve a specific task, such as image
classification. Since images can contain sensitive information, which users
might not be willing to share, privacy protection becomes increasingly
important. Adversarial Representation Learning (ARL) is a common approach to
train an encoder that runs on the client side and obfuscates an image. It is
assumed that the obfuscated image can safely be transmitted and used for the
task on the server without privacy concerns. However, in this work, we find
that a trained reconstruction attacker can successfully recover the original
image from the representations produced by existing ARL methods. To address
this, we introduce a novel ARL method enhanced through low-pass filtering,
limiting the amount of information that can be encoded in the frequency
domain. Our experimental results reveal that our approach withstands
reconstruction attacks while outperforming previous state-of-the-art methods
on the privacy-utility trade-off. We further conduct a user study to
qualitatively assess our defense against the reconstruction attack.
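
The paper's central mechanism is easiest to see in isolation: restrict an image to its low-frequency components so that fine-grained, privacy-sensitive detail is never available to be encoded. Below is a minimal sketch of such frequency-domain low-pass filtering; the function name, the circular mask, and the `cutoff_ratio` parameter are illustrative assumptions, and the adversarially trained encoder and task network that surround this step in the actual method are omitted.

```python
import numpy as np

def low_pass_filter(image: np.ndarray, cutoff_ratio: float = 0.25) -> np.ndarray:
    """Keep only the low-frequency content of a grayscale image (H, W).

    cutoff_ratio is a hypothetical knob: the fraction of the spectrum
    radius retained around the DC component. Smaller values discard
    more detail, trading task utility for privacy.
    """
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # move DC to the center
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= cutoff_ratio * min(h, w) / 2      # circular low-pass mask
    filtered = spectrum * mask                       # zero out high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

# Example: downstream components only ever see `blurred`, which
# upper-bounds the detail any reconstruction attack can recover.
image = np.random.rand(224, 224)
blurred = low_pass_filter(image, cutoff_ratio=0.25)
```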
Related papers
- Exploring User-level Gradient Inversion with a Diffusion Prior [17.2657358645072]
We propose a novel gradient inversion attack that applies a denoising diffusion model as a strong image prior, enhancing recovery in the large-batch setting.
Unlike traditional attacks, which aim to reconstruct individual samples and struggle at large batch and image sizes, our approach instead aims to recover a representative image that captures the sensitive shared semantic information corresponding to the underlying user.
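
For context, gradient inversion attacks of this family typically optimize a dummy input until its gradients match those shared by the client, with an image prior steering the result toward natural images. The sketch below is a generic DLG-style gradient-matching loop, not the authors' code; in particular, `prior_penalty` is a hypothetical stand-in for the denoising-diffusion prior mentioned in the summary.

```python
import torch

def gradient_inversion(model, target_grads, x_shape, y, prior_penalty,
                       steps=300, lr=0.1, lam=0.01):
    """Minimal gradient-matching attack sketch.

    target_grads:  gradients observed from the victim client.
    prior_penalty: image-prior regularizer returning a scalar, e.g. a
                   score from a diffusion model (assumed interface).
    """
    x = torch.randn(x_shape, requires_grad=True)  # dummy input to optimize
    opt = torch.optim.Adam([x], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        grads = torch.autograd.grad(loss_fn(model(x), y),
                                    model.parameters(), create_graph=True)
        # Match the dummy input's gradients to the observed client gradients...
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        # ...while keeping the dummy input on the natural-image manifold.
        loss = match + lam * prior_penalty(x)
        loss.backward()
        opt.step()
    return x.detach()
```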
arXiv Detail & Related papers (2024-09-11T14:20:47Z)
- Unveiling Hidden Visual Information: A Reconstruction Attack Against Adversarial Visual Information Hiding [6.649753747542211]
A representative image encryption method is adversarial visual information hiding (AVIH).
In the AVIH method, the type-I adversarial example approach creates images that appear completely different but are still recognized by machines as the original ones.
We introduce a dual-strategy DR attack against the AVIH encryption method by incorporating (1) a generative-adversarial loss and (2) an augmented identity loss.
arXiv Detail & Related papers (2024-08-08T06:58:48Z)
- Attack GAN (AGAN): A new Security Evaluation Tool for Perceptual Encryption [1.6385815610837167]
Training state-of-the-art (SOTA) deep learning models requires a large amount of data.
Perceptual encryption converts images into an unrecognizable format to protect the sensitive visual information in the training data.
This comes at the cost of a significant reduction in the accuracy of the models.
Adversarial Visual Information Hiding (AVIH) overcomes this drawback to protect image privacy by attempting to create encrypted images that are unrecognizable to the human eye.
arXiv Detail & Related papers (2024-07-09T06:03:32Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
This paper proposes a novel federated face forgery detection method with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning [14.110303634976272]
Split Learning (SL) is a distributed learning framework renowned for its privacy-preserving features and minimal computational requirements.
Previous research consistently highlights potential privacy breaches in SL systems, where server-side adversaries reconstruct training data.
This paper introduces a new semi-honest Data Reconstruction Attack on SL, named Feature-Oriented Reconstruction Attack (FORA).
arXiv Detail & Related papers (2024-05-07T08:38:35Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Recoverable Privacy-Preserving Image Classification through Noise-like Adversarial Examples [26.026171363346975]
Cloud-based image-related services, such as classification, have become crucial.
In this study, we propose a novel privacy-preserving image classification scheme.
Encrypted images can be decrypted back into their original form with high fidelity (recoverable) using a secret key.
arXiv Detail & Related papers (2023-10-19T13:01:58Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst, crucially, better preserving the facial attributes.
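
Such a procedure can be pictured as a small optimization loop over the GAN's latent code: push the generated face's identity embedding away from the original while pinning its predicted attributes. The sketch below assumes generic interfaces for the generator and the identity/attribute networks (`G`, `id_embed`, and `attr_net` are hypothetical); the paper's exact losses and latent space may differ.

```python
import torch

def anonymize_latent(w_init, x_orig, G, id_embed, attr_net,
                     steps=200, lr=0.05, lam=1.0):
    """Sketch of latent-code anonymization (assumed interfaces).

    G:        pretrained generator mapping latent code w -> image
    id_embed: face-identity embedding network
    attr_net: facial-attribute predictor
    """
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    id_orig = id_embed(x_orig).detach()
    attrs_orig = attr_net(x_orig).detach()
    for _ in range(steps):
        x = G(w)
        # Push the generated identity away from the original...
        id_sim = torch.cosine_similarity(id_embed(x), id_orig, dim=-1).mean()
        # ...while keeping the predicted facial attributes close.
        attr_loss = torch.nn.functional.mse_loss(attr_net(x), attrs_orig)
        loss = id_sim + lam * attr_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(w).detach()
```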
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
- Background Adaptive Faster R-CNN for Semi-Supervised Convolutional Object Detection of Threats in X-Ray Images [64.39996451133268]
We present a semi-supervised approach for threat recognition which we call Background Adaptive Faster R-CNN.
This approach is a training method for two-stage object detectors which uses Domain Adaptation methods from the field of deep learning.
Two domain discriminators, one for discriminating object proposals and one for image features, are adversarially trained to prevent encoding domain-specific information.
This can reduce threat detection false alarm rates by matching the statistics of features extracted from hand-collected backgrounds to real-world data.
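
A common way to realize such adversarially trained discriminators is a gradient reversal layer: features pass through unchanged on the forward pass, while the gradient flowing back into the backbone is negated, so training the discriminator simultaneously pushes the feature extractor toward domain-invariant features. This is the standard construction of Ganin & Lempitsky, offered here as an assumed sketch rather than this paper's exact formulation.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the
    backward pass so the backbone learns to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: domain_logits = discriminator(grad_reverse(features))
# The discriminator minimizes its domain-classification loss, while the
# reversed gradient drives the backbone toward domain-invariant features.
```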
arXiv Detail & Related papers (2020-10-02T21:05:13Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)