Hiding Visual Information via Obfuscating Adversarial Perturbations
- URL: http://arxiv.org/abs/2209.15304v4
- Date: Mon, 28 Aug 2023 03:16:50 GMT
- Title: Hiding Visual Information via Obfuscating Adversarial Perturbations
- Authors: Zhigang Su and Dawei Zhou and Nannan Wang and Decheng Liu and Zhen Wang and Xinbo Gao
- Abstract summary: We propose an adversarial visual information hiding method to protect the visual privacy of data.
Specifically, the method generates obfuscating adversarial perturbations to obscure the visual information of the data.
Experimental results on the recognition and classification tasks demonstrate that the proposed method can effectively hide visual information.
- Score: 47.315523613407244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Growing leakage and misuse of visual information raise security and
privacy concerns, which has spurred the development of information protection
techniques. Existing adversarial perturbation-based methods mainly focus on
de-identification against deep learning models, but the inherent visual
information of the data itself has not been well protected. In this work,
inspired by the Type-I adversarial attack, we propose an adversarial visual
information hiding method to protect the visual privacy of data. Specifically,
the method generates obfuscating adversarial perturbations that obscure the
visual information of the data while keeping the hidden objectives correctly
predicted by models. In addition, our method does not modify the parameters of
the applied model, which makes it flexible across different scenarios.
Experimental results on recognition and classification tasks demonstrate that
the proposed method effectively hides visual information while hardly
affecting model performance. The code is available in the supplementary
material.
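The abstract describes the mechanism only at a high level. The following PyTorch sketch shows one plausible reading of a Type-I-style obfuscating perturbation: pixels are pushed away from the original image while the model's output is constrained to stay close to its original prediction. The loss form, weights, and step counts are my assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def obfuscate(model, x, steps=200, lr=0.01, lam=1.0):
    """Optimize a perturbation that hides visual content (pixels move far
    from the original) while the model's output stays near its original
    prediction, in the spirit of a Type-I adversarial attack (sketch)."""
    model.eval()
    with torch.no_grad():
        y_ref = model(x)                        # output to be preserved
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)     # keep a valid image
        keep = F.mse_loss(model(x_adv), y_ref)  # prediction-preservation term
        hide = F.mse_loss(x_adv, x)             # visual-difference term
        loss = keep - lam * hide                # preserve output, obscure pixels
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach().clamp(0.0, 1.0)
```

Because `x_adv` is clamped to the valid image range, the hiding term stays bounded; the paper's actual objective and constraints may differ from this sketch.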
Related papers
- Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning [54.30994558765057]
The study pioneers a comprehensive privacy protection framework that safeguards image data privacy concurrently during data sharing and model publication.
We propose an interactive image privacy protection framework that utilizes generative machine learning models to modify image information at the attribute level.
Within this framework, we instantiate two modules: a differential privacy diffusion model for protecting attribute information in images and a feature unlearning algorithm for efficient updates of the trained model on the revised image dataset.
arXiv Detail & Related papers (2024-09-05T07:55:55Z)
- Footprints of Data in a Classifier Model: The Privacy Issues and Their Mitigation through Data Obfuscation [0.9208007322096533]
The embedding of footprints of training data in a prediction model is one such facet.
The difference in performance between training and test data enables passive identification of the samples that trained the model, as sketched below.
This research focuses on addressing the vulnerability arising from these data footprints.
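For concreteness, here is a minimal loss-threshold membership inference sketch of that passive identification; it is my illustration of the attack this paper defends against, and the threshold value is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def flag_likely_members(model, xs, ys, threshold=0.5):
    """Flag samples whose per-sample loss is unusually low as likely
    members of the training set (exploits the train/test loss gap)."""
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(xs), ys, reduction="none")
    return losses < threshold  # True = probably seen during training
```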
arXiv Detail & Related papers (2024-07-02T13:56:37Z)
- Deep Variational Privacy Funnel: General Modeling with Applications in Face Recognition [3.351714665243138]
We develop a method for privacy-preserving representation learning using an end-to-end training framework; the underlying privacy funnel objective is recalled below.
We apply our model to state-of-the-art face recognition systems.
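For reference, the classical privacy funnel objective from the information-theory literature ($X$ is the private attribute, $Y$ the observed data, $Z$ the released representation, and $R$ a required utility level; the notation is mine, not the paper's):

```latex
\min_{p(z \mid y)} \; I(X; Z) \quad \text{subject to} \quad I(Y; Z) \ge R
```

The deep variational variant presumably replaces the mutual-information terms with neural variational bounds so that the trade-off can be trained end to end.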
arXiv Detail & Related papers (2024-01-26T11:32:53Z)
- $\alpha$-Mutual Information: A Tunable Privacy Measure for Privacy Protection in Data Sharing [4.475091558538915]
This paper adopts Arimoto's $\alpha$-Mutual Information as a tunable privacy measure; the standard definition is recalled below.
We formulate a general distortion-based mechanism that manipulates the original data to offer privacy protection.
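For discrete $X$ and $Y$, Arimoto's $\alpha$-mutual information (standard definition from the literature, not quoted from the paper) combines the Rényi entropy with Arimoto's conditional entropy:

```latex
H_\alpha(X) = \frac{1}{1-\alpha} \log \sum_x p_X(x)^\alpha, \qquad
H_\alpha^{A}(X \mid Y) = \frac{\alpha}{1-\alpha} \log \sum_y
    \Bigl( \sum_x p_{X,Y}(x,y)^\alpha \Bigr)^{1/\alpha},

I_\alpha^{A}(X;Y) = H_\alpha(X) - H_\alpha^{A}(X \mid Y).
```

As $\alpha \to 1$ this recovers Shannon mutual information, and larger $\alpha$ weights worst-case leakage more heavily, which is what makes the measure tunable.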
arXiv Detail & Related papers (2023-10-27T16:26:14Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding; a generic form of such an energy-guided denoising step is sketched below.
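The following shows only the generic energy-guidance pattern that sentence alludes to, not Diff-Privacy's actual modules; `denoiser` and `energy_fn` are assumed callables standing in for the trained diffusion model and the paper's energy functions.

```python
import torch

def energy_guided_step(x_t, t, denoiser, energy_fn, scale=1.0):
    """One denoising step nudged by the gradient of an energy function
    (e.g., an identity-similarity penalty), classifier-guidance style."""
    x_t = x_t.detach().requires_grad_(True)
    energy = energy_fn(x_t, t).sum()          # scalar energy to descend
    grad, = torch.autograd.grad(energy, x_t)
    return (denoiser(x_t, t) - scale * grad).detach()
```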
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, unsupervised learning on diffusion-generated images remains inadequately explored.
We introduce customized solutions that fully exploit the aforementioned free attention masks.
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- A Review on Visual Privacy Preservation Techniques for Active and Assisted Living [0.0]
A novel taxonomy with which state-of-the-art visual privacy protection methods can be classified is introduced.
Perceptual obfuscation methods, a category in the taxonomy, are highlighted.
Obfuscation against machine learning models is also explored.
arXiv Detail & Related papers (2021-12-17T10:37:30Z)
- Conditional Contrastive Learning: Removing Undesirable Information in Self-Supervised Representations [108.29288034509305]
We develop conditional contrastive learning to remove undesirable information in self-supervised representations; the conditioning idea is sketched below.
We demonstrate empirically that our methods can successfully learn self-supervised representations for downstream tasks.
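One common way to realize this (my illustration of the conditioning idea, not the paper's code) is an InfoNCE loss whose negatives are restricted to samples sharing the anchor's undesirable attribute, so that attribute can no longer help distinguish positives from negatives:

```python
import torch
import torch.nn.functional as F

def conditional_info_nce(z_a, z_pos, z_bank, attr_bank, attr_a, tau=0.1):
    """InfoNCE with negatives drawn only from bank samples whose undesirable
    attribute matches the anchor's, removing that attribute as a shortcut."""
    z_a, z_pos, z_bank = (F.normalize(z, dim=-1) for z in (z_a, z_pos, z_bank))
    pos = (z_a * z_pos).sum(-1, keepdim=True) / tau       # (B, 1) positive logit
    neg = (z_a @ z_bank.T) / tau                          # (B, N) candidate negatives
    same = attr_bank.unsqueeze(0) == attr_a.unsqueeze(1)  # same-attribute mask
    neg = neg.masked_fill(~same, float("-inf"))           # drop cross-attribute negatives
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(z_a.size(0), dtype=torch.long, device=z_a.device)
    return F.cross_entropy(logits, labels)                # positive is class 0
```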
arXiv Detail & Related papers (2021-06-05T10:51:26Z)
- Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model; a minimal version of this loop is sketched below.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
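For orientation, a standard PGD-based adversarial training step (the baseline recipe, not this paper's stylized variant; the epsilon, step size, and step count are conventional assumed values):

```python
import torch
import torch.nn.functional as F

def adv_train_step(model, opt, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft an L-infinity PGD perturbation, then train on the perturbed batch."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):                    # inner maximization (attack)
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    opt.zero_grad()                           # outer minimization (training)
    F.cross_entropy(model((x + delta.detach()).clamp(0, 1)), y).backward()
    opt.step()
```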
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
- Reinforcement learning for the privacy preservation and manipulation of eye tracking data [12.486057928762898]
We present an approach based on reinforcement learning for eye tracking data manipulation.
We show that our approach can successfully preserve the privacy of the subjects.
arXiv Detail & Related papers (2020-02-17T07:02:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.