A Comparative Study of Image Disguising Methods for Confidential
Outsourced Learning
- URL: http://arxiv.org/abs/2301.00252v1
- Date: Sat, 31 Dec 2022 16:59:54 GMT
- Title: A Comparative Study of Image Disguising Methods for Confidential
Outsourced Learning
- Authors: Sagar Sharma and Yuechun Gu and Keke Chen
- Abstract summary: We study and compare novel emphimage disguising mechanisms, DisguisedNets and InstaHide.
DisguisedNets are novel combinations of image blocktization, block-level random permutation, and two block-level secure transformations.
InstaHide is an image mixup and random pixel flipping technique.
We have analyzed and evaluated them under a multi-level threat model.
- Score: 5.73658856166614
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large training data and expensive model tweaking are standard features of
deep learning for images. As a result, data owners often utilize cloud
resources to develop large-scale complex models, which raises privacy concerns.
Existing solutions are either too expensive to be practical or do not
sufficiently protect the confidentiality of data and models. In this paper, we
study and compare novel image disguising mechanisms, DisguisedNets and
InstaHide, aiming to achieve a better trade-off among the level of protection
for outsourced DNN model training, the expenses, and the utility of data.
DisguisedNets are novel combinations of image blocktization, block-level random
permutation, and two block-level secure transformations: random
multidimensional projection (RMT) and AES pixel-level encryption (AES).
InstaHide is an image mixup and random pixel-flipping technique (Huang et al., 2020).
We have analyzed and evaluated them under a multi-level threat model. RMT
provides a stronger security guarantee than InstaHide under Level-1
adversarial knowledge while preserving model quality well. In contrast, AES
provides a security guarantee under Level-2 adversarial knowledge, but it
may degrade model quality more. The unique features of image disguising also
help us to protect models from model-targeted attacks. We have done an
extensive experimental evaluation to understand how these methods work in
different settings for different datasets.
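As a rough illustration of the mechanisms the abstract describes, the sketch below (plain NumPy, with illustrative function names, block size, and toy key handling that are not taken from the paper's implementation) shows a DisguisedNets-style pipeline of blocktization, secret block-level permutation, and random multidimensional projection (RMT), alongside an InstaHide-style encoding that mixes several images and randomly flips pixel signs. The AES pixel-level encryption variant is omitted.

```python
# Minimal sketch of the two disguising ideas described above.
# Assumptions (illustrative, not from the paper's code): square grayscale
# images, a fixed block grid, and toy key handling via seeded RNGs.
import numpy as np

def blocktize(img, block):
    """Split an HxW image into (block x block) tiles, row-major."""
    h, w = img.shape
    return [img[r:r + block, c:c + block]
            for r in range(0, h, block)
            for c in range(0, w, block)]

def assemble(blocks, h, w, block):
    """Inverse of blocktize: stitch tiles back into an HxW array."""
    out = np.zeros((h, w), dtype=float)
    i = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r + block, c:c + block] = blocks[i]
            i += 1
    return out

def disguise_rmt(img, key_rng, block=8):
    """DisguisedNets-style disguise (sketch): blocktize the image, apply a
    secret block-level permutation, then a secret random multidimensional
    projection (RMT) to each flattened block."""
    h, w = img.shape
    blocks = blocktize(img, block)
    perm = key_rng.permutation(len(blocks))        # secret block permutation
    d = block * block
    proj = key_rng.standard_normal((d, d))         # secret projection matrix
    disguised = [(proj @ blocks[p].reshape(d)).reshape(block, block)
                 for p in perm]
    return assemble(disguised, h, w, block)

def instahide(images, labels, key_rng, k=4):
    """InstaHide-style encoding (sketch): mix k images (and their labels)
    with random positive weights, then randomly flip pixel signs."""
    idx = key_rng.choice(len(images), size=k, replace=False)
    lam = key_rng.dirichlet(np.ones(k))            # mixing weights summing to 1
    mixed = sum(l * images[i] for l, i in zip(lam, idx))
    mixed_label = sum(l * labels[i] for l, i in zip(lam, idx))
    signs = key_rng.choice([-1.0, 1.0], size=mixed.shape)   # random pixel flips
    return signs * mixed, mixed_label

# Toy usage: disguise one 32x32 image and encode one mixed training example.
data_rng = np.random.default_rng(0)
img = data_rng.random((32, 32))
disguised = disguise_rmt(img, np.random.default_rng(1))
batch, onehot = data_rng.random((10, 32, 32)), np.eye(10)
enc_img, enc_label = instahide(batch, onehot, np.random.default_rng(2))
```

In the outsourcing workflow the abstract targets, only the disguised or encoded images (and mixed labels) would be sent to the cloud for training, while the permutation, projection matrix, and mixing choices act as the data owner's secret key.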
Related papers
- Elucidating Optimal Reward-Diversity Tradeoffs in Text-to-Image Diffusion Models [20.70550870149442]
We introduce Annealed Importance Guidance (AIG), an inference-time regularization inspired by Annealed Importance Sampling.
Our experiments demonstrate the benefits of AIG for Stable Diffusion models, striking the optimal balance between reward optimization and image diversity.
arXiv Detail & Related papers (2024-09-09T16:27:26Z) - Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning [54.30994558765057]
The study pioneers a comprehensive privacy protection framework that safeguards image data privacy concurrently during data sharing and model publication.
We propose an interactive image privacy protection framework that utilizes generative machine learning models to modify image information at the attribute level.
Within this framework, we instantiate two modules: a differential privacy diffusion model for protecting attribute information in images and a feature unlearning algorithm for efficient updates of the trained model on the revised image dataset.
arXiv Detail & Related papers (2024-09-05T07:55:55Z) - Direct Unlearning Optimization for Robust and Safe Text-to-Image Models [29.866192834825572]
Unlearning techniques have been developed to remove the model's ability to generate potentially harmful content.
These methods are easily bypassed by adversarial attacks, making them unreliable for ensuring the safety of generated images.
We propose Direct Unlearning Optimization (DUO), a novel framework for removing Not Safe For Work (NSFW) content from T2I models.
arXiv Detail & Related papers (2024-07-17T08:19:11Z) - Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Gradient Inversion of Federated Diffusion Models [4.1355611383748005]
Diffusion models are becoming the de facto generative models, generating exceptionally high-resolution image data.
In this paper, we study the privacy risk of gradient inversion attacks.
We propose GIDM+, a triple-optimization method that coordinates the optimization of the unknown data.
arXiv Detail & Related papers (2024-05-30T18:00:03Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy for bolstering image classification performance is to augment the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z) - Latent Diffusion Models for Attribute-Preserving Image Anonymization [4.080920304681247]
This paper presents the first approach to image anonymization based on Latent Diffusion Models (LDMs).
We propose two LDMs for this purpose: CAFLaGE-Base exploits a combination of pre-trained ControlNets and a new controlling mechanism designed to increase the distance between the real and anonymized images.
arXiv Detail & Related papers (2024-03-21T19:09:21Z) - Minimum Noticeable Difference based Adversarial Privacy Preserving Image
Generation [44.2692621807947]
We develop a framework to generate adversarial privacy preserving images that have minimum perceptual difference from the clean ones but are able to attack deep learning models.
To the best of our knowledge, this is the first work exploring quality-preserving adversarial image generation based on the MND concept for privacy preservation.
arXiv Detail & Related papers (2022-06-17T09:02:12Z) - Towards Unsupervised Deep Image Enhancement with Generative Adversarial
Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z) - Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)