A Comparative Study of Image Disguising Methods for Confidential
Outsourced Learning
- URL: http://arxiv.org/abs/2301.00252v1
- Date: Sat, 31 Dec 2022 16:59:54 GMT
- Title: A Comparative Study of Image Disguising Methods for Confidential
Outsourced Learning
- Authors: Sagar Sharma and Yuechun Gu and Keke Chen
- Abstract summary: We study and compare novel image disguising mechanisms, DisguisedNets and InstaHide.
DisguisedNets are novel combinations of image blocktization, block-level random permutation, and two block-level secure transformations.
InstaHide is an image mixup and random pixel flipping technique.
We have analyzed and evaluated them under a multi-level threat model.
- Score: 5.73658856166614
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large training data and expensive model tweaking are standard features of
deep learning for images. As a result, data owners often utilize cloud
resources to develop large-scale complex models, which raises privacy concerns.
Existing solutions are either too expensive to be practical or do not
sufficiently protect the confidentiality of data and models. In this paper, we
study and compare novel \emph{image disguising} mechanisms, DisguisedNets and
InstaHide, aiming to achieve a better trade-off among the level of protection
for outsourced DNN model training, the expenses, and the utility of data.
DisguisedNets are novel combinations of image blocktization, block-level random
permutation, and two block-level secure transformations: random
multidimensional projection (RMT) and AES pixel-level encryption (AES).
InstaHide is an image mixup and random pixel flipping technique (Huang et al., 2020).
We have analyzed and evaluated them under a multi-level threat model. RMT
provides a better security guarantee than InstaHide, under the Level-1
adversarial knowledge with well-preserved model quality. In contrast, AES
provides a security guarantee under the Level-2 adversarial knowledge, but it
may affect model quality more. The unique features of image disguising also
help us to protect models from model-targeted attacks. We have done an
extensive experimental evaluation to understand how these methods work in
different settings for different datasets.
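The two disguising mechanisms described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`blocktize`, `disguise_rmt`, `instahide_mixup`), the block size, and the use of a seeded RNG as the secret key are all simplifying assumptions, and only single-channel images are handled.

```python
import numpy as np

def blocktize(img, b):
    """Split an HxW image into a list of bxb blocks (H and W divisible by b)."""
    h, w = img.shape
    return [img[i:i + b, j:j + b] for i in range(0, h, b) for j in range(0, w, b)]

def disguise_rmt(img, b, key_rng):
    """DisguisedNets-style RMT sketch: blocktize the image, permute the
    blocks with a secret permutation, then apply a secret random
    multidimensional projection to each flattened block."""
    blocks = blocktize(img, b)
    perm = key_rng.permutation(len(blocks))        # secret block permutation
    proj = key_rng.standard_normal((b * b, b * b))  # secret projection matrix
    return np.stack([proj @ blocks[p].reshape(-1) for p in perm])

def instahide_mixup(imgs, lambdas, flip_rng):
    """InstaHide sketch: mix k images with given weights, then flip the
    sign of each pixel independently with probability 1/2."""
    mixed = sum(lam * x for lam, x in zip(lambdas, imgs))
    signs = flip_rng.choice([-1.0, 1.0], size=mixed.shape)
    return signs * mixed
```

In both cases the RNG state plays the role of the secret key: without it, an adversary sees only permuted, projected blocks (RMT) or a sign-flipped mixture (InstaHide).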
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Gradient Inversion of Federated Diffusion Models [4.1355611383748005]
Diffusion models are becoming the de facto standard for generative modeling, producing exceptionally high-resolution image data.
In this paper, we study the privacy risk of gradient inversion attacks.
We propose GIDM+, a triple-optimization scheme that coordinates the optimization of the unknown data.
arXiv Detail & Related papers (2024-05-30T18:00:03Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy to bolster image classification performance is through augmenting the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Latent Diffusion Models for Attribute-Preserving Image Anonymization [4.080920304681247]
This paper presents the first approach to image anonymization based on Latent Diffusion Models (LDMs).
We propose two LDMs for this purpose: CAFLaGE-Base exploits a combination of pre-trained ControlNets, and a new controlling mechanism designed to increase the distance between the real and anonymized images.
arXiv Detail & Related papers (2024-03-21T19:09:21Z)
- Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation [44.2692621807947]
We develop a framework to generate adversarial privacy-preserving images that have minimum perceptual difference from the clean ones but are able to attack deep learning models.
To the best of our knowledge, this is the first work exploring quality-preserving adversarial image generation based on the minimum noticeable difference (MND) concept for privacy preservation.
arXiv Detail & Related papers (2022-06-17T09:02:12Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images, and achieves comparable robustness to the standard adversarial training against Lp attacks.
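Standard Lp-bounded adversarial training, which DMAT extends with perturbations in a learned latent space, can be sketched for a toy linear logistic model. This is a simplified stand-in under stated assumptions, not the paper's method: the paper trains deep networks, and all names here are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step L_inf attack on a linear logistic model: move x by eps
    in the direction of the sign of the input gradient of the loss."""
    grad_x = -y * sigmoid(-y * (w @ x)) * w
    return x + eps * np.sign(grad_x)

def adversarial_train(X, Y, eps=0.1, lr=0.1, epochs=50, seed=0):
    """Logistic regression trained on FGSM-perturbed inputs (plain
    image-space adversarial training; DMAT additionally crafts
    perturbations in a latent space, which is omitted here)."""
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.standard_normal(X.shape[1])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            x_adv = fgsm(x, y, w, eps)                       # inner maximization
            grad_w = -y * sigmoid(-y * (w @ x_adv)) * x_adv  # outer minimization
            w -= lr * grad_w
    return w
```

The inner step maximizes the loss within an eps-ball in the L_inf norm; the outer step minimizes the loss on those worst-case inputs, which is the min-max structure shared by all such defenses.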
arXiv Detail & Related papers (2020-09-05T06:00:28Z)
- Large-Scale Secure XGB for Vertical Federated Learning [15.864654742542246]
In this paper, we aim to build large-scale secure XGB under a vertical federated learning setting.
We employ secure multi-party computation techniques to avoid leaking intermediate information during training.
By proposing secure permutation protocols, we improve training efficiency and make the framework scale to large datasets.
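One basic building block of the secure multi-party computation techniques mentioned above is additive secret sharing, sketched here in toy form. This is illustrative only, not the paper's protocol; the modulus and function names are assumptions.

```python
import secrets

# Field modulus (an illustrative choice; real protocols fix this by design).
P = 2**61 - 1

def share(x, n=3):
    """Split integer x into n additive shares that sum to x mod P.
    Any n-1 shares on their own are uniformly random."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P

def add_shared(a, b):
    """Each party adds its own shares locally: the result is a sharing
    of (x + y) without any party learning x or y."""
    return [(sa + sb) % P for sa, sb in zip(a, b)]
```

Because addition can be done on shares locally, intermediate sums (e.g. gradient statistics in secure XGB) never appear in the clear at any single party.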
arXiv Detail & Related papers (2020-05-18T06:31:10Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.