A Lightweight Privacy-Preserving Scheme Using Label-based Pixel Block
Mixing for Image Classification in Deep Learning
- URL: http://arxiv.org/abs/2105.08876v1
- Date: Wed, 19 May 2021 01:50:50 GMT
- Title: A Lightweight Privacy-Preserving Scheme Using Label-based Pixel Block
Mixing for Image Classification in Deep Learning
- Authors: Yuexin Xiang, Tiantian Li, Wei Ren, Tianqing Zhu, Kim-Kwang Raymond
Choo
- Abstract summary: We propose a lightweight and efficient approach to preserve image privacy while maintaining the availability of the training set.
We use the mixed training set to train the ResNet50, VGG16, InceptionV3 and DenseNet121 models on the WIKI dataset and the CNBC face dataset.
- Score: 37.33528407329338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To ensure the privacy of sensitive data used in the training of deep learning
models, a number of privacy-preserving methods have been designed by the
research community. However, existing schemes are generally designed to work
with textual data, or are not efficient when a large number of images are used
for training. Hence, in this paper we propose a lightweight and efficient
approach to preserve image privacy while maintaining the availability of the
training set. Specifically, we design the pixel block mixing algorithm for
image classification privacy preservation in deep learning. To evaluate its
utility, we use the mixed training set to train the ResNet50, VGG16,
InceptionV3 and DenseNet121 models on the WIKI dataset and the CNBC face
dataset. Experimental findings on the testing set show that our scheme
preserves image privacy while maintaining the availability of the training set
in the deep learning models. Additionally, the experimental results demonstrate
that we achieve good performance for the VGG16 model on the WIKI dataset and
both ResNet50 and DenseNet121 on the CNBC dataset. The pixel block mixing
algorithm mixes images efficiently, and it is computationally challenging for
attackers to restore the mixed training set to the original training set.
Moreover, data augmentation can be applied to the mixed training set to improve
training effectiveness.
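The abstract describes the pixel block mixing algorithm only at a high level. As a rough illustration, a label-based block mix between two same-label images could look like the following NumPy sketch; the block size, the random swap pattern, and treating the seed as a mixing key are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def mix_same_label_pair(img_a, img_b, block=32, seed=0):
    """Swap a random half of the corresponding pixel blocks between two
    images that share a class label (illustrative sketch, not the
    paper's exact algorithm).

    img_a, img_b: HxWxC uint8 arrays of identical shape.
    Returns two mixed images whose labels are unchanged.
    """
    assert img_a.shape == img_b.shape
    h, w = img_a.shape[:2]
    out_a, out_b = img_a.copy(), img_b.copy()
    rng = np.random.default_rng(seed)  # the seed plays the role of a mixing key
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            if rng.random() < 0.5:  # swap roughly half of the blocks
                tmp = out_a[y:y+block, x:x+block].copy()
                out_a[y:y+block, x:x+block] = out_b[y:y+block, x:x+block]
                out_b[y:y+block, x:x+block] = tmp
    return out_a, out_b
```

Because blocks move only between images sharing a label, the mixed set keeps its class semantics for training, while restoring the originals requires undoing a combinatorially large block assignment.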
Related papers
- Image edge enhancement for effective image classification [7.470763273994321]
We propose an edge enhancement-based method to enhance both accuracy and training speed of neural networks.
Our approach involves extracting high-frequency features, such as edges, from images in the available dataset and fusing them with the original images.
arXiv Detail & Related papers (2024-01-13T10:01:34Z)
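As a minimal sketch of the edge-fusion idea in the entry above, assuming an unsharp-mask-style decomposition (the Gaussian sigma and fusion weight alpha are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_enhance(img, sigma=1.5, alpha=0.5):
    """Extract high-frequency detail (edges) and fuse it back into the image.

    img: HxW grayscale float array in [0, 1]; alpha weights the edge component.
    """
    img = img.astype(np.float64)
    high_freq = img - gaussian_filter(img, sigma)  # high-pass residual
    return np.clip(img + alpha * high_freq, 0.0, 1.0)
```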
- Effective pruning of web-scale datasets based on complexity of concept clusters [48.125618324485195]
We present a method for pruning large-scale multimodal datasets for training CLIP-style models on ImageNet.
We find that training on a smaller set of high-quality data can lead to higher performance with significantly lower training costs.
We achieve a new state-of-the-art ImageNet zero-shot accuracy and a competitive average zero-shot accuracy on 38 evaluation tasks.
arXiv Detail & Related papers (2024-01-09T14:32:24Z)
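The summary leaves the pruning criterion abstract. One plausible reading, sketched below, clusters example embeddings and gives a larger share of the keep-budget to higher-dispersion ("more complex") clusters; the dispersion score and per-cluster quota rule are assumptions, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

def prune_by_cluster_complexity(embeddings, keep_frac=0.3, k=50, seed=0):
    """Keep a subset of examples, sampling more from dispersed clusters."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(embeddings)
    budget = int(keep_frac * len(embeddings))
    # Dispersion = mean distance of cluster members to their centroid.
    disp = np.array([
        np.linalg.norm(embeddings[km.labels_ == c] - km.cluster_centers_[c],
                       axis=1).mean()
        for c in range(k)
    ])
    quotas = np.maximum(1, (budget * disp / disp.sum()).astype(int))
    rng = np.random.default_rng(seed)
    kept = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        kept.extend(rng.choice(members, size=min(quotas[c], len(members)),
                               replace=False))
    return np.array(kept)
```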
- CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts model performance, with a 10-34% relative improvement across various labeled training data sampling ratios.
arXiv Detail & Related papers (2023-05-01T23:11:18Z)
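A toy dual-encoder with a contrastive (InfoNCE-style) objective over image/location pairs, in the spirit of the CSP entry; the encoder architectures and the temperature value are placeholders:

```python
import torch
import torch.nn.functional as F
from torch import nn

class DualEncoder(nn.Module):
    """One branch embeds image features, the other (lon, lat) coordinates;
    matched pairs are pulled together by a contrastive loss."""
    def __init__(self, img_dim=512, out_dim=128):
        super().__init__()
        self.img_enc = nn.Linear(img_dim, out_dim)
        self.loc_enc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                                     nn.Linear(64, out_dim))

    def forward(self, img_feats, locs):
        z_img = F.normalize(self.img_enc(img_feats), dim=-1)
        z_loc = F.normalize(self.loc_enc(locs), dim=-1)
        logits = z_img @ z_loc.t() / 0.07        # temperature-scaled similarities
        targets = torch.arange(len(logits))      # matched pairs lie on the diagonal
        return F.cross_entropy(logits, targets)  # InfoNCE-style objective
```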
- Combined Use of Federated Learning and Image Encryption for Privacy-Preserving Image Classification with Vision Transformer [14.505867475659276]
We propose the combined use of federated learning (FL) and encrypted images for privacy-preserving image classification with the vision transformer (ViT).
In experiments, the proposed method was demonstrated to work well, without any performance degradation, on the CIFAR-10 and CIFAR-100 datasets.
arXiv Detail & Related papers (2023-01-23T03:41:02Z)
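The entry does not spell out the encryption. A common building block in learnable image encryption is key-driven block scrambling, sketched below with the block size matched to the ViT patch size; this pairing is an illustrative assumption about why a ViT tolerates the transform, not a claim about the paper's exact scheme.

```python
import numpy as np

def block_scramble(img, key, patch=16):
    """Permute fixed-size pixel blocks using a secret key (any remainder
    that does not fill a whole block is cropped)."""
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    blocks = (img[:gh*patch, :gw*patch]
              .reshape(gh, patch, gw, patch, c)
              .swapaxes(1, 2)
              .reshape(gh*gw, patch, patch, c))
    perm = np.random.default_rng(key).permutation(gh * gw)  # key-driven order
    blocks = blocks[perm]
    return (blocks.reshape(gh, gw, patch, patch, c)
                  .swapaxes(1, 2)
                  .reshape(gh*patch, gw*patch, c))
```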
- Learning Co-segmentation by Segment Swapping for Retrieval and Discovery [67.6609943904996]
The goal of this work is to efficiently identify visually similar patterns from a pair of images.
We generate synthetic training pairs by selecting object segments in an image and copy-pasting them into another image.
We show our approach provides clear improvements for artwork details retrieval on the Brueghel dataset.
arXiv Detail & Related papers (2021-10-29T16:51:16Z)
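A minimal version of the copy-paste pair generation described above, with mask handling simplified (no blending, scaling, or rotation):

```python
import numpy as np

def paste_segment(src_img, src_mask, dst_img, offset=(0, 0)):
    """Copy the masked object segment from src_img into dst_img, producing
    a synthetic image pair that shares a repeated visual pattern.

    src_mask: HxW boolean array marking the segment in src_img.
    """
    out = dst_img.copy()
    ys, xs = np.nonzero(src_mask)
    dy, dx = offset
    ys_d = np.clip(ys + dy, 0, dst_img.shape[0] - 1)
    xs_d = np.clip(xs + dx, 0, dst_img.shape[1] - 1)
    out[ys_d, xs_d] = src_img[ys, xs]
    return out
```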
- Learning Collision-Free Space Detection from Stereo Images: Homography Matrix Brings Better Data Augmentation [16.99302954185652]
It remains an open challenge to train deep convolutional neural networks (DCNNs) using only a small quantity of training samples.
This paper explores an effective training data augmentation approach that can be employed to improve the overall DCNN performance.
arXiv Detail & Related papers (2020-12-14T19:14:35Z)
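A generic homography-based augmentation in the spirit of this entry, using OpenCV; the random corner-jitter parameterization is an illustrative choice:

```python
import cv2
import numpy as np

def random_homography_warp(img, max_shift=0.05, seed=None):
    """Warp an image with a random homography, simulating a small
    viewpoint change for data augmentation."""
    h, w = img.shape[:2]
    rng = np.random.default_rng(seed)
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = rng.uniform(-max_shift, max_shift, (4, 2)) * [w, h]
    dst = (src + jitter).astype(np.float32)
    H = cv2.getPerspectiveTransform(src, dst)  # 3x3 homography matrix
    return cv2.warpPerspective(img, H, (w, h))
```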
- Shape-Texture Debiased Neural Network Training [50.6178024087048]
Convolutional Neural Networks are often biased towards either texture or shape, depending on the training dataset.
We develop an algorithm for shape-texture debiased learning.
Experiments show that our method successfully improves model performance on several image recognition benchmarks.
arXiv Detail & Related papers (2020-10-12T19:16:12Z)
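The summary does not detail the algorithm. One plausible debiased objective, assuming stylized training images that carry both a shape (content) label and a texture (style) label supervising two heads, is:

```python
import torch.nn.functional as F

def debiased_loss(logits_shape, logits_texture, shape_label, texture_label):
    """Supervise a shape head and a texture head with their own labels so
    that neither cue dominates (a plausible reading, not the exact method)."""
    return (F.cross_entropy(logits_shape, shape_label)
            + F.cross_entropy(logits_texture, texture_label)) / 2
```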
- Private Dataset Generation Using Privacy Preserving Collaborative Learning [0.0]
This work introduces a privacy-preserving FedNN framework for training machine learning models at the edge.
Simulation results on the MNIST dataset indicate the effectiveness of the framework.
arXiv Detail & Related papers (2020-04-28T15:35:20Z)
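The FedNN specifics are not given here; for context, a generic federated-averaging step (the standard FedAvg building block, not necessarily the paper's exact aggregation) looks like:

```python
import copy
import torch

def federated_average(global_model, client_models):
    """Average client weights into the global model so that raw private
    data never leaves the edge devices."""
    avg_state = copy.deepcopy(client_models[0].state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models]).mean(dim=0)
    global_model.load_state_dict(avg_state)
    return global_model
```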
- Cheaper Pre-training Lunch: An Efficient Paradigm for Object Detection [86.0580214485104]
We propose a general and efficient pre-training paradigm, Montage pre-training, for object detection.
Montage pre-training needs only the target detection dataset while taking only 1/4 computational resources compared to the widely adopted ImageNet pre-training.
The efficiency and effectiveness of Montage pre-training are validated by extensive experiments on the MS-COCO dataset.
arXiv Detail & Related papers (2020-04-25T16:09:46Z)
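A rough reading of the "Montage" idea, stitching object-centric crops into one composite so that every region of a pre-training sample carries foreground signal; the grid size and cell resolution are arbitrary choices, not the paper's recipe.

```python
import cv2
import numpy as np

def make_montage(crops, grid=(2, 2), cell=112):
    """Resize object crops and tile them into a single composite image."""
    rows = []
    for r in range(grid[0]):
        row = [cv2.resize(crops[r * grid[1] + c], (cell, cell))
               for c in range(grid[1])]
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)
```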
This list is automatically generated from the titles and abstracts of the papers in this site.