Combined Use of Federated Learning and Image Encryption for
Privacy-Preserving Image Classification with Vision Transformer
- URL: http://arxiv.org/abs/2301.09255v1
- Date: Mon, 23 Jan 2023 03:41:02 GMT
- Title: Combined Use of Federated Learning and Image Encryption for Privacy-Preserving Image Classification with Vision Transformer
- Authors: Teru Nagamori and Hitoshi Kiya
- Abstract summary: We propose the combined use of federated learning (FL) and encrypted images for privacy-preserving image classification with the vision transformer (ViT). In an experiment, the proposed method was demonstrated to work well, without any performance degradation, on the CIFAR-10 and CIFAR-100 datasets.
- Score: 14.505867475659276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, privacy preservation in deep learning has become an urgent problem. Accordingly, we propose the combined use of federated learning (FL) and encrypted images for privacy-preserving image classification with the vision transformer (ViT). The proposed method allows us not only to train models across multiple participants without directly sharing their raw data but also, for the first time, to protect the privacy of test (query) images. In addition, it maintains the same accuracy as normally trained models. In an experiment, the proposed method was demonstrated to work well, without any performance degradation, on the CIFAR-10 and CIFAR-100 datasets.
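The federated training half of the abstract (training across participants without sharing raw data) can be sketched with plain FedAvg on a toy linear model. This is an illustrative assumption, not the authors' implementation: the model, data, learning rate, and aggregation details below are all hypothetical stand-ins.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: full-batch least-squares gradient steps.
    Only the updated weights leave the client, never X or y."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Server step: average client updates, weighted by local dataset size."""
    sizes = np.array([len(X) for X, _ in clients], dtype=float)
    updates = np.stack([local_update(w_global, X, y) for X, y in clients])
    return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three clients with private regression data drawn around a shared ground truth.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

# The global model converges toward w_true without the server ever seeing raw data.
w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
```

In a real deployment each `local_update` would run a ViT training loop on the client's (encrypted) images, but the communication pattern — weights out, weights in, server-side averaging — is the same.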
Related papers
- Frequency-Guided Masking for Enhanced Vision Self-Supervised Learning [49.275450836604726]
We present a novel frequency-based Self-Supervised Learning (SSL) approach that significantly enhances pre-training efficacy.
We employ a two-branch framework empowered by knowledge distillation, enabling the model to take both the filtered and original images as input.
arXiv Detail & Related papers (2024-09-16T15:10:07Z)
- Disposable-key-based image encryption for collaborative learning of Vision Transformer [5.762345156477736]
We propose a novel method for securely training the vision transformer (ViT) with sensitive data shared by multiple clients, in a manner similar to privacy-preserving federated learning.
In the proposed method, training images are independently encrypted by each client, with encryption keys prepared by each client, and ViT is trained using these encrypted images for the first time.
arXiv Detail & Related papers (2024-08-11T09:55:37Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing the training data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- I can't see it but I can Fine-tune it: On Encrypted Fine-tuning of Transformers using Fully Homomorphic Encryption [5.12893315783096]
We introduce BlindTuner, a privacy-preserving fine-tuning system that enables transformer training exclusively on homomorphically encrypted data for image classification.
Our findings highlight a substantial speed enhancement of 1.5x to 600x over previous work in this domain.
arXiv Detail & Related papers (2024-02-14T10:15:43Z)
- Efficient Fine-Tuning with Domain Adaptation for Privacy-Preserving Vision Transformer [6.476298483207895]
We propose a novel method for privacy-preserving deep neural networks (DNNs) with the Vision Transformer (ViT).
The method allows us not only to train models and run tests with visually protected images but also to avoid the performance degradation caused by the use of encrypted images.
A domain adaptation method is used to efficiently fine-tune ViT with encrypted images.
arXiv Detail & Related papers (2024-01-10T12:46:31Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted on six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Privacy-Preserving Image Classification Using Vision Transformer [16.679394807198]
We propose a privacy-preserving image classification method that is based on the combined use of encrypted images and the vision transformer (ViT).
ViT utilizes patch embedding and position embedding for image patches, so this architecture is shown to reduce the influence of block-wise image transformation.
In an experiment, the proposed method for privacy-preserving image classification is demonstrated to outperform state-of-the-art methods in terms of classification accuracy and robustness against various attacks.
arXiv Detail & Related papers (2022-05-24T12:51:48Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technological standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- PASS: An ImageNet replacement for self-supervised pretraining without humans [152.3252728876108]
We propose an unlabelled dataset PASS: Pictures without humAns for Self-Supervision.
PASS only contains images with CC-BY license and complete attribution metadata, addressing the copyright issue.
We show that PASS can be used for pretraining with methods such as MoCo-v2, SwAV and DINO.
PASS does not make existing datasets obsolete, as for instance it is insufficient for benchmarking. However, it shows that model pretraining is often possible while using safer data, and it also provides the basis for a more robust evaluation of pretraining methods.
arXiv Detail & Related papers (2021-09-27T17:59:39Z)
- A Lightweight Privacy-Preserving Scheme Using Label-based Pixel Block Mixing for Image Classification in Deep Learning [37.33528407329338]
We propose a lightweight and efficient approach to preserve image privacy while maintaining the availability of the training set.
We use the mixed training set to train the ResNet50, VGG16, InceptionV3 and DenseNet121 models on the WIKI dataset and the CNBC face dataset.
arXiv Detail & Related papers (2021-05-19T01:50:50Z)
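Several of the entries above (disposable-key image encryption, block-wise transformation for ViT, pixel block mixing) rely on keyed block-level image operations. A minimal sketch of one such transform is a keyed block permutation plus within-block pixel shuffling, with the block size matched to the ViT patch size; this is an illustrative assumption, not the exact scheme of any listed paper.

```python
import numpy as np

def block_scramble(img, key, block=16):
    """Key-based block permutation plus within-block pixel shuffling.
    The block size matches the ViT patch size, so each encrypted block
    still maps onto exactly one patch embedding. Illustrative sketch only."""
    rng = np.random.default_rng(key)
    h, w, c = img.shape
    gh, gw = h // block, w // block
    # Split the image into (gh*gw) flattened blocks of block*block*c values.
    blocks = (img[:gh * block, :gw * block]
              .reshape(gh, block, gw, block, c)
              .transpose(0, 2, 1, 3, 4)
              .reshape(gh * gw, -1))
    # Scramble block positions, then shuffle pixels inside every block
    # with the same key-derived order.
    blocks = blocks[rng.permutation(gh * gw)]
    blocks = blocks[:, rng.permutation(blocks.shape[1])]
    # Reassemble into an image of the original shape.
    return (blocks.reshape(gh, gw, block, block, c)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(gh * block, gw * block, c))

img = np.arange(32 * 32 * 3, dtype=np.uint8).reshape(32, 32, 3)
enc = block_scramble(img, key=42)
```

The transform is deterministic for a given key (so a client can encrypt its whole training set consistently) and preserves the pixel-value histogram, while the spatial layout that would reveal the image content is destroyed.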
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.