Efficient Fine-Tuning with Domain Adaptation for Privacy-Preserving
Vision Transformer
- URL: http://arxiv.org/abs/2401.05126v2
- Date: Fri, 9 Feb 2024 09:55:46 GMT
- Title: Efficient Fine-Tuning with Domain Adaptation for Privacy-Preserving
Vision Transformer
- Authors: Teru Nagamori, Sayaka Shiota, Hitoshi Kiya
- Abstract summary: We propose a novel method for privacy-preserving deep neural networks (DNNs) with the Vision Transformer (ViT).
The method allows us not only to train and test models with visually protected images but also to avoid the performance degradation caused by the use of encrypted images.
A domain adaptation method is used to efficiently fine-tune ViT with encrypted images.
- Score: 6.476298483207895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel method for privacy-preserving deep neural networks (DNNs)
with the Vision Transformer (ViT). The method allows us not only to train
and test models with visually protected images but also to avoid the
performance degradation caused by the use of encrypted images, whereas
conventional methods cannot avoid the influence of image encryption. A domain
adaptation method is used to efficiently fine-tune ViT with encrypted images.
In experiments, the method is demonstrated to outperform conventional methods
in an image classification task on the CIFAR-10 and ImageNet datasets in terms
of classification accuracy.
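As a concrete illustration of the kind of visually protected input involved here, the sketch below implements a simple block-wise encryption (keyed block scrambling plus in-block pixel shuffling) whose block size matches the ViT patch size. This is a minimal sketch under those assumptions, not the paper's released scheme or code, and the function name `encrypt_blockwise` is ours.

```python
import torch

def encrypt_blockwise(images, key, block=16):
    """Keyed block scrambling + in-block pixel shuffling (illustrative only)."""
    n, c, h, w = images.shape
    gh, gw = h // block, w // block          # blocks per side; H, W divisible by `block`
    g = torch.Generator().manual_seed(key)
    # Cut the image into non-overlapping blocks the same size as a ViT patch.
    blocks = (images
              .unfold(2, block, block)       # (N, C, gh, W, block)
              .unfold(3, block, block)       # (N, C, gh, gw, block, block)
              .permute(0, 2, 3, 1, 4, 5)     # (N, gh, gw, C, block, block)
              .reshape(n, gh * gw, -1))      # (N, num_blocks, C*block*block)
    # 1) Scramble block positions with a keyed permutation.
    pos_perm = torch.randperm(gh * gw, generator=g)
    blocks = blocks[:, pos_perm]
    # 2) Shuffle pixel values inside every block with a second keyed permutation.
    pix_perm = torch.randperm(blocks.shape[-1], generator=g)
    blocks = blocks[..., pix_perm]
    # Reassemble the encrypted image.
    blocks = blocks.reshape(n, gh, gw, c, block, block).permute(0, 3, 1, 4, 2, 5)
    return blocks.reshape(n, c, h, w)
```

Because each block in this sketch maps one-to-one onto a ViT patch, the transformation amounts to a permutation of patches plus a fixed permutation inside each patch, which is what makes the embedding-level adaptation sketched after the related-papers list below possible.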
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- Domain Adaptation for Efficiently Fine-tuning Vision Transformer with Encrypted Images [6.476298483207895]
We propose a novel method for fine-tuning models with transformed images under the use of the vision transformer (ViT).
The proposed domain adaptation method does not degrade the accuracy of models, and it is carried out on the basis of the embedding structure of ViT (a rough sketch of this embedding-level adaptation appears after this list).
In experiments, we confirmed that the proposed method prevents accuracy degradation even when using encrypted images with the CIFAR-10 and CIFAR-100 datasets.
arXiv Detail & Related papers (2023-09-05T19:45:27Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- Combined Use of Federated Learning and Image Encryption for Privacy-Preserving Image Classification with Vision Transformer [14.505867475659276]
We propose the combined use of federated learning (FL) and encrypted images for privacy-preserving image classification under the use of the vision transformer (ViT).
In an experiment, the proposed method was demonstrated to work well without any performance degradation on the CIFAR-10 and CIFAR-100 datasets.
arXiv Detail & Related papers (2023-01-23T03:41:02Z)
- Privacy-Preserving Image Classification Using Vision Transformer [16.679394807198]
We propose a privacy-preserving image classification method that is based on the combined use of encrypted images and the vision transformer (ViT).
ViT utilizes patch embedding and position embedding for image patches, so this architecture is shown to reduce the influence of block-wise image transformation.
In an experiment, the proposed method for privacy-preserving image classification is demonstrated to outperform state-of-the-art methods in terms of classification accuracy and robustness against various attacks.
arXiv Detail & Related papers (2022-05-24T12:51:48Z)
- Privacy-Preserving Image Classification Using Isotropic Network [14.505867475659276]
We propose a privacy-preserving image classification method that uses encrypted images and an isotropic network such as the vision transformer.
The proposed method allows us not only to apply images without visual information to deep neural networks (DNNs) for both training and testing but also to maintain a high classification accuracy.
arXiv Detail & Related papers (2022-04-16T03:15:54Z)
- Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework that is based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and a parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z)
- Image Transformation Network for Privacy-Preserving Deep Neural Networks and Its Security Evaluation [17.134566958534634]
We propose a transformation network for generating visually-protected images for privacy-preserving DNNs.
The proposed network enables us not only to strongly protect visual information but also to maintain the image classification accuracy achieved with plain images.
arXiv Detail & Related papers (2020-08-07T12:58:45Z)
- FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning [64.32306537419498]
We propose a novel learned feature-based refinement and augmentation method that produces a varied set of complex transformations.
These transformations also use information from both within-class and across-class representations that we extract through clustering.
We demonstrate that our method is comparable to the current state of the art on smaller datasets while being able to scale up to larger datasets.
arXiv Detail & Related papers (2020-07-16T17:55:31Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z)
- Supervised and Unsupervised Learning of Parameterized Color Enhancement [112.88623543850224]
We tackle the problem of color enhancement as an image translation task using both supervised and unsupervised learning.
We achieve state-of-the-art results compared to both supervised (paired data) and unsupervised (unpaired data) image enhancement methods on the MIT-Adobe FiveK benchmark.
We show the generalization capability of our method by applying it to photos from the early 20th century and to dark video frames.
arXiv Detail & Related papers (2019-12-30T13:57:06Z)
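Several of the ViT entries above (the 2024-08-28 and 2023-01-23 papers, as well as the main abstract) rest on the same observation: a keyed block-wise transformation aligned to the patch size can be absorbed by ViT's embedding layers. The sketch below is one hedged reading of that idea, mirroring the block-scrambling permutation from the earlier `encrypt_blockwise` sketch in the position embedding of a timm-style ViT. It is not the authors' code; `adapt_pos_embed` and the assumed `pos_embed` layout (class token first) are ours.

```python
import torch

@torch.no_grad()
def adapt_pos_embed(model, key, num_patches):
    """Mirror the keyed block scrambling in the ViT position embedding.

    Illustrative only: assumes `model.pos_embed` has shape
    (1, 1 + num_patches, dim) with a leading class token, as in timm ViTs.
    """
    g = torch.Generator().manual_seed(key)
    # Draw the permutation first, exactly as in `encrypt_blockwise`, so it
    # matches the block-scrambling permutation produced from the same key.
    perm = torch.randperm(num_patches, generator=g)
    cls_pe = model.pos_embed[:, :1]    # class-token embedding stays in place
    patch_pe = model.pos_embed[:, 1:]  # (1, num_patches, dim)
    # After scrambling, the patch now at slot i came from slot perm[i], so it
    # should receive the position embedding of its original location.
    model.pos_embed.copy_(torch.cat([cls_pe, patch_pe[:, perm]], dim=1))
```

Under the same assumptions, the in-block pixel shuffle would have a natural counterpart in a permutation of the flattened patch-embedding weights; after such a one-time adaptation, the model can be fine-tuned on encrypted images in the usual way.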