Differentially Private CutMix for Split Learning with Vision Transformer
- URL: http://arxiv.org/abs/2210.15986v1
- Date: Fri, 28 Oct 2022 08:33:29 GMT
- Title: Differentially Private CutMix for Split Learning with Vision Transformer
- Authors: Seungeun Oh, Jihong Park, Sihun Baek, Hyelin Nam, Praneeth Vepakomma,
Ramesh Raskar, Mehdi Bennis, Seong-Lyun Kim
- Abstract summary: Vision transformer (ViT) has started to outpace the conventional CNN in computer vision tasks.
Considering privacy-preserving distributed learning with ViT, we propose DP-CutMixSL.
- Score: 42.47713044228984
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, vision transformer (ViT) has started to outpace the conventional
CNN in computer vision tasks. Considering privacy-preserving distributed
learning with ViT, federated learning (FL) communicates models, which becomes
ill-suited due to ViT's large model size and computing costs. Split learning
(SL) sidesteps this by communicating smashed data at a cut layer, yet it suffers
from data privacy leakage and large communication costs caused by the high
similarity between ViT's smashed data and input data. Motivated by this
problem, we propose DP-CutMixSL, a differentially private (DP) SL framework by
developing DP patch-level randomized CutMix (DP-CutMix), a novel
privacy-preserving inter-client interpolation scheme that replaces randomly
selected patches in smashed data. By experiment, we show that DP-CutMixSL not
only boosts privacy guarantees and communication efficiency, but also achieves
higher accuracy than its Vanilla SL counterpart. Theoretically, we analyze that
DP-CutMix amplifies Rényi DP (RDP), which is upper-bounded by its Vanilla
Mixup counterpart.
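To make the abstract's mechanism concrete, the following is a minimal sketch of patch-level randomized CutMix on smashed data: a random subset of one client's cut-layer patches is replaced with the corresponding patches from another client, and Gaussian noise is injected. Function and parameter names (`dp_cutmix`, `mix_ratio`, `noise_std`) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def dp_cutmix(smashed_a, smashed_b, mix_ratio=0.25, noise_std=0.1, rng=None):
    """Sketch of patch-level randomized CutMix on smashed data.

    smashed_a, smashed_b: (num_patches, dim) cut-layer activations from
    two clients. A random subset of patches in client A's smashed data
    is replaced by the corresponding patches from client B, then
    Gaussian noise is added (Gaussian-mechanism style). Illustrative
    only; names are not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    num_patches = smashed_a.shape[0]
    # Randomly select which patches to swap in from client B.
    mask = rng.random(num_patches) < mix_ratio            # (num_patches,)
    mixed = np.where(mask[:, None], smashed_b, smashed_a)
    # Noise injection on the mixed smashed data.
    mixed = mixed + rng.normal(0.0, noise_std, size=mixed.shape)
    return mixed, mask

# Usage: 196 patches (14x14 grid) with 64-dim cut-layer features.
a = np.ones((196, 64))
b = np.zeros((196, 64))
mixed, mask = dp_cutmix(a, b, mix_ratio=0.25, noise_std=0.05)
```

The per-patch mask is what distinguishes CutMix-style mixing from Mixup, which would instead blend every patch with a scalar weight.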
Related papers
- DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has gained lots of traction recently, both in industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
arXiv Detail & Related papers (2024-10-21T16:25:14Z)
- Privacy-Preserving Split Learning with Vision Transformers using Patch-Wise Random and Noisy CutMix [38.370923655357366]
In computer vision, the vision transformer (ViT) has increasingly superseded the convolutional neural network (CNN) for improved accuracy and robustness.
Split learning (SL) emerges as a viable solution, leveraging server-side resources to train ViTs while utilizing private data from distributed devices.
We propose a novel privacy-preserving SL framework that injects Gaussian noise into smashed data and mixes randomly chosen patches of smashed data across clients, coined DP-CutMixSL.
arXiv Detail & Related papers (2024-08-02T06:24:39Z)
- Visual Transformer Meets CutMix for Improved Accuracy, Communication Efficiency, and Data Privacy in Split Learning [47.266470238551314]
This article seeks for a distributed learning solution for the visual transformer (ViT) architectures.
ViTs often have larger model sizes, and are computationally expensive, making federated learning (FL) ill-suited.
We propose a new form of CutSmashed data by randomly punching and compressing the original smashed data.
We develop a novel SL framework for ViT, coined CutMixSL, communicating CutSmashed data.
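The "punching and compressing" step described above can be sketched as follows: patches are randomly punched out of the smashed data and only the survivors (with their indices) are transmitted, shrinking the payload. This is an illustrative reading of CutSmashed data, with assumed names and parameters.

```python
import numpy as np

def cut_smash(smashed, keep_ratio=0.5, rng=None):
    """Illustrative sketch: randomly 'punch out' patches from the
    smashed data and keep only the surviving patches plus their
    indices, so less data is communicated. The exact construction in
    the paper may differ."""
    rng = np.random.default_rng() if rng is None else rng
    num_patches = smashed.shape[0]
    keep = rng.random(num_patches) < keep_ratio
    indices = np.flatnonzero(keep)
    return smashed[indices], indices  # compressed payload + positions

# Usage: 8 patches with 4-dim features; roughly half survive.
smashed = np.arange(8 * 4, dtype=float).reshape(8, 4)
payload, idx = cut_smash(smashed, keep_ratio=0.5)
```

Transmitting indices alongside the kept patches lets the server scatter them back into a sparse patch grid before the forward pass continues.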
arXiv Detail & Related papers (2022-07-01T07:00:30Z)
- Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z)
- On the Convergence and Calibration of Deep Learning with Differential Privacy [12.297499996547925]
Differentially private (DP) training preserves the data privacy usually at the cost of slower convergence.
We show that noise addition only affects the privacy risk but not the convergence or calibration.
In sharp contrast, DP models trained with a large clipping norm enjoy the same privacy guarantee and similar accuracy, but are significantly more miscalibrated.
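The clipping-norm trade-off discussed in this entry centers on the standard DP-SGD recipe: clip each per-sample gradient to a fixed L2 norm, sum, add Gaussian noise scaled to that norm, and average. A minimal sketch on a toy batch of per-sample gradients (parameter names are generic, not tied to any one library):

```python
import numpy as np

def dp_clip_and_noise(per_sample_grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Minimal DP-SGD-style aggregation: clip each sample's gradient
    to L2 norm <= clip_norm, sum, add Gaussian noise proportional to
    the clipping norm, then average over the batch."""
    rng = np.random.default_rng(0) if rng is None else rng
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale
    summed = clipped.sum(axis=0)
    noisy = summed + rng.normal(0.0, noise_mult * clip_norm, size=summed.shape)
    return noisy / per_sample_grads.shape[0]

# Two per-sample gradients with L2 norms 5.0 and 0.5.
grads = np.array([[3.0, 4.0], [0.3, 0.4]])
g = dp_clip_and_noise(grads, clip_norm=1.0, noise_mult=0.0)
# With zero noise: [3, 4] is rescaled to [0.6, 0.8], [0.3, 0.4] is
# left unchanged, so the average is [0.45, 0.6].
```

A larger `clip_norm` leaves more gradients unclipped but forces proportionally more noise for the same privacy budget, which is the tension the entry's calibration result is about.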
arXiv Detail & Related papers (2021-06-15T01:32:29Z)
- Wireless Federated Learning with Limited Communication and Differential Privacy [21.328507360172203]
This paper investigates the role of dimensionality reduction in efficient communication and differential privacy (DP) of the local datasets at the remote users for over-the-air computation (AirComp)-based federated learning (FL) model.
arXiv Detail & Related papers (2021-06-01T15:23:12Z)
- AirMixML: Over-the-Air Data Mixup for Inherently Privacy-Preserving Edge Machine Learning [54.52660257575346]
We propose a privacy-preserving machine learning framework at the network edge, coined over-the-air mixup ML (AirMixML).
In AirMixML, multiple workers transmit analog-modulated signals of their private data samples to an edge server who trains an ML model using the received noisy-and superpositioned samples.
By simulations, we provide DirMix(alpha)-PC design guidelines to improve accuracy, privacy, and energy-efficiency.
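The superposition described above can be loosely mirrored in a digital sketch: workers' samples are combined with Dirichlet-drawn mixing weights and perturbed by Gaussian noise, standing in for the noisy over-the-air channel. Everything here (function name, `alpha`, `noise_std`) is an illustrative assumption, not the paper's DirMix(alpha)-PC design.

```python
import numpy as np

def dirichlet_mixup(samples, alpha=1.0, noise_std=0.1, rng=None):
    """Mix several workers' samples with Dirichlet(alpha) weights and
    add Gaussian noise, loosely mimicking noisy superpositioned
    reception at an edge server. Illustrative sketch only."""
    rng = np.random.default_rng(0) if rng is None else rng
    k = samples.shape[0]
    weights = rng.dirichlet(np.full(k, alpha))   # nonnegative, sums to 1
    mixed = np.tensordot(weights, samples, axes=1)
    noisy = mixed + rng.normal(0.0, noise_std, size=mixed.shape)
    return noisy, weights

# Usage: three workers' 16-dim samples superposed into one.
samples = np.stack([np.zeros(16), np.ones(16), 2 * np.ones(16)])
mixed, w = dirichlet_mixup(samples, alpha=1.0, noise_std=0.0)
```

Smaller `alpha` skews the weights toward one worker (less mixing, weaker privacy); larger `alpha` pushes them toward uniform.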
arXiv Detail & Related papers (2021-05-02T05:45:43Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.