Roulette: A Semantic Privacy-Preserving Device-Edge Collaborative
Inference Framework for Deep Learning Classification Tasks
- URL: http://arxiv.org/abs/2309.02820v1
- Date: Wed, 6 Sep 2023 08:08:12 GMT
- Title: Roulette: A Semantic Privacy-Preserving Device-Edge Collaborative
Inference Framework for Deep Learning Classification Tasks
- Authors: Jingyi Li, Guocheng Liao, Lin Chen, and Xu Chen
- Abstract summary: Roulette is a task-oriented semantic privacy-preserving collaborative inference framework for deep learning classifiers.
We develop a novel paradigm of split learning where the back-end is frozen and the front-end is retrained to be both a feature extractor and an encryptor.
- Score: 21.05961694765183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning classifiers are crucial in the age of artificial intelligence.
Device-edge collaborative inference has been widely adopted as an efficient
framework for deploying such classifiers in IoT and 5G/6G networks. However, it
suffers from accuracy degradation under non-i.i.d. data distributions and from
privacy disclosure. For accuracy degradation, directly applying transfer
learning or split learning incurs high cost, and privacy issues remain.
For privacy disclosure, cryptography-based approaches lead to a huge overhead.
Other lightweight methods assume that the ground truth is non-sensitive and can
be exposed. But for many applications, the ground truth is the user's crucial
privacy-sensitive information. In this paper, we propose Roulette, a
task-oriented, semantic privacy-preserving collaborative inference framework
for deep learning classifiers. Beyond the input data, we treat the ground
truth of the data itself as private information. We develop a novel
paradigm of split learning where the back-end DNN is frozen and the front-end
DNN is retrained to be both a feature extractor and an encryptor. Moreover, we
provide a differential privacy guarantee and analyze the hardness of ground
truth inference attacks. To validate the proposed Roulette, we conduct
extensive performance evaluations using realistic datasets, which demonstrate
that Roulette can effectively defend against various attacks and meanwhile
achieve good model accuracy. When the data distribution is severely
non-i.i.d., Roulette improves inference accuracy by 21% averaged over
benchmarks, while reducing the accuracy of discrimination attacks to almost
that of random guessing.
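To make the split concrete, below is a minimal, hypothetical PyTorch sketch of the paradigm the abstract describes: a frozen edge-side back-end and a device-side front-end whose output features are perturbed before upload. The layer sizes, the Laplace mechanism, and the `EPSILON`/`SENSITIVITY` values are illustrative assumptions, not the paper's actual architecture or noise calibration.

```python
import torch
import torch.nn as nn

# Placeholder privacy budget and sensitivity; the paper's calibration differs.
EPSILON, SENSITIVITY = 1.0, 1.0

# Device-side front-end: retrained to act as feature extractor + "encryptor".
front_end = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64))

# Edge-side back-end: a pre-trained classifier head that stays frozen.
back_end = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
for p in back_end.parameters():
    p.requires_grad_(False)  # frozen: only the front-end would be retrained

def device_encode(x: torch.Tensor) -> torch.Tensor:
    """Run the front-end, then perturb the features before uploading
    (Laplace noise as a stand-in for whatever mechanism yields the DP bound)."""
    z = front_end(x)
    noise = torch.distributions.Laplace(0.0, SENSITIVITY / EPSILON).sample(z.shape)
    return z + noise

def edge_infer(z: torch.Tensor) -> torch.Tensor:
    """Edge server classifies the received (noisy) features."""
    with torch.no_grad():
        return back_end(z).argmax(dim=-1)

x = torch.randn(4, 784)  # a batch of flattened inputs
print(edge_infer(device_encode(x)))
```

The point of the sketch is only the division of labor: the device never uploads raw inputs, and the edge never needs the front-end's weights or the ground-truth labels.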
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Locally Differentially Private Gradient Tracking for Distributed Online Learning over Directed Graphs [2.1271873498506038]
We propose a locally differentially private gradient tracking based distributed online learning algorithm.
We prove that the proposed algorithm converges in mean square to the exact optimal solution while ensuring rigorous local differential privacy.
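As a toy illustration of the local perturbation step such schemes rely on (the clipping rule and noise scale below are assumptions for the sketch, not this paper's gradient-tracking update):

```python
import numpy as np

rng = np.random.default_rng(0)

def ldp_perturb(message: np.ndarray, epsilon: float, clip: float = 1.0) -> np.ndarray:
    """Clip a local message (e.g. a gradient estimate) to bound its L1
    sensitivity, then add Laplace noise before sharing it with neighbors."""
    norm = np.abs(message).sum()
    if norm > clip:
        message = message * (clip / norm)  # enforce the sensitivity bound
    scale = 2.0 * clip / epsilon           # illustrative Laplace calibration
    return message + rng.laplace(0.0, scale, size=message.shape)

grad = np.array([0.8, -1.3, 0.2])
print(ldp_perturb(grad, epsilon=0.5))
```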
arXiv Detail & Related papers (2023-10-24T18:15:25Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Locally Differentially Private Distributed Online Learning with Guaranteed Optimality [1.800614371653704]
This paper proposes an approach that ensures both differential privacy and learning accuracy in distributed online learning.
While ensuring a diminishing expected instantaneous regret, the approach can simultaneously ensure a finite cumulative privacy budget.
To the best of our knowledge, this is the first algorithm that successfully ensures both rigorous local differential privacy and learning accuracy.
arXiv Detail & Related papers (2023-06-25T02:05:34Z)
- When approximate design for fast homomorphic computation provides differential privacy guarantees [0.08399688944263842]
Differential privacy (DP) and cryptographic primitives are popular countermeasures against privacy attacks.
In this paper, we design SHIELD, a probabilistic approximation algorithm for the argmax operator.
While SHIELD could have other applications, here we focus on one setting and seamlessly integrate it into the SPEED collaborative training framework.
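A plain-Python analogue of a probabilistic argmax whose randomness yields a differential privacy guarantee is the exponential mechanism, sketched below; SHIELD's actual design targets homomorphic evaluation and may differ substantially.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_argmax(scores: np.ndarray, epsilon: float, sensitivity: float = 1.0) -> int:
    """Exponential-mechanism argmax: sample an index with probability
    proportional to exp(eps * score / (2 * sensitivity)) rather than
    returning the exact maximizer."""
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(scores), p=probs))

print(noisy_argmax(np.array([2.0, 3.5, 1.0]), epsilon=1.0))
```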
arXiv Detail & Related papers (2023-04-06T09:38:01Z)
- Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning [63.45532264721498]
Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data.
We perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
arXiv Detail & Related papers (2022-12-06T21:35:35Z)
- Privacy-Preserving Federated Learning on Partitioned Attributes [6.661716208346423]
Federated learning empowers collaborative training without exposing local data or models.
We introduce an adversarial learning based procedure which tunes a local model to release privacy-preserving intermediate representations.
To alleviate the accuracy decline, we propose a defense method based on the forward-backward splitting algorithm.
arXiv Detail & Related papers (2021-04-29T14:49:14Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
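The algorithm in that paper is more elaborate, but a minimal baseline satisfying label differential privacy is K-ary randomized response; a sketch (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def randomized_response(label: int, num_classes: int, epsilon: float) -> int:
    """K-ary randomized response: keep the true label with probability
    e^eps / (e^eps + K - 1), otherwise report a uniformly random other class.
    The released label satisfies epsilon-label differential privacy."""
    keep_prob = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    if rng.random() < keep_prob:
        return label
    others = [c for c in range(num_classes) if c != label]
    return int(rng.choice(others))

print([randomized_response(3, num_classes=10, epsilon=2.0) for _ in range(5)])
```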
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
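As a rough sketch of an instance-level attack in this spirit: the paper perturbs samples via a contrastive loss over augmented views, while the simplified PGD version below just pushes an input's embedding away from its own clean embedding (all names and hyperparameters here are hypothetical).

```python
import torch
import torch.nn.functional as F

def instance_attack(encoder, x, eps=8/255, alpha=2/255, steps=5):
    """PGD that degrades instance identity: reduce the cosine similarity
    between the perturbed sample's embedding and its clean embedding."""
    with torch.no_grad():
        anchor = F.normalize(encoder(x), dim=-1)   # fixed clean embedding
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        z = F.normalize(encoder(x + delta), dim=-1)
        loss = -(z * anchor).sum(dim=-1).mean()    # ascend = less similar
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)                # stay in the eps-ball
        delta.grad.zero_()
    return (x + delta).detach()

# Toy usage with a stand-in encoder.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 64))
x = torch.rand(4, 1, 28, 28)
x_adv = instance_attack(encoder, x)
```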
arXiv Detail & Related papers (2020-06-13T08:24:33Z)