PriPHiT: Privacy-Preserving Hierarchical Training of Deep Neural Networks
- URL: http://arxiv.org/abs/2408.05092v2
- Date: Mon, 16 Dec 2024 10:10:10 GMT
- Title: PriPHiT: Privacy-Preserving Hierarchical Training of Deep Neural Networks
- Authors: Yamin Sepehri, Pedram Pad, Pascal Frossard, L. Andrea Dunbar
- Abstract summary: We propose a method to perform the training phase of a deep learning model on both an edge device and a cloud server.
The proposed privacy-preserving method uses adversarial early exits to suppress the sensitive content at the edge and transmits the task-relevant information to the cloud.
- Score: 44.0097014096626
- Abstract: The training phase of deep neural networks requires substantial resources and as such is often performed on cloud servers. However, this raises privacy concerns when the training dataset contains sensitive content, e.g., facial or medical images. In this work, we propose a method to perform the training phase of a deep learning model on both an edge device and a cloud server that prevents sensitive content from being transmitted to the cloud while retaining the desired information. The proposed privacy-preserving method uses adversarial early exits to suppress the sensitive content at the edge and transmits the task-relevant information to the cloud. This approach incorporates noise addition during the training phase to provide a differential privacy guarantee. We extensively test our method on different facial and medical datasets with diverse attributes using various deep learning architectures, showcasing its outstanding performance. We also demonstrate the effectiveness of privacy preservation through successful defenses against different white-box, deep, and GAN-based reconstruction attacks. This approach is designed for resource-constrained edge devices, ensuring minimal memory usage and computational overhead.
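The abstract sketches three ingredients: an adversarial early exit at the edge that suppresses the sensitive attribute, a task head that keeps the transmitted representation useful, and added noise for a differential privacy guarantee. The following is a minimal PyTorch-style sketch of that recipe under our own assumptions, not the authors' implementation; every name (EdgeEncoder, task_head, adversary_head, sigma) is illustrative, and the noise scale is not calibrated to any formal privacy budget.

```python
# Hedged sketch of an edge-side training step in the spirit of the abstract.
# Not the authors' code: all names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class EdgeEncoder(nn.Module):
    """Small convolutional encoder assumed to run on the edge device."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

encoder = EdgeEncoder()
# Early-exit task head: keeps the representation predictive of the target attribute.
task_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))
# Adversarial head: tries to recover the sensitive attribute from the representation.
adversary_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

ce = nn.CrossEntropyLoss()
sigma = 0.1  # illustrative noise scale; a real DP guarantee needs calibration

def edge_step(x, y_task, y_sensitive, lam=1.0):
    z = encoder(x)
    # Min-max objective for the encoder: help the task, hurt the adversary.
    encoder_loss = ce(task_head(z), y_task) - lam * ce(adversary_head(z), y_sensitive)
    # The adversary itself is trained on the detached representation.
    adversary_loss = ce(adversary_head(z.detach()), y_sensitive)
    # Additive Gaussian noise on what actually leaves the device.
    z_noisy = z + sigma * torch.randn_like(z)
    return encoder_loss, adversary_loss, z_noisy  # z_noisy is sent to the cloud
```

In a full training loop, the encoder and task head would be updated by one optimizer and the adversary by another, and the cloud side would continue the forward pass from z_noisy.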
Related papers
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Investigating Privacy Attacks in the Gray-Box Setting to Enhance Collaborative Learning Schemes [7.651569149118461]
We study privacy attacks in the gray-box setting, where the attacker has only limited access to the model.
We deploy SmartNNCrypt, a framework that tailors homomorphic encryption to protect the portions of the model posing higher privacy risks.
arXiv Detail & Related papers (2024-09-25T18:49:21Z) - Privacy-Preserving Deep Learning Using Deformable Operators for Secure Task Learning [14.187385349716518]
Existing methods for privacy preservation rely on image encryption or perceptual transformation approaches.
We propose a novel privacy-preserving framework that uses a set of deformable operators for secure task learning.
arXiv Detail & Related papers (2024-04-08T19:46:20Z) - Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method anonymizes the identity of the images while, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z) - Hierarchical Training of Deep Neural Networks Using Early Exiting [42.186536611404165]
Deep neural networks provide state-of-the-art accuracy for vision tasks, but they require significant resources to train.
They are typically trained on cloud servers, far from the edge devices that acquire the data.
In this study, a novel hierarchical training method for deep neural networks is proposed that uses early exits in a divided architecture between edge and cloud workers.
arXiv Detail & Related papers (2023-03-04T11:30:16Z) - Unintended memorisation of unique features in neural networks [15.174895411434026]
We show that unique features occurring only once in training data are memorised by discriminative multi-layer perceptrons and convolutional neural networks.
We develop a score estimating a model's sensitivity to a unique feature by comparing the KL divergences of the model's output distributions.
We find that typical strategies to prevent overfitting do not prevent unique feature memorisation.
arXiv Detail & Related papers (2022-05-20T10:48:18Z) - Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage [9.83989883339971]
The Federated Learning (FL) framework brings privacy benefits to distributed learning systems.
Recent studies have revealed that private information can still be leaked through shared information.
We propose a new type of leakage, i.e., Generative Gradient Leakage (GGL).
arXiv Detail & Related papers (2022-03-29T15:59:59Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model; a minimal sketch of these two ingredients follows this list.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representations.
The goal of this framework is to learn a feature extractor that hides private information in the intermediate representations while maximally retaining the original information embedded in the raw data, so that the data collector can accomplish unknown learning tasks.
arXiv Detail & Related papers (2020-05-23T06:21:26Z) - A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves users' sensitive data while providing cloud-based machine learning and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
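The "Robustness Threats of Differential Privacy" entry above names gradient clipping and noise addition as the main ingredients of differentially private training. Here is the promised minimal sketch of a DP-SGD-style update that combines the two; the function name and hyperparameters are hypothetical, and no formal privacy accounting is performed.

```python
# Illustrative DP-SGD-style step: per-example gradient clipping + Gaussian noise.
# Hyperparameters are placeholders; a real guarantee needs a privacy accountant.
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):
        # Per-example gradient: DP-SGD clips each example's gradient separately.
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip the example's gradient to an L2 norm of at most clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            # Gaussian noise scaled to the clipping norm, added to the sum.
            noise = noise_mult * clip_norm * torch.randn_like(s)
            p.add_(-(lr / len(xs)) * (s + noise))
```

Clipping bounds each example's influence on the update, and the noise is scaled to that bound; it is this combination that yields a differential privacy guarantee once the cumulative privacy loss is tracked by an accountant.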