A chaotic maps-based privacy-preserving distributed deep learning for
incomplete and Non-IID datasets
- URL: http://arxiv.org/abs/2402.10145v1
- Date: Thu, 15 Feb 2024 17:49:50 GMT
- Title: A chaotic maps-based privacy-preserving distributed deep learning for
incomplete and Non-IID datasets
- Authors: Irina Arévalo and Jose L. Salmeron
- Abstract summary: Federated Learning is a machine learning approach that enables the training of a deep learning model among several participants with sensitive data.
In this research, the authors employ a secured Federated Learning method with an additional layer of privacy and propose a method for addressing the non-IID challenge.
- Score: 1.30536490219656
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning is a machine learning approach that enables the training
of a deep learning model among several participants with sensitive data that
wish to share their own knowledge without compromising the privacy of their
data. In this research, the authors employ a secured Federated Learning method
with an additional layer of privacy and propose a method for addressing the
non-IID challenge. Moreover, differential privacy is compared with
chaotic-based encryption as a layer of privacy. The experimental approach
assesses the performance of the federated deep learning model with differential
privacy using both IID and non-IID data. In each experiment, the Federated
Learning process improves the average performance metrics of the deep neural
network, even in the case of non-IID data.
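The paper compares chaotic-maps-based encryption with differential privacy as a privacy layer over the clients' model updates. As a minimal illustration of the general idea (the paper does not specify its exact construction here; the logistic map, the additive masking, and all names below are assumptions for the sketch), a client could derive a keystream from a chaotic map seeded by a shared secret and mask its weights before transmission:

```python
def logistic_keystream(seed, r, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k).

    The pair (seed, r), with seed in (0, 1) and r near 4.0 (the chaotic
    regime), acts as the shared secret key.
    """
    x = seed
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(x)
    return stream

def mask_weights(weights, seed=0.37, r=3.99):
    """Additively mask a flat list of model weights before sending them."""
    ks = logistic_keystream(seed, r, len(weights))
    return [w + k for w, k in zip(weights, ks)]

def unmask_weights(masked, seed=0.37, r=3.99):
    """Remove the mask with the same key; the masking is exactly invertible."""
    ks = logistic_keystream(seed, r, len(masked))
    return [w - k for w, k in zip(masked, ks)]
```

Because the map is deterministic given the key, any holder of `(seed, r)` recovers the weights exactly, while the masked values look noise-like to anyone else; unlike differential privacy, this layer adds no irreversible noise to the model.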
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive-temporal regions without DP application or combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Privacy-preserving Quantification of Non-IID Degree in Federated Learning [22.194684042923406]
Federated learning (FL) offers a privacy-preserving approach to machine learning for multiple collaborators without sharing raw data.
The existence of non-independent and non-identically distributed (non-IID) datasets across different clients presents a significant challenge to FL.
This paper proposes a quantitative definition of the non-IID degree in the federated environment by employing the cumulative distribution function.
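The entry above defines a non-IID degree through the cumulative distribution function. One plausible reading (a sketch only; the function names, the sup-distance, and the averaging over clients are assumptions, not the paper's actual definition) is to compare each client's empirical label CDF against the global one:

```python
from collections import Counter

def empirical_cdf(labels, classes):
    """Empirical CDF of a label multiset over an ordered list of classes."""
    counts = Counter(labels)
    total = len(labels)
    cdf, cum = [], 0
    for c in classes:
        cum += counts.get(c, 0)
        cdf.append(cum / total)
    return cdf

def non_iid_degree(client_labels, classes):
    """Average sup-distance between each client's label CDF and the global CDF.

    0 means every client matches the pooled distribution (IID);
    larger values indicate stronger statistical heterogeneity.
    """
    global_cdf = empirical_cdf([y for ys in client_labels for y in ys], classes)
    dists = []
    for ys in client_labels:
        cdf = empirical_cdf(ys, classes)
        dists.append(max(abs(a - b) for a, b in zip(cdf, global_cdf)))
    return sum(dists) / len(dists)
```

For two clients each holding a single disjoint class, this measure reaches 0.5; for clients with identical label mixes it is exactly 0.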
arXiv Detail & Related papers (2024-06-14T03:08:53Z)
- Approximate Gradient Coding for Privacy-Flexible Federated Learning with Non-IID Data [9.984630251008868]
This work focuses on the challenges of non-IID data and stragglers/dropouts in federated learning.
We introduce and explore a privacy-flexible paradigm that models parts of the clients' local data as non-private.
arXiv Detail & Related papers (2024-04-04T15:29:50Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- FEDIC: Federated Learning on Non-IID and Long-Tailed Data via Calibrated Distillation [54.2658887073461]
Dealing with non-IID data is one of the most challenging problems for federated learning.
This paper studies the joint problem of non-IID and long-tailed data in federated learning and proposes a corresponding solution called Federated Ensemble Distillation with Imbalance (FEDIC).
FEDIC uses model ensemble to take advantage of the diversity of models trained on non-IID data.
arXiv Detail & Related papers (2022-04-30T06:17:36Z)
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify data statistical heterogeneity and review the most remarkable learning strategies that are able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- Federated Learning on Non-IID Data: A Survey [11.431837357827396]
Federated learning is an emerging distributed machine learning framework for privacy preservation.
Models trained in federated learning usually have worse performance than those trained in the standard centralized learning mode.
arXiv Detail & Related papers (2021-06-12T19:45:35Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
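A standard building block for label differential privacy (shown here as background, not as the entry's actual algorithm, which it does not detail) is K-ary randomized response: each label is kept with a probability calibrated to the privacy budget epsilon and otherwise replaced by a uniformly random other label.

```python
import math
import random

def randomized_response_label(label, num_classes, epsilon, rng=random):
    """Release a label with epsilon-label-DP via K-ary randomized response.

    The true label is kept with probability e^eps / (e^eps + K - 1);
    otherwise one of the other K - 1 labels is returned uniformly at random.
    """
    k = num_classes
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return label
    other = rng.randrange(k - 1)          # pick among the K - 1 other labels
    return other if other < label else other + 1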
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
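In broad strokes, a decentralized momentum step of this kind combines gossip averaging with neighbors and a momentum buffer driven by the local gradient. The sketch below is a generic simplification (the actual quasi-global momentum update in the paper differs in how the buffer is estimated; all names here are assumptions):

```python
def decentralized_momentum_step(x, neighbor_models, grad, momentum_buf,
                                lr=0.1, mu=0.9):
    """One simplified decentralized step with a local momentum buffer.

    1) gossip-average the local model with the neighbors' models;
    2) fold the local gradient into the momentum buffer;
    3) step along the buffer from the averaged point.
    """
    avg = [sum(vals) / len(vals) for vals in zip(x, *neighbor_models)]
    buf = [mu * m + g for m, g in zip(momentum_buf, grad)]
    new_x = [a - lr * b for a, b in zip(avg, buf)]
    return new_x, buf
```

The momentum buffer smooths the update direction across rounds, which is what mitigates the conflicting gradient directions induced by heterogeneous local data.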
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Federated Learning and Differential Privacy: Software tools analysis, the Sherpa.ai FL framework and methodological guidelines for preserving data privacy [8.30788601976591]
We present the Sherpa.ai Federated Learning framework that is built upon a holistic view of federated learning and differential privacy.
We show how to follow the methodological guidelines with the Sherpa.ai Federated Learning framework by means of a classification and a regression use cases.
arXiv Detail & Related papers (2020-07-02T06:47:35Z)
- Anonymizing Data for Privacy-Preserving Federated Learning [3.3673553810697827]
We propose the first syntactic approach for offering privacy in the context of federated learning.
Our approach aims to maximize utility or model performance, while supporting a defensible level of privacy.
We perform a comprehensive empirical evaluation on two important problems in the healthcare domain, using real-world electronic health data of 1 million patients.
arXiv Detail & Related papers (2020-02-21T02:30:16Z)
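Syntactic anonymization of the kind the last entry describes typically generalizes quasi-identifiers until every record is indistinguishable from at least k - 1 others (k-anonymity). A minimal sketch, assuming age binning as the generalization step and dictionaries as records (both illustrative choices, not the paper's pipeline):

```python
from collections import Counter

def generalize_age(age, bin_width=10):
    """Coarsen an exact age into a range label such as '30-39'."""
    lo = (age // bin_width) * bin_width
    return f"{lo}-{lo + bin_width - 1}"

def is_k_anonymous(records, quasi_ids, k):
    """True iff every combination of quasi-identifier values occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())
```

Each client would generalize its records until the check passes, then train on the anonymized data; unlike differential privacy, the utility loss comes from coarsening rather than from injected noise.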
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.