Federated Learning and Differential Privacy: Software tools analysis,
the Sherpa.ai FL framework and methodological guidelines for preserving data
privacy
- URL: http://arxiv.org/abs/2007.00914v2
- Date: Tue, 6 Oct 2020 07:39:39 GMT
- Title: Federated Learning and Differential Privacy: Software tools analysis,
the Sherpa.ai FL framework and methodological guidelines for preserving data
privacy
- Authors: Nuria Rodríguez-Barroso, Goran Stipcich, Daniel Jiménez-López,
José Antonio Ruiz-Millán, Eugenio Martínez-Cámara, Gerardo
González-Seco, M. Victoria Luzón, Miguel Ángel Veganzones, Francisco
Herrera
- Abstract summary: We present the Sherpa.ai Federated Learning framework, which is built upon a holistic view of federated learning and differential privacy.
We show how to follow the methodological guidelines with the Sherpa.ai Federated Learning framework by means of classification and regression use cases.
- Score: 8.30788601976591
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The high demand for artificial intelligence services at the edge that also
preserve data privacy has pushed research on novel machine learning
paradigms that fit those requirements. Federated learning aims to
protect data privacy through distributed learning methods that keep the data in
their data silos. Likewise, differential privacy aims to improve the
protection of data privacy by measuring the privacy loss in the communication
among the elements of federated learning. The prospective match of federated
learning and differential privacy to the challenges of data privacy protection
has led to the release of several software tools that support their
functionalities, but they lack the needed unified vision of those
techniques and a methodological workflow that supports their use. Hence, we
present the Sherpa.ai Federated Learning framework, which is built upon a
holistic view of federated learning and differential privacy. It results from
the study of how to adapt the machine learning paradigm to federated learning,
and from the definition of methodological guidelines for developing artificial
intelligence services based on federated learning and differential privacy. We
show how to follow the methodological guidelines with the Sherpa.ai Federated
Learning framework by means of classification and regression use cases.
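The mechanism the abstract describes, raw data staying in each client's silo while only clipped, noise-calibrated model updates are communicated, can be illustrated with a minimal sketch. The code below is an illustrative differentially private federated-averaging round in plain NumPy, not the Sherpa.ai API; the clipping norm, noise multiplier, and learning rate are assumptions chosen for the example:

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's update to bound its L2 sensitivity before sharing."""
    norm = np.linalg.norm(update)
    if norm == 0.0:
        return update
    return update * min(1.0, clip_norm / norm)

def dp_federated_round(global_model, client_data, clip_norm=1.0,
                       noise_mult=1.1, lr=0.1, rng=None):
    """One round of federated averaging with clipped, Gaussian-noised updates.

    Each client computes a local least-squares gradient step; only the
    clipped update leaves the client, and calibrated Gaussian noise is
    added to the aggregate, so raw data never leaves its silo.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    updates = []
    for X, y in client_data:
        grad = X.T @ (X @ global_model - y) / len(y)  # local gradient
        updates.append(clip_update(-lr * grad, clip_norm))
    avg = np.mean(updates, axis=0)
    # Noise scale follows the usual sigma * C / n calibration for averaged sums.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(client_data),
                       size=avg.shape)
    return global_model + avg + noise
```

With `noise_mult=0.0` the loop reduces to plain federated averaging; raising it trades model accuracy for a stronger privacy guarantee, which is exactly the privacy-loss accounting the abstract refers to.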
Related papers
- Towards Split Learning-based Privacy-Preserving Record Linkage [49.1574468325115]
Split Learning has been introduced to facilitate applications where user data privacy is a requirement.
In this paper, we investigate the potential of Split Learning for privacy-preserving record linkage.
arXiv Detail & Related papers (2024-09-02T09:17:05Z) - Differentially Private Federated Learning: A Systematic Review [35.13641504685795]
We propose a new taxonomy of differentially private federated learning based on definition and guarantee of various differential privacy models and scenarios.
Our work provides valuable insights into privacy-preserving federated learning and suggests practical directions for future research.
arXiv Detail & Related papers (2024-05-14T03:49:14Z) - State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey [0.0]
This paper examines the evolving landscape of machine learning (ML) and its profound impact across various sectors.
It focuses on the emerging field of Privacy-preserving Machine Learning (PPML)
As ML applications become increasingly integral to industries like telecommunications, financial technology, and surveillance, they raise significant privacy concerns.
arXiv Detail & Related papers (2024-02-25T17:31:06Z) - A chaotic maps-based privacy-preserving distributed deep learning for
incomplete and Non-IID datasets [1.30536490219656]
Federated Learning is a machine learning approach that enables the training of a deep learning model among several participants with sensitive data.
In this research, the authors employ a secure Federated Learning method with an additional layer of privacy and propose a method for addressing the non-IID challenge.
arXiv Detail & Related papers (2024-02-15T17:49:50Z) - An advanced data fabric architecture leveraging homomorphic encryption
and federated learning [10.779491433438144]
This paper introduces a secure approach for medical image analysis using federated learning and partially homomorphic encryption within a distributed data fabric architecture.
The study demonstrates the method's effectiveness through a case study on pituitary tumor classification, achieving a significant level of accuracy.
arXiv Detail & Related papers (2024-02-15T08:50:36Z) - PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind)
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - Privacy-Preserving Graph Machine Learning from Data to Computation: A
Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z) - Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z) - Non-IID data and Continual Learning processes in Federated Learning: A
long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify data statistical heterogeneity and review the most remarkable learning strategies that are able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z) - A Review of Privacy-preserving Federated Learning for the
Internet-of-Things [3.3517146652431378]
This work reviews federated learning as an approach for performing machine learning on distributed data.
We aim to protect the privacy of user-generated data as well as to reduce the communication costs associated with data transfer.
We identify the strengths and weaknesses of different methods applied to federated learning.
arXiv Detail & Related papers (2020-04-24T15:27:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.