Federated Learning and Differential Privacy: Software tools analysis,
the Sherpa.ai FL framework and methodological guidelines for preserving data
privacy
- URL: http://arxiv.org/abs/2007.00914v2
- Date: Tue, 6 Oct 2020 07:39:39 GMT
- Title: Federated Learning and Differential Privacy: Software tools analysis,
the Sherpa.ai FL framework and methodological guidelines for preserving data
privacy
- Authors: Nuria Rodríguez-Barroso, Goran Stipcich, Daniel Jiménez-López,
José Antonio Ruiz-Millán, Eugenio Martínez-Cámara, Gerardo
González-Seco, M. Victoria Luzón, Miguel Ángel Veganzones, Francisco
Herrera
- Abstract summary: We present the Sherpa.ai Federated Learning framework, which is built upon a holistic view of federated learning and differential privacy.
We show how to follow the methodological guidelines with the Sherpa.ai Federated Learning framework by means of classification and regression use cases.
- Score: 8.30788601976591
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The high demand for artificial intelligence services at the edge that also
preserve data privacy has pushed research on novel machine learning
paradigms that fit those requirements. Federated learning aims to
protect data privacy through distributed learning methods that keep the data in
its data silos. Likewise, differential privacy aims to improve the
protection of data privacy by measuring the privacy loss in the communication
among the elements of federated learning. The prospective match of federated
learning and differential privacy to the challenges of data privacy protection
has prompted the release of several software tools that support their
functionalities, but those tools lack the unified vision needed for these
techniques and a methodological workflow that supports their use. Hence, we
present the Sherpa.ai Federated Learning framework, which is built upon a
holistic view of federated learning and differential privacy. It results from
the study of how to adapt the machine learning paradigm to federated learning,
and from the definition of methodological guidelines for developing artificial
intelligence services based on federated learning and differential privacy. We
show how to follow the methodological guidelines with the Sherpa.ai Federated
Learning framework by means of classification and regression use cases.
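The abstract combines federated learning (clients train locally, only model updates leave the silo) with differential privacy (bounding the privacy loss of those communicated updates). As a minimal sketch of how these two pieces typically fit together, the following illustrates one round of federated averaging with Gaussian-mechanism noise on the aggregated client updates. The function names and the clipping/noise parameters are illustrative assumptions, not the Sherpa.ai API:

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update to bound its L2 sensitivity."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update

def dp_federated_round(global_model, client_updates, clip_norm=1.0, noise_mult=1.1):
    """Average clipped client updates and add Gaussian noise at the server."""
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean_update = np.mean(clipped, axis=0)
    # Gaussian-mechanism noise scale: proportional to the per-client
    # sensitivity (clip_norm) divided by the number of participating clients.
    sigma = noise_mult * clip_norm / len(clipped)
    noisy_update = mean_update + np.random.normal(0.0, sigma, size=mean_update.shape)
    return global_model + noisy_update

# Usage: three clients each contribute an update for a 4-parameter model.
model = np.zeros(4)
updates = [np.random.randn(4) for _ in range(3)]
new_model = dp_federated_round(model, updates)
```

The clipping step is what makes the noise calibration meaningful: without a bound on each client's contribution, no finite noise level yields a differential privacy guarantee.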
Related papers
- Advancing Personalized Federated Learning: Integrative Approaches with AI for Enhanced Privacy and Customization [0.0]
This paper proposes a novel approach that enhances PFL with cutting-edge AI techniques.
We present a model that boosts the performance of individual client models and ensures robust privacy-preserving mechanisms.
This work paves the way for a new era of truly personalized and privacy-conscious AI systems.
arXiv Detail & Related papers (2025-01-30T07:03:29Z) - Concurrent vertical and horizontal federated learning with fuzzy cognitive maps [1.104960878651584]
This research introduces a novel federated learning framework employing fuzzy cognitive maps.
It is designed to comprehensively address the challenges posed by diverse data distributions and non-identically distributed features.
The results demonstrate the effectiveness of the approach in achieving the desired learning outcomes while maintaining privacy and confidentiality standards.
arXiv Detail & Related papers (2024-12-17T12:11:14Z) - Differentially Private Federated Learning: A Systematic Review [35.13641504685795]
We propose a new taxonomy of differentially private federated learning based on definition and guarantee of various differential privacy models and scenarios.
Our work provides valuable insights into privacy-preserving federated learning and suggests practical directions for future research.
arXiv Detail & Related papers (2024-05-14T03:49:14Z) - A chaotic maps-based privacy-preserving distributed deep learning for
incomplete and Non-IID datasets [1.30536490219656]
Federated Learning is a machine learning approach that enables the training of a deep learning model among several participants with sensitive data.
In this research, the authors employ a secured Federated Learning method with an additional layer of privacy and propose a method for addressing the non-IID challenge.
arXiv Detail & Related papers (2024-02-15T17:49:50Z) - An advanced data fabric architecture leveraging homomorphic encryption
and federated learning [10.779491433438144]
This paper introduces a secure approach for medical image analysis using federated learning and partially homomorphic encryption within a distributed data fabric architecture.
The study demonstrates the method's effectiveness through a case study on pituitary tumor classification, achieving a significant level of accuracy.
arXiv Detail & Related papers (2024-02-15T08:50:36Z) - Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper surveys existing federated unlearning approaches, examining their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z) - PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind)
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns is subject to stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - Privacy-Preserving Graph Machine Learning from Data to Computation: A
Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z) - Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z) - Non-IID data and Continual Learning processes in Federated Learning: A
long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify data statistical heterogeneity and review the most remarkable learning strategies that are able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.