Decentralized Federated Learning Preserves Model and Data Privacy
- URL: http://arxiv.org/abs/2102.00880v1
- Date: Mon, 1 Feb 2021 14:38:54 GMT
- Title: Decentralized Federated Learning Preserves Model and Data Privacy
- Authors: Thorsten Wittkopp and Alexander Acker
- Abstract summary: We propose a fully decentralized approach that allows knowledge to be shared between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that an initially untrained student model, trained on its teacher's output, reaches F1-scores comparable to the teacher.
- Score: 77.454688257702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing complexity of IT systems requires solutions that support
operations in case of failure. Therefore, Artificial Intelligence for System
Operations (AIOps) is a field of research that is attracting increasing attention,
both in academia and industry. One of the major issues in this area is the lack of
access to adequately labeled data, which is largely due to legal protection
regulations or industrial confidentiality. Methods to mitigate this stem from the
area of federated learning, whereby no direct access to training data is required.
Original approaches utilize a central instance to perform the model synchronization
by periodic aggregation of all model parameters. However, there are many scenarios
where trained models cannot be published, since they either constitute confidential
knowledge or training data could be reconstructed from them. Furthermore, the
central instance needs to be trusted and is a single point of failure. As a
solution, we propose a fully decentralized approach that allows knowledge to be
shared between trained models. Neither original training data nor model parameters
need to be transmitted. The concept relies on teacher and student roles that are
assigned to the models, whereby students are trained on the output of their
teachers via synthetically generated input data. We conduct a case study on log
anomaly detection. The results show that an initially untrained student model,
trained on the teacher's output, reaches F1-scores comparable to the teacher. In
addition, we demonstrate that our method allows the synchronization of several
models trained on distinct training data subsets.
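A minimal sketch of the teacher-student concept described in the abstract, assuming a small feed-forward classifier and Gaussian noise as the synthetic input generator (the paper's actual architectures and synthetic-data generation are not reproduced here). The point it illustrates is that the student only ever sees the teacher's outputs, never its parameters or training data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IN_DIM, NUM_CLASSES = 64, 2  # e.g. a log-line feature vector, anomaly yes/no (illustrative)

def make_model():
    return nn.Sequential(nn.Linear(IN_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_CLASSES))

teacher = make_model()   # assumed to be already trained on private log data
student = make_model()   # untrained model belonging to another party
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    # Synthetically generated input data: no real log lines are exchanged.
    x = torch.randn(256, IN_DIM)
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x), dim=1)   # only the teacher's outputs travel
    # Train the student to reproduce the teacher's output distribution.
    loss = F.kl_div(F.log_softmax(student(x), dim=1), soft_targets, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same loop can be repeated with several teachers in turn, which is how a decentralized setup could synchronize models trained on distinct data subsets without a central aggregator.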
Related papers
- Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
arXiv Detail & Related papers (2024-06-18T11:43:20Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- A Survey on Class Imbalance in Federated Learning [6.632451878730774]
Federated learning allows multiple client devices in a network to jointly train a machine learning model without direct exposure of clients' data.
It has been found that models trained with federated learning usually have worse performance than their counterparts trained in the standard centralized learning mode.
arXiv Detail & Related papers (2023-03-21T08:34:23Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
You are given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- Application of Federated Learning in Building a Robust COVID-19 Chest X-ray Classification Model [0.0]
Federated Learning (FL) helps AI models to generalize better without moving all the data to a central server.
We trained a deep learning model to solve a binary classification problem of predicting the presence or absence of COVID-19.
arXiv Detail & Related papers (2022-04-22T05:21:50Z)
- Decentralized Federated Learning via Mutual Knowledge Transfer [37.5341683644709]
Decentralized federated learning (DFL) is a learning problem arising in Internet of Things (IoT) systems.
We propose a mutual knowledge transfer (Def-KT) algorithm where local clients fuse models by transferring their learnt knowledge to each other.
Our experiments on the MNIST, Fashion-MNIST, and CIFAR10 datasets reveal that the proposed Def-KT algorithm significantly outperforms the baseline DFL methods.
arXiv Detail & Related papers (2020-12-24T01:43:53Z)
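For comparison with the teacher-student approach above, here is a hedged sketch of the mutual knowledge transfer idea behind Def-KT, in the spirit of deep mutual learning: each of two clients updates its model with a supervised loss on its own local batch plus a KL term towards the other client's predictions. The model, the random data stand-ins, and the loss weighting are illustrative assumptions, not the exact Def-KT protocol.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # Small MLP as a stand-in for the clients' models (assumption, e.g. MNIST-sized inputs).
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))

model_a, model_b = make_model(), make_model()
opt_a = torch.optim.SGD(model_a.parameters(), lr=0.01)
opt_b = torch.optim.SGD(model_b.parameters(), lr=0.01)

def mutual_step(model, peer, opt, x, y, alpha=0.5):
    """One update: supervised loss on local data plus KL towards the peer's output."""
    peer_probs = F.softmax(peer(x), dim=1).detach()
    logits = model(x)
    loss = F.cross_entropy(logits, y) + alpha * F.kl_div(
        F.log_softmax(logits, dim=1), peer_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

# Each client holds a different local data subset (random stand-ins for illustration).
x_a, y_a = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
x_b, y_b = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))

for _ in range(100):
    mutual_step(model_a, model_b, opt_a, x_a, y_a)
    mutual_step(model_b, model_a, opt_b, x_b, y_b)
```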