Scatterbrained: A flexible and expandable pattern for decentralized
machine learning
- URL: http://arxiv.org/abs/2112.07718v1
- Date: Tue, 14 Dec 2021 19:39:35 GMT
- Title: Scatterbrained: A flexible and expandable pattern for decentralized
machine learning
- Authors: Miller Wilt, Jordan K. Matelsky, Andrew S. Gearhart
- Abstract summary: Federated machine learning is a technique for training a model across multiple devices without exchanging data between them.
We suggest a flexible framework for decentralizing the federated learning pattern, and provide an open-source, reference implementation compatible with PyTorch.
- Score: 1.2891210250935146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated machine learning is a technique for training a model across
multiple devices without exchanging data between them. Because data remains
local to each compute node, federated learning is well-suited for use-cases in
fields where data is carefully controlled, such as medicine, or in domains with
bandwidth constraints. One weakness of this approach is that most federated
learning tools rely upon a central server to perform workload delegation and to
produce a single shared model. Here, we suggest a flexible framework for
decentralizing the federated learning pattern, and provide an open-source,
reference implementation compatible with PyTorch.
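The reference implementation defines its own abstractions; purely as an illustrative sketch of the decentralized pattern (not the Scatterbrained API), a peer might train on its local data and then average its PyTorch state dict with those received from neighboring peers:

```python
# Illustrative sketch of the decentralized pattern (not the Scatterbrained
# API): a peer trains on its private data, then averages its parameters
# with state dicts received from neighboring peers over the network.
import copy
import torch

def local_step(model, loss_fn, optimizer, batch):
    """One gradient step on this peer's private data."""
    x, y = batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def merge_with_peers(model, peer_state_dicts):
    """Replace each parameter with the mean over self and all peers."""
    merged = copy.deepcopy(model.state_dict())
    for key in merged:
        stacked = torch.stack(
            [merged[key].float()] + [sd[key].float() for sd in peer_state_dicts])
        merged[key] = stacked.mean(dim=0).to(merged[key].dtype)
    model.load_state_dict(merged)
```

Because every peer runs the same merge step with whatever subset of neighbors it can reach, no central server is needed to delegate work or to own the shared model.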
Related papers
- Update Selective Parameters: Federated Machine Unlearning Based on Model Explanation [46.86767774669831]
We propose a more effective and efficient federated unlearning scheme based on the concept of model explanation.
We select the most influential channels within an already-trained model for the data that need to be unlearned.
arXiv Detail & Related papers (2024-06-18T11:43:20Z)
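The summary above gives no implementation details; the following is a speculative sketch of one plausible reading of its channel-selection step, scoring the output channels of a convolutional layer by the gradient energy the forget set induces and reinitializing the top scorers (all names here are hypothetical):

```python
# Hypothetical sketch of explanation-guided unlearning: score each output
# channel of a conv layer by the gradient energy the forget set induces,
# then reinitialize the most influential channels before fine-tuning.
import torch
import torch.nn as nn

def influential_channels(model, conv, loss_fn, forget_loader, k=4):
    """Score conv's output channels by gradient energy from the forget set."""
    scores = torch.zeros(conv.out_channels)
    for x, y in forget_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        scores += conv.weight.grad.pow(2).sum(dim=(1, 2, 3))
    return scores.topk(k).indices

def reset_channels(conv, channels):
    """Reinitialize the selected output channels in place."""
    fresh = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, bias=False)
    with torch.no_grad():
        conv.weight[channels] = fresh.weight[channels]
```

After resetting, the model would typically be fine-tuned briefly on the retained data.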
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
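As background for the FL side of the contrast above (a standard FedAvg-style construction, not this paper's contrastive-distillation method), server-side aggregation reduces to a weighted average of client state dicts:

```python
# Standard FedAvg-style server aggregation (background for the FL side of
# the contrast above, not this paper's contrastive-distillation method):
# average client state dicts, weighting each by its local dataset size.
import torch

def fedavg(client_states, client_sizes):
    total = float(sum(client_sizes))
    averaged = {}
    for key in client_states[0]:
        averaged[key] = sum(
            sd[key].float() * (n / total)
            for sd, n in zip(client_states, client_sizes))
        averaged[key] = averaged[key].to(client_states[0][key].dtype)
    return averaged
```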
- FedGrad: Optimisation in Decentralised Machine Learning [0.0]
Federated learning is a paradigm in which machine learning models are trained in a distributed fashion.
We propose an adaptive federated optimisation method, along with several other ideas in the field of federated learning.
arXiv Detail & Related papers (2022-11-07T15:07:56Z)
- Confederated Learning: Federated Learning with Decentralized Edge Servers [42.766372620288585]
Federated learning (FL) is an emerging machine learning paradigm that allows model training to be accomplished without aggregating data at a central server.
We propose a ConFederated Learning (CFL) framework, in which each server is connected with an individual set of devices.
The proposed algorithm employs a random scheduling policy which randomly selects a subset of devices to access their respective servers at each iteration.
arXiv Detail & Related papers (2022-05-30T07:56:58Z)
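A toy sketch of such a random scheduling policy (all names hypothetical): at every iteration, each edge server samples a fraction of its own device pool to participate:

```python
# Toy sketch of a CFL-style random scheduling policy (names hypothetical):
# each edge server owns its own device pool and, at every iteration,
# samples a random subset of those devices to compute updates.
import random

def schedule(server_devices, fraction=0.3, seed=None):
    """server_devices: dict mapping server id -> list of device ids."""
    rng = random.Random(seed)
    return {server: rng.sample(devices, max(1, int(fraction * len(devices))))
            for server, devices in server_devices.items()}

# Example: two edge servers, each connected to its own set of devices.
plan = schedule({"server_a": list(range(10)),
                 "server_b": list(range(10, 20))}, seed=0)
```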
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating the locally-computed parameter updates with the training data from spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
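The paper's full update also involves the Hessian terms mentioned above; as a sketch of just the geometric-mean ingredient named in its title, one can combine per-silo gradients elementwise, keeping a component only where all silos agree on its sign:

```python
# Sketch of the geometric-mean ingredient (not FedILC's full update):
# combine per-silo gradient vectors elementwise, keeping a component only
# where every silo agrees on its sign, with magnitudes merged geometrically.
import torch

def geometric_mean_grads(silo_grads, eps=1e-12):
    g = torch.stack(silo_grads)              # (num_silos, num_params)
    signs = torch.sign(g)
    agree = (signs == signs[0]).all(dim=0)   # unanimous sign agreement
    log_magnitude = torch.log(g.abs() + eps).mean(dim=0)
    return torch.exp(log_magnitude) * signs[0] * agree
```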
- FLHub: a Federated Learning model sharing service [0.7614628596146599]
We propose Federated Learning Hub (FLHub) as a sharing service for machine learning models.
FLHub allows users to upload, download, and contribute to models developed by other developers, much as GitHub does for code.
We demonstrate that a forked model can finish training faster than the existing model and that learning progresses more quickly in each federated round.
arXiv Detail & Related papers (2022-02-14T06:02:55Z)
- RelaySum for Decentralized Deep Learning on Heterogeneous Data [71.36228931225362]
In decentralized machine learning, workers compute model updates on their local data.
Because the workers communicate only with a few neighbors and without central coordination, these updates propagate progressively over the network.
This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers.
arXiv Detail & Related papers (2021-10-08T14:55:32Z)
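RelaySum's relay mechanism itself is more involved; the plain gossip-averaging baseline it builds on, in which each worker mixes its parameters with its neighbors' after every local step, looks roughly like this:

```python
# Plain gossip-averaging baseline (the setting RelaySum improves on, not
# RelaySum itself): after a local step, each worker replaces its
# parameters with a uniform average over itself and its neighbors.
import torch

def gossip_average(params, neighbors):
    """params: list of 1-D parameter tensors, one per worker.
    neighbors: adjacency list, e.g. a ring {0: [1, 3], 1: [0, 2], ...}."""
    new_params = []
    for i, p in enumerate(params):
        group = [p] + [params[j] for j in neighbors[i]]
        new_params.append(torch.stack(group).mean(dim=0))
    return new_params
```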
- The Effect of Training Parameters and Mechanisms on Decentralized Federated Learning based on MNIST Dataset [0.0]
We introduce the notion of Decentralized Federated Learning (DFL).
All experiments are run on the MNIST handwritten digits dataset.
We observe failures in training when the variance between model weights is too large.
arXiv Detail & Related papers (2021-08-07T19:37:43Z)
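A simple way to monitor that failure mode (a diagnostic sketch, not the paper's procedure) is to track the variance of each weight across the node models:

```python
# Diagnostic sketch (not the paper's procedure): measure how far node
# models have drifted apart via the variance of each weight across nodes;
# large values flag the divergence failures the paper observes.
import torch

def weight_variance(state_dicts):
    per_key = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        per_key[key] = stacked.var(dim=0).mean().item()
    return per_key  # one drift score per parameter tensor
```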
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
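Concretely, a GTV objective of this shape (a sketch with hypothetical names, treating each node's model as a flat weight vector) sums local losses plus a penalty coupling models across the empirical graph's edges:

```python
# Sketch of a GTV objective (hypothetical names; each node's model is a
# flat weight vector w_i): local fit terms plus lam * sum over graph edges
# (i, j) of ||w_i - w_j||, which pulls connected nodes' models together.
import torch

def gtv_objective(weights, local_loss, edges, lam=0.1):
    """weights: list of 1-D weight tensors, one per node.
    local_loss: callable mapping (node index, weights) -> scalar tensor.
    edges: iterable of (i, j) node-index pairs in the empirical graph."""
    fit = sum(local_loss(i, w) for i, w in enumerate(weights))
    tv = sum(torch.norm(weights[i] - weights[j]) for i, j in edges)
    return fit + lam * tv
```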
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
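The user-center matching described above suggests a k-means-like alternation (a hypothetical sketch, not the paper's exact algorithm): assign each user model to its nearest center, then recompute each center as the mean of its members:

```python
# Hypothetical k-means-style sketch of multi-center aggregation (not the
# paper's exact algorithm): match each user model to its nearest center,
# then recompute every center as the mean of its assigned user models.
import torch

def assign_and_update(user_weights, center_weights):
    users = torch.stack(user_weights)        # (num_users, dim)
    centers = torch.stack(center_weights)    # (num_centers, dim)
    assignment = torch.cdist(users, centers).argmin(dim=1)
    new_centers = []
    for k in range(centers.shape[0]):
        members = users[assignment == k]
        new_centers.append(members.mean(dim=0) if len(members) else centers[k])
    return assignment, new_centers
```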
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences of its use.