Mixed Federated Learning: Joint Decentralized and Centralized Learning
- URL: http://arxiv.org/abs/2205.13655v1
- Date: Thu, 26 May 2022 22:22:15 GMT
- Title: Mixed Federated Learning: Joint Decentralized and Centralized Learning
- Authors: Sean Augenstein, Andrew Hard, Lin Ning, Karan Singhal, Satyen Kale,
Kurt Partridge, Rajiv Mathews
- Abstract summary: Federated learning (FL) enables learning from decentralized privacy-sensitive data.
This paper introduces mixed FL, which incorporates an additional loss term calculated at the coordinating server.
- Score: 10.359026922702142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables learning from decentralized privacy-sensitive
data, with computations on raw data confined to take place at edge clients.
This paper introduces mixed FL, which incorporates an additional loss term
calculated at the coordinating server (while maintaining FL's private data
restrictions). There are numerous benefits. For example, additional datacenter
data can be leveraged to jointly learn from centralized (datacenter) and
decentralized (federated) training data and better match an expected inference
data distribution. Mixed FL also enables offloading some intensive computations
(e.g., embedding regularization) to the server, greatly reducing communication
and client computation load. For these and other mixed FL use cases, we present
three algorithms: PARALLEL TRAINING, 1-WAY GRADIENT TRANSFER, and 2-WAY
GRADIENT TRANSFER. We state convergence bounds for each, and give intuition on
which are suited to particular mixed FL problems. Finally we perform extensive
experiments on three tasks, demonstrating that mixed FL can blend training data
to achieve an oracle's accuracy on an inference distribution, and can reduce
communication and computation overhead by over 90%. Our experiments confirm
theoretical predictions of how algorithms perform under different mixed FL
problem settings.
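The abstract names three algorithms but the listing carries no pseudocode. The following is a minimal NumPy sketch of the PARALLEL TRAINING idea as it reads from the abstract, assuming a toy linear model: each round blends an averaged federated client delta with a gradient step on a loss computed at the coordinating server over datacenter data. All names here (`mixed_fl_round`, `mix_weight`, the data generators) are illustrative assumptions, not the paper's implementation.
```python
# Minimal sketch of the PARALLEL TRAINING idea from the abstract: each round the
# server averages federated client updates AND takes a gradient step on a loss
# computed over centralized (datacenter) data, then blends the two.
# Names and hyperparameters are illustrative assumptions, not the paper's API.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])


def make_data(n, shift):
    """Toy regression data; `shift` moves the feature distribution."""
    X = rng.normal(shift, 1.0, (n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y


def grad_mse(w, X, y):
    """Gradient of the mean-squared error of a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)


def client_update(w, X, y, lr=0.1, local_steps=5):
    """Local SGD on one client's private data; only the model delta leaves the client."""
    w_local = w.copy()
    for _ in range(local_steps):
        w_local -= lr * grad_mse(w_local, X, y)
    return w_local - w


def mixed_fl_round(w, client_data, server_data, lr=0.1, mix_weight=0.5):
    """One round: blend the averaged federated delta with a server-side gradient step."""
    fed_delta = np.mean([client_update(w, X, y, lr) for X, y in client_data], axis=0)
    X_s, y_s = server_data
    central_delta = -lr * grad_mse(w, X_s, y_s)  # the extra loss term computed at the server
    return w + mix_weight * fed_delta + (1.0 - mix_weight) * central_delta


# Four clients draw features around 0; the datacenter data is shifted, mimicking a
# mismatch between decentralized training data and the expected inference distribution.
clients = [make_data(50, 0.0) for _ in range(4)]
datacenter = make_data(200, 1.0)

w = np.zeros(2)
for _ in range(100):
    w = mixed_fl_round(w, clients, datacenter)
print("learned weights:", w, "(true:", true_w, ")")
```
Shifting the datacenter features relative to the client features loosely mirrors the abstract's use case of blending training data to match an expected inference distribution.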
Related papers
- A Framework for testing Federated Learning algorithms using an edge-like environment [0.0]
Federated Learning (FL) is a machine learning paradigm in which many clients cooperatively train a single centralized model while keeping their data private and decentralized.
It is non-trivial to accurately evaluate the contributions of local models in global centralized model aggregation.
This is an example of a major challenge in FL, commonly known as data imbalance or class imbalance.
In this work, a framework is proposed and implemented to assess FL algorithms in an easier and more scalable way.
arXiv Detail & Related papers (2024-07-17T19:52:53Z)
- Federated Learning with Reduced Information Leakage and Computation [17.069452700698047]
Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.
This paper introduces Upcycled-FL, a strategy that applies first-order approximation at every even round of model update.
Under this strategy, half of the FL updates incur no information leakage and require much less computational and transmission costs.
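The listing does not spell out the first-order approximation. One plausible reading, sketched below purely as an assumption, is that on even rounds the server extrapolates from the two most recent global models instead of contacting clients, so those rounds touch no client data and cost no client computation or transmission; `federated_round`, the coefficient 0.5, and the toy data are all hypothetical.
```python
# Hedged sketch of the even-round idea described above: odd rounds run a normal
# federated update, even rounds reuse the previous two global models via a
# first-order extrapolation, so no client is contacted on those rounds.
# The extrapolation rule and coefficient are assumptions, not the paper's update.
import numpy as np

rng = np.random.default_rng(1)
clients = [rng.normal(c, 1.0, size=(40, 3)) for c in range(4)]  # toy private datasets


def federated_round(w, lr=0.1):
    """One ordinary FL round: each client nudges w toward its own data mean."""
    deltas = [lr * (X.mean(axis=0) - w) for X in clients]
    return w + np.mean(deltas, axis=0)


w_prev = np.zeros(3)
w = federated_round(w_prev)
for t in range(2, 40):
    if t % 2 == 0:
        w_next = w + 0.5 * (w - w_prev)  # even round: first-order reuse, no client contact
    else:
        w_next = federated_round(w)      # odd round: real federated update
    w_prev, w = w, w_next
print("estimated global mean:", w)
```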
arXiv Detail & Related papers (2023-10-10T06:22:06Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable problem, providing closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
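The summary names one-way offline knowledge distillation on unlabeled public data without further detail. The sketch below shows one common way to realize that pattern, assuming toy logistic-regression clients: each client sends only its soft predictions on a shared public set, and the server fits the central model to the averaged ensemble predictions, so neither raw client data nor gradients leave the clients. Every name and hyperparameter here is an assumption.
```python
# Minimal sketch of one-way offline ensemble distillation as described above:
# clients score a shared *unlabeled public* dataset with their local models, the
# server averages those soft predictions and fits the central model to them.
# Model form (linear softmax classifier) and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
D = 5  # feature dimension


def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


# Pretend each client already trained a local linear classifier on its private data.
client_weights = [rng.normal(size=(D, 3)) for _ in range(5)]

# Shared unlabeled public data (no labels are ever needed).
X_pub = rng.normal(size=(500, D))

# One-way transfer: clients send only their soft predictions on the public data.
ensemble_soft = np.mean([softmax(X_pub @ W) for W in client_weights], axis=0)

# Server distills a central model by minimizing cross-entropy to the ensemble targets.
W_central = np.zeros((D, 3))
for _ in range(300):
    p = softmax(X_pub @ W_central)
    grad = X_pub.T @ (p - ensemble_soft) / len(X_pub)
    W_central -= 0.5 * grad
print("agreement with ensemble:",
      (softmax(X_pub @ W_central).argmax(1) == ensemble_soft.argmax(1)).mean())
```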
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation [22.151404603413752]
Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model.
Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution.
We develop a privacy preserving and communication efficient method in a FL framework with one-shot offline knowledge distillation.
arXiv Detail & Related papers (2022-09-10T05:20:31Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Sample Selection with Deadline Control for Efficient Federated Learning on Heterogeneous Clients [8.350621280672891]
Federated Learning (FL) trains a machine learning model on distributed clients without exposing individual data.
We propose FedBalancer, a systematic FL framework that actively selects clients' training samples.
We show that FedBalancer improves the time-to-accuracy performance by 1.22-4.62x while improving the model accuracy by 1.0-3.3%.
arXiv Detail & Related papers (2022-01-05T13:35:35Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
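The round structure quoted above (broadcast trained models, compete to generate a block, aggregate from the block) can be mirrored in a short simulation. In the toy sketch below the block-generation competition is replaced by a random draw and one lazy client simply copies a received model; both choices are assumptions made only to illustrate the summary, not the paper's protocol.
```python
# Toy simulation of one BLADE-FL-style round as summarized above: local training,
# broadcast, block generation by a "winning" client, aggregation from the block.
# The random-draw "mining" and the lazy-client behaviour are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
N_CLIENTS, LAZY = 5, {4}                 # client 4 is lazy (assumption for illustration)
data_means = rng.normal(size=N_CLIENTS)  # each client's toy data is a scalar mean


def local_train(w, i, lr=0.3):
    """Toy local training: move the model toward this client's own data mean."""
    return w + lr * (data_means[i] - w)


w_global = 0.0
for _ in range(20):
    models = []
    for i in range(N_CLIENTS):
        if i in LAZY:
            models.append(models[0] if models else w_global)  # lazy: copy, skip training
        else:
            models.append(local_train(w_global, i))           # honest local training
    miner = int(rng.integers(N_CLIENTS))               # stand-in for the mining competition
    block = {"miner": miner, "models": list(models)}   # winner records the received models
    w_global = float(np.mean(block["models"]))         # everyone aggregates from the block
print("global model:", w_global, "honest data mean:", data_means[:4].mean())
```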
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Improving Accuracy of Federated Learning in Non-IID Settings [11.908715869667445]
Federated Learning (FL) is a decentralized machine learning protocol that allows a set of participating agents to collaboratively train a model without sharing their data.
It has been observed that the performance of FL is closely tied with the local data distributions of agents.
In this work, we identify four simple techniques that can improve the performance of trained models without incurring any additional communication overhead to FL.
arXiv Detail & Related papers (2020-10-14T21:02:14Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
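The summary credits COTAF's precoding for the improved convergence but gives no formula. The sketch below illustrates generic over-the-air aggregation under assumed simplifications (the channel delivers the noisy sum of simultaneous transmissions, and a power-normalizing scale applied before transmission and inverted at the server stands in for the precoder); it is not COTAF's actual precoding rule.
```python
# Generic over-the-air FL sketch in the spirit of the summary above: clients transmit
# their updates simultaneously, the server receives only their noisy analog sum, and a
# simple power-normalizing scale (an assumption, not COTAF's precoder) is applied before
# transmission and undone at the server.
import numpy as np

rng = np.random.default_rng(4)
N, P, NOISE = 10, 1.0, 0.05           # clients, transmit-power budget, channel noise std
targets = rng.normal(size=(N, 2))     # each client's toy local optimum

w = np.zeros(2)
for _ in range(200):
    updates = np.array([0.1 * (m - w) for m in targets])   # local gradient steps
    scale = np.sqrt(P) / (np.abs(updates).max() + 1e-12)   # scale updates to the power budget
    tx = scale * updates                                    # what each client transmits
    rx = tx.sum(axis=0) + NOISE * rng.normal(size=2)        # the air sums them, plus noise
    w = w + rx / (scale * N)                                # server undoes the scale, averages
print("learned model:", w, "vs. mean of client optima:", targets.mean(axis=0))
```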
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.