A Novel Optimized Asynchronous Federated Learning Framework
- URL: http://arxiv.org/abs/2111.09487v1
- Date: Thu, 18 Nov 2021 02:52:49 GMT
- Title: A Novel Optimized Asynchronous Federated Learning Framework
- Authors: Zhicheng Zhou, Hailong Chen, Kunhua Li, Fei Hu, Bingjie Yan, Jieren
Cheng, Xuyan Wei, Bernie Liu, Xiulai Li, Fuwen Chen, Yongji Sui
- Abstract summary: This paper proposes VAFL, a novel Asynchronous Federated Learning framework.
VAFL reduces the number of communications by about 51.02% with a 48.23% average communication compression rate.
- Score: 1.7541806468876109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has been applied in many fields since it was
first proposed, such as credit assessment and medicine. Because clients differ
in network and computing resources, they may not update their gradients at the
same time, which can mean long waits or idle periods. This is why Asynchronous
Federated Learning (AFL) methods are needed. The main bottleneck in AFL is
communication: finding a balance between model performance and communication
cost is a central challenge in AFL. This paper proposes a novel AFL framework,
VAFL, and verifies its performance through extensive experiments. The
experiments show that VAFL reduces the number of communications by about 51.02%
with a 48.23% average communication compression rate, and allows the model to
converge faster. The code is available at
https://github.com/RobAI-Lab/VAFL
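
To make the setting concrete: in asynchronous FL the server applies each client's update as it arrives, instead of waiting at a synchronization barrier, and compression reduces what each client transmits. Below is a minimal sketch of that general pattern, assuming top-k gradient sparsification and staleness-discounted server updates; it illustrates the idea only and is not the authors' VAFL implementation (see the linked repository for that).

```python
import numpy as np

def top_k_sparsify(grad, k):
    # Keep only the k largest-magnitude entries and send (indices, values):
    # a common, hypothetical stand-in for the compression step.
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

class AsyncServer:
    # Toy asynchronous server: applies each client update on arrival and
    # down-weights stale updates instead of waiting for stragglers.
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr
        self.version = 0  # global model version counter

    def apply_update(self, idx, values, client_version):
        staleness = self.version - client_version
        alpha = self.lr / (1.0 + staleness)  # stale updates count for less
        update = np.zeros_like(self.w)
        update[idx] = values
        self.w -= alpha * update
        self.version += 1

# A client pulls (w, version), computes a local gradient whenever it can,
# compresses it, and pushes it back; there is no synchronization barrier.
server = AsyncServer(dim=10)
rng = np.random.default_rng(0)
grad = rng.normal(size=10)             # stand-in for a real local gradient
idx, vals = top_k_sparsify(grad, k=3)  # 70% fewer values transmitted
server.apply_update(idx, vals, client_version=0)
```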
Related papers
- Analytic Federated Learning [34.15482252496494]
We introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to the federated learning (FL) community.
Our AFL draws inspiration from analytic learning -- a gradient-free technique that trains neural networks with analytical solutions in one epoch.
We conduct experiments across various FL settings including extremely non-IID ones, and scenarios with a large number of clients.
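
For intuition, the closed-form idea can be shown on a linear model: each client shares only aggregate statistics, and the server solves a ridge-regression system once. This is a minimal sketch of the analytic-learning flavor under that simplifying assumption, not the paper's actual algorithm.

```python
import numpy as np

def client_statistics(X, y):
    # Each client shares only aggregate statistics, never raw data.
    return X.T @ X, X.T @ y

def server_solve(stats, dim, reg=1e-3):
    # One communication round: sum the statistics and solve ridge regression
    # in closed form; no gradients, no iterative training.
    A, b = reg * np.eye(dim), np.zeros(dim)
    for XtX, Xty in stats:
        A += XtX
        b += Xty
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
true_w = rng.normal(size=5)
stats = []
for _ in range(3):                      # three clients with local data
    X = rng.normal(size=(40, 5))
    y = X @ true_w + 0.01 * rng.normal(size=40)
    stats.append(client_statistics(X, y))
print(np.allclose(server_solve(stats, dim=5), true_w, atol=0.05))  # True
```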
arXiv Detail & Related papers (2024-05-25T13:58:38Z)
- Communication-Efficient Vertical Federated Learning with Limited Overlapping Samples [34.576230628844506]
We propose a vertical federated learning (VFL) framework called one-shot VFL.
In our proposed framework, the clients only need to communicate with the server once or only a few times.
Our methods can improve the accuracy by more than 46.5% and reduce the communication cost by more than 330× compared with state-of-the-art VFL methods.
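
Stripped to its essentials, the one-shot communication pattern looks like this: parties holding disjoint feature columns each send a locally computed embedding a single time, and the server trains a head on the concatenation. The sketch below uses a fixed random projection as a stand-in for each party's local encoder; it illustrates the communication pattern only, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# Two parties hold disjoint feature columns of the same n samples.
X_a, X_b = rng.normal(size=(n, 4)), rng.normal(size=(n, 3))
y = (X_a[:, 0] + X_b[:, 1] > 0).astype(float)    # toy labels

def local_embedding(X, out_dim, seed):
    # Stand-in for a party's locally trained encoder (here: a fixed random
    # projection, an assumption made purely for illustration).
    W = np.random.default_rng(seed).normal(size=(X.shape[1], out_dim))
    return np.tanh(X @ W)

# Each party transmits its embedding exactly once.
H = np.hstack([local_embedding(X_a, 4, seed=10),
               local_embedding(X_b, 4, seed=11)])

# Server-side head: a single closed-form ridge fit on the received embeddings.
w = np.linalg.solve(H.T @ H + 1e-2 * np.eye(H.shape[1]), H.T @ (2 * y - 1))
acc = ((H @ w > 0) == (y > 0.5)).mean()
print(f"accuracy after one round of communication: {acc:.2f}")
```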
arXiv Detail & Related papers (2023-03-28T19:30:23Z)
- Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference [72.61732440246954]
Large pre-trained language models often lack logical consistency across test inputs.
We propose a framework, ConCoRD, for boosting the consistency and accuracy of pre-trained NLP models.
We show that ConCoRD consistently boosts accuracy and consistency of off-the-shelf closed-book QA and VQA models.
arXiv Detail & Related papers (2022-11-21T21:58:30Z)
- Content Popularity Prediction Based on Quantized Federated Bayesian Learning in Fog Radio Access Networks [76.16527095195893]
We investigate the content popularity prediction problem in cache-enabled fog radio access networks (F-RANs).
In order to predict the content popularity with high accuracy and low complexity, we propose a Gaussian process based regressor to model the content request pattern.
We utilize Bayesian learning, which is robust to overfitting, to train the model parameters.
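
As a rough illustration of Gaussian-process regression on a request-pattern signal (leaving out the paper's federated and quantized components entirely), here is a sketch using scikit-learn; the kernel choice and the toy data are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy request-pattern data: a smooth cycle plus observation noise.
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 40)[:, None]              # time index
requests = 100 + 50 * np.sin(t).ravel() + rng.normal(0, 5, 40)

# The RBF kernel models the smooth request pattern; the WhiteKernel absorbs
# noise. Hyperparameters are fit by maximizing the marginal likelihood,
# which is what tempers overfitting in the Bayesian treatment.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, requests)

mean, std = gp.predict(np.array([[10.5]]), return_std=True)
print(f"predicted requests at t=10.5: {mean[0]:.0f} +/- {std[0]:.0f}")
```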
arXiv Detail & Related papers (2022-06-23T03:05:12Z)
- Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC [66.71660672526349]
We propose a sparsely updating variant of the Fully Connected (FC) layer, named Partial FC (PFC).
In each iteration, positive class centers and a random subset of negative class centers are selected to compute the margin-based softmax loss.
The computing requirement, the probability of inter-class conflict, and the frequency of passive updates on tail class centers are dramatically reduced.
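
The sampling idea is simple to sketch: score each feature only against its positive class center plus a random subset of negative centers, rather than the full classification layer. The PyTorch snippet below shows that shape of computation with margin terms omitted; the sampling details are simplified relative to the paper's actual PFC implementation.

```python
import torch
import torch.nn.functional as F

def partial_fc_loss(features, labels, centers, sample_ratio=0.1, scale=64.0):
    # Score features only against the positive class centers plus a random
    # subset of negative centers (margin terms omitted in this sketch).
    num_classes = centers.shape[0]
    positive = labels.unique()
    num_sample = max(int(sample_ratio * num_classes), positive.numel())
    perm = torch.randperm(num_classes)            # candidate negatives
    mask = torch.ones(num_classes, dtype=torch.bool)
    mask[positive] = False                        # exclude positives
    negatives = perm[mask[perm]][: num_sample - positive.numel()]
    sampled = torch.cat([positive, negatives])
    # Remap original class ids to column indices within the sampled subset.
    remap = torch.full((num_classes,), -1, dtype=torch.long)
    remap[sampled] = torch.arange(sampled.numel())
    logits = F.normalize(features) @ F.normalize(centers[sampled]).t()
    return F.cross_entropy(logits * scale, remap[labels])

# Toy usage: 8 samples, 512-d features, 100k identities. Only the sampled
# class centers take part in the forward/backward pass.
feats = torch.randn(8, 512)
labels = torch.randint(0, 100_000, (8,))
centers = torch.randn(100_000, 512, requires_grad=True)
partial_fc_loss(feats, labels, centers).backward()
```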
arXiv Detail & Related papers (2022-03-28T14:33:21Z)
- Video and Text Matching with Conditioned Embeddings [81.81028089100727]
We present a method for matching a text sentence from a given corpus to a given video clip and vice versa.
In this work, we encode the dataset in a way that takes into account the query's relevant information.
We show that our conditioned representation can be transferred to video-guided machine translation, where we improved the current results on VATEX.
arXiv Detail & Related papers (2021-10-21T17:31:50Z)
- AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization [159.75564904944707]
We propose an asynchronous quasi-Newton (AsySQN) framework for vertical federated learning (VFL).
The proposed algorithms take descent steps scaled by approximate Hessian information, without calculating the inverse Hessian matrix explicitly.
We show that the adopted asynchronous computation can make better use of the computation resource.
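
To illustrate what "scaled descent steps without an explicit inverse Hessian" can look like, here is a toy diagonal quasi-Newton scheme that estimates per-coordinate curvature from successive gradient differences. It conveys only the flavor of such scaling on a badly conditioned quadratic; it is not the AsySQN algorithm, and the federated/asynchronous machinery is omitted.

```python
import numpy as np

def quadratic_grad(w, A, b):
    return A @ w - b  # gradient of 0.5 * w^T A w - b^T w

rng = np.random.default_rng(4)
A = np.diag([1.0, 10.0, 100.0])      # a badly conditioned toy problem
b = rng.normal(size=3)
w = np.zeros(3)
g_prev = w_prev = None
for _ in range(50):
    g = quadratic_grad(w, A, b)
    if g_prev is None:
        h = np.ones_like(w)          # first step: plain gradient descent
    else:
        # Secant-style diagonal curvature estimate from gradient differences.
        h = np.abs(g - g_prev) / (np.abs(w - w_prev) + 1e-12)
        h = np.maximum(h, 1e-3)      # keep the scaling well-defined
    w_prev, g_prev = w.copy(), g.copy()
    w -= 0.5 * g / h                 # per-coordinate curvature scaling
print(np.allclose(A @ w, b, atol=1e-3))  # True: solved without inverting A
```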
arXiv Detail & Related papers (2021-09-26T07:56:10Z)
- Anarchic Federated Learning [9.440407984695904]
We propose a new paradigm in federated learning called "Anarchic Federated Learning" (AFL).
In AFL, each worker has complete freedom to choose i) when to participate in FL, and ii) the number of local steps to perform in each round based on its current situation.
We propose two Anarchic FedAvg-like algorithms with two-sided learning rates for both cross-device and cross-silo settings.
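
The two-sided learning-rate structure can be sketched directly: each worker runs a self-chosen number of local steps with a local rate, and the server applies a separate global rate to the step-count-normalized deltas that come back. The toy quadratic objectives below are assumptions; this shows the update shape, not the paper's algorithms or their convergence guarantees.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, lr_local, lr_global = 4, 0.3, 0.8
w_global = np.zeros(dim)
optima = [rng.normal(size=dim) for _ in range(3)]  # each worker's own optimum

for _ in range(100):
    deltas = []
    for target in optima:
        k = int(rng.integers(1, 6))        # worker freely picks 1-5 local steps
        w = w_global.copy()
        for _ in range(k):
            w -= lr_local * (w - target)   # local step on a toy quadratic loss
        deltas.append((w - w_global) / k)  # normalize by the local step count
    w_global += lr_global * np.mean(deltas, axis=0)  # server-side learning rate

# Residual distance to the average optimum stays small despite the anarchic
# (uncoordinated, heterogeneous) participation pattern.
print(np.round(w_global - np.mean(optima, axis=0), 2))
```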
arXiv Detail & Related papers (2021-08-23T00:38:37Z)
- CSAFL: A Clustered Semi-Asynchronous Federated Learning Framework [14.242716751043533]
Federated learning (FL) is an emerging distributed machine learning paradigm that protects privacy and tackles the problem of isolated data islands.
There are two main communication strategies of FL: synchronous FL and asynchronous FL.
We propose a clustered semi-asynchronous federated learning framework.
arXiv Detail & Related papers (2021-04-16T15:51:02Z)
- InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence Insertion Problem? [66.70154236519186]
Sentence insertion is a delicate but fundamental NLP problem.
Current approaches in sentence ordering, text coherence, and question answering (QA) are neither suitable for nor effective at solving it.
We propose InsertGNN, a model that represents the problem as a graph and adopts a Graph Neural Network (GNN) to learn the connections between sentences.
arXiv Detail & Related papers (2021-03-28T06:50:31Z)
- LINDT: Tackling Negative Federated Learning with Local Adaptation [18.33409148798824]
We propose a novel framework called LINDT for tackling negative federated learning (NFL) at run time.
We introduce a metric for detecting NFL from the server.
Experiment results show that the proposed approach can significantly improve the performance of FL on local data.
arXiv Detail & Related papers (2020-11-23T01:31:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.