VAFL: a Method of Vertical Asynchronous Federated Learning
- URL: http://arxiv.org/abs/2007.06081v1
- Date: Sun, 12 Jul 2020 20:09:25 GMT
- Title: VAFL: a Method of Vertical Asynchronous Federated Learning
- Authors: Tianyi Chen, Xiao Jin, Yuejiao Sun, and Wotao Yin
- Abstract summary: Horizontal federated learning (FL) handles multi-client data that share the same set of features.
Vertical FL trains a better predictor that combines all the features from different clients.
- Score: 40.423372614317195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Horizontal federated learning (FL) handles multi-client data that share the
same set of features, and vertical FL trains a better predictor that combines
all the features from different clients. This paper targets solving vertical FL
in an asynchronous fashion, and develops a simple FL method. The new method
allows each client to run stochastic gradient algorithms without coordination
with other clients, so it is suitable for clients with intermittent connectivity.
This method further uses a new technique of perturbed local embedding to ensure
data privacy and improve communication efficiency. Theoretically, we present
the convergence rate and privacy level of our method for strongly convex,
nonconvex and even nonsmooth objectives separately. Empirically, we apply our
method to FL on various image and healthcare datasets. The results compare
favorably to centralized and synchronous FL methods.
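To make the mechanism concrete, the following is a minimal sketch of the two ingredients the abstract describes: each client holds one vertical slice of the features, uploads a noise-perturbed local embedding whenever it happens to be online, and runs its own stochastic gradient step without waiting for other clients, while the server trains a prediction head on the latest cached embeddings. The linear embeddings, logistic head, Gaussian perturbation, and all names (LocalClient, Server, noise_std) are illustrative assumptions for this sketch, not details taken from the paper.

```python
# A minimal numpy sketch of asynchronous vertical FL with perturbed local
# embeddings. Assumptions (not from the paper): linear per-client embeddings,
# a logistic-regression head at the server, and Gaussian perturbation noise.
import numpy as np

rng = np.random.default_rng(0)

class LocalClient:
    """Holds one vertical slice of the features; raw data never leaves."""
    def __init__(self, x_slice, embed_dim, noise_std=0.1, lr=0.1):
        self.x = x_slice  # (n_samples, d_m) local feature block
        self.W = 0.01 * rng.standard_normal((x_slice.shape[1], embed_dim))
        self.noise_std = noise_std
        self.lr = lr

    def embedding(self, idx):
        """Local embedding of a minibatch, perturbed with Gaussian noise."""
        h = self.x[idx] @ self.W
        return h + self.noise_std * rng.standard_normal(h.shape)

    def local_step(self, idx, grad_h):
        """SGD on local weights, given the gradient w.r.t. the embedding."""
        self.W -= self.lr * self.x[idx].T @ grad_h / len(idx)

class Server:
    """Holds the labels and a head acting on the sum of client embeddings."""
    def __init__(self, n_samples, n_clients, embed_dim, lr=0.1):
        # Latest embedding received from each client; entries may be stale.
        self.cache = np.zeros((n_clients, n_samples, embed_dim))
        self.w = np.zeros(embed_dim)
        self.lr = lr

    def step(self, client_id, idx, h, y):
        """Asynchronous update: refresh one client's cached embedding,
        take a head step, and return that client's embedding gradient."""
        self.cache[client_id, idx] = h
        h_sum = self.cache[:, idx].sum(axis=0)        # (batch, embed_dim)
        p = 1.0 / (1.0 + np.exp(-(h_sum @ self.w)))   # sigmoid of logits
        err = p - y[idx]                              # logistic-loss residual
        self.w -= self.lr * h_sum.T @ err / len(idx)
        return np.outer(err, self.w)                  # dloss/dh for the batch

# Asynchronous loop: at each tick a random client "wakes up", uploads a
# fresh perturbed embedding for a minibatch, and applies the returned
# gradient locally; no coordination between clients is needed.
n, d, n_clients, embed_dim = 200, 6, 3, 4
X = rng.standard_normal((n, d))
y = (X.sum(axis=1) > 0).astype(float)
blocks = np.array_split(np.arange(d), n_clients)     # vertical partition
clients = [LocalClient(X[:, b], embed_dim) for b in blocks]
server = Server(n, n_clients, embed_dim)

for t in range(2000):
    m = int(rng.integers(n_clients))                  # whoever is online
    idx = rng.choice(n, size=32, replace=False)
    grad_h = server.step(m, idx, clients[m].embedding(idx), y)
    clients[m].local_step(idx, grad_h)
```

Note that server.step only touches the client that reported in; the cached embeddings of the other clients may be stale, which is precisely the asynchrony whose convergence the paper analyzes.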
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z) - Enhancing Convergence in Federated Learning: A Contribution-Aware Asynchronous Approach [0.0]
Federated Learning (FL) is a distributed machine learning paradigm that allows clients to train models on their data while preserving their privacy.
FL algorithms, such as Federated Averaging (FedAvg) and its variants, have been shown to converge well in many scenarios.
However, these methods require clients to upload their local updates to the server in a synchronous manner, which can be slow and unreliable in realistic FL settings.
We propose a contribution-aware asynchronous FL method that takes into account the staleness and statistical heterogeneity of the received updates (a generic staleness-weighting sketch appears after this list).
arXiv Detail & Related papers (2024-02-16T12:10:53Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - User-Centric Federated Learning: Trading off Wireless Resources for Personalization [18.38078866145659]
In Federated Learning (FL) systems, statistical heterogeneity increases the algorithm's convergence time and reduces its generalization performance.
To tackle the above problems without violating the privacy constraints that FL imposes, personalized FL methods have to couple statistically similar clients without directly accessing their data.
In this work, we design user-centric aggregation rules that are based on readily available gradient information and are capable of producing personalized models for each FL client.
Our algorithm outperforms popular personalized FL baselines in terms of average accuracy, worst node performance, and training communication overhead.
arXiv Detail & Related papers (2023-04-25T15:45:37Z) - Federated Learning with Flexible Control [30.65854375019346]
Federated learning (FL) enables distributed model training from local data collected by users.
In distributed systems with constrained resources and potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is an important problem.
We propose FlexFL - an FL algorithm with multiple options that can be adjusted flexibly.
arXiv Detail & Related papers (2022-12-16T14:21:29Z) - FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning [91.74206675452888]
We propose a novel method FedFM, which guides each client's features to match shared category-wise anchors.
To achieve higher efficiency and flexibility, we propose a FedFM variant, called FedFM-Lite, in which clients communicate with the server fewer times and at lower communication bandwidth cost.
arXiv Detail & Related papers (2022-10-14T08:11:34Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm AdaFL to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z) - Personalized Federated Learning with Moreau Envelopes [16.25105865597947]
Federated learning (FL) is a decentralized and privacy-preserving machine learning technique.
One challenge associated with FL is statistical diversity among clients.
We propose an algorithm for personalized FL (pFedMe) using Moreau envelopes as clients' regularized loss functions.
arXiv Detail & Related papers (2020-06-16T00:55:23Z)
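As a companion to the contribution-aware asynchronous entry above, here is a generic staleness-weighting sketch: updates computed against an older global model are mixed in with a smaller weight. The weighting rule and all names (staleness_weight, mix_update, alpha) are illustrative assumptions, not the method of any paper listed here.

```python
# A generic staleness-aware mixing rule for asynchronous FL (an illustrative
# sketch, not the method of any paper above): staler updates get less weight.
import numpy as np

def staleness_weight(staleness: int, alpha: float = 0.6) -> float:
    """Monotonically shrink the mixing weight as staleness grows."""
    return alpha / (1.0 + staleness)

def mix_update(global_model: np.ndarray, client_delta: np.ndarray,
               client_round: int, server_round: int) -> np.ndarray:
    """Blend one client's (possibly stale) delta into the global model."""
    w = staleness_weight(server_round - client_round)
    return global_model + w * client_delta
```

Here client_round is the round at which the client downloaded the model it trained against, and server_round is the server's current round; an update with zero staleness receives the full weight alpha.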