FedU: A Unified Framework for Federated Multi-Task Learning with
Laplacian Regularization
- URL: http://arxiv.org/abs/2102.07148v1
- Date: Sun, 14 Feb 2021 13:19:43 GMT
- Title: FedU: A Unified Framework for Federated Multi-Task Learning with
Laplacian Regularization
- Authors: Canh T. Dinh, Tung T. Vu, Nguyen H. Tran, Minh N. Dao, Hongyu Zhang
- Abstract summary: Federated multi-task learning (FMTL) has emerged as a natural choice to capture the statistical diversity among the clients in federated learning.
To unleash the potential of FMTL beyond statistical diversity, we formulate a new FMTL problem, FedU, using Laplacian regularization.
- Score: 15.238123204624003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated multi-task learning (FMTL) has emerged as a natural choice to
capture the statistical diversity among the clients in federated learning. To
unleash the potential of FMTL beyond statistical diversity, we formulate a new
FMTL problem FedU using Laplacian regularization, which can explicitly leverage
relationships among the clients for multi-task learning. We first show that
FedU provides a unified framework covering a wide range of problems such as
conventional federated learning, personalized federated learning, few-shot
learning, and stratified model learning. We then propose algorithms including
both communication-centralized and decentralized schemes to learn optimal
models of FedU. Theoretically, we show that the convergence rates of both
FedU's algorithms achieve linear speedup for strongly convex objectives and sublinear
speedup of order $1/2$ for nonconvex objectives. While the analysis of FedU is
applicable to both strongly convex and nonconvex loss functions, the
conventional FMTL algorithm MOCHA, which is based on the CoCoA framework, is only
applicable to the convex case. Experimentally, we verify that FedU outperforms the
vanilla FedAvg, MOCHA, as well as pFedMe and Per-FedAvg in personalized
federated learning.
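For concreteness, the Laplacian-regularized objective at the core of FedU can be sketched as follows (a sketch in illustrative notation; $a_{kl} \ge 0$ denotes the strength of the relationship between clients $k$ and $l$, $\eta > 0$ the regularization weight, and $f_k$ client $k$'s local loss):

```latex
\min_{w_1,\dots,w_N}\; \sum_{k=1}^{N} f_k(w_k)
\;+\; \frac{\eta}{2} \sum_{k=1}^{N} \sum_{l=1}^{N} a_{kl}\, \lVert w_k - w_l \rVert^2
```

Intuitively, a large $\eta$ drives all client models toward a single consensus model (recovering conventional federated learning), $\eta = 0$ decouples the clients entirely, and intermediate values yield personalized models that borrow strength from related clients, which is how the unified view over conventional FL, personalized FL, few-shot learning, and stratified model learning arises.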
Related papers
- Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping [64.58402571292723]
We propose a manifold reshaping approach called FedMR to calibrate the feature space of local training.
We conduct extensive experiments on a range of datasets to demonstrate that our FedMR achieves much higher accuracy and better communication efficiency.
arXiv Detail & Related papers (2024-05-29T10:56:13Z) - Unlearning during Learning: An Efficient Federated Machine Unlearning Method [20.82138206063572]
Federated Learning (FL) has garnered significant attention as a distributed machine learning paradigm.
To facilitate the implementation of the right to be forgotten, the concept of federated machine unlearning (FMU) has also emerged.
We introduce FedAU, an innovative and efficient FMU framework aimed at overcoming these limitations.
arXiv Detail & Related papers (2024-05-24T11:53:13Z) - Federated Multi-Objective Learning [22.875284692358683]
We propose a new federated multi-objective learning (FMOL) framework with multiple clients.
Our FMOL framework allows a different set of objective functions across different clients to support a wide range of applications.
For this FMOL framework, we propose two new federated multi-objective optimization (FMOO) algorithms called federated multi-gradient descent averaging (FMGDA) and federated stochastic multi-gradient descent averaging (FSMGDA).
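As a rough illustration of the multi-gradient idea behind such algorithms (generic MGDA-style notation, not necessarily the paper's exact updates): given per-objective gradients $g_1,\dots,g_M$ aggregated from the clients, a common descent direction is obtained by solving a min-norm problem over the simplex $\Delta_M$:

```latex
\lambda^{\star} \;=\; \arg\min_{\lambda \in \Delta_M}
\Big\lVert \sum_{m=1}^{M} \lambda_m\, g_m \Big\rVert^2,
\qquad
x_{t+1} \;=\; x_t \;-\; \eta \sum_{m=1}^{M} \lambda^{\star}_m\, g_m
```

The stochastic variant replaces each $g_m$ with an averaged stochastic gradient accumulated from local client updates.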
arXiv Detail & Related papers (2023-10-15T15:45:51Z) - FedWon: Triumphing Multi-domain Federated Learning Without Normalization [50.49210227068574]
Federated learning (FL) enhances data privacy with collaborative in-situ training on decentralized clients.
However, FL encounters challenges due to non-independent and identically distributed (non-i.i.d.) data.
We propose a novel method called Federated learning Without normalizations (FedWon) to address the multi-domain problem in FL.
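As a minimal sketch of the kind of normalization-free building block such a method can use, here is a scaled-weight-standardized convolution in PyTorch (the class name and constants are illustrative assumptions; the paper's exact module may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d with scaled weight standardization, so no BatchNorm is needed.

    Standardizing each filter over its fan-in keeps activations well scaled
    without batch statistics, which avoids mixing statistics across
    clients/domains in federated training. (Illustrative sketch.)
    """

    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True)
        fan_in = w[0].numel()
        # Scaled weight standardization: zero mean, variance ~ 1/fan_in.
        w = (w - mean) / torch.sqrt(var * fan_in + 1e-4)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# Usage: swap nn.Conv2d for WSConv2d and drop the BatchNorm layers.
```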
arXiv Detail & Related papers (2023-06-09T13:18:50Z) - Federated Multi-Sequence Stochastic Approximation with Local
Hypergradient Estimation [28.83712379658548]
We develop FedMSA, the first federated algorithm for multi-sequence stochastic approximation (MSA).
FedMSA enables the provable estimation of hypergradients in BLO and MCO via local client updates.
We provide experiments that support our theory and demonstrate the empirical benefits of FedMSA.
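For context, the hypergradient that bilevel methods in this family must estimate has the standard implicit-differentiation form below (generic bilevel notation, not the paper's own); FedMSA's point is that it can be estimated provably from local client updates:

```latex
\nabla \Phi(x) \;=\; \nabla_x f\big(x, y^{\star}(x)\big)
\;-\; \nabla^2_{xy} g\big(x, y^{\star}(x)\big)\,
\big[\nabla^2_{yy} g\big(x, y^{\star}(x)\big)\big]^{-1}
\nabla_y f\big(x, y^{\star}(x)\big)
```

where $y^{\star}(x) = \arg\min_y g(x, y)$ is the lower-level solution and $\Phi(x) = f(x, y^{\star}(x))$ is the upper-level objective.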
arXiv Detail & Related papers (2023-06-02T16:17:43Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
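The momentum-based variance-reduction estimator underlying this family of methods can be sketched as follows (a STORM-style recursion in generic notation; FAFED's exact adaptive step sizes are omitted):

```latex
d_t \;=\; \nabla f(x_t;\xi_t) \;+\; (1 - a_t)\,\big(d_{t-1} - \nabla f(x_{t-1};\xi_t)\big),
\qquad
x_{t+1} \;=\; x_t - \eta_t\, d_t
```

where both gradients in round $t$ are evaluated on the same minibatch $\xi_t$.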
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
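A sketch of the correspondence, in generic notation rather than the paper's: placing an isotropic Gaussian prior $w_k \sim \mathcal{N}(\theta, \sigma^2 I)$ on the client parameters and running hard EM alternates

```latex
\text{hard E-step:}\;\; w_k \leftarrow \arg\min_{w}\, f_k(w) + \tfrac{1}{2\sigma^2}\lVert w - \theta \rVert^2,
\qquad
\text{M-step:}\;\; \theta \leftarrow \tfrac{1}{N}\sum_{k=1}^{N} w_k
```

so the M-step is a plain parameter average of client solutions, which is FedAvg's aggregation step, while the E-step corresponds to local training initialized at (and, for finite $\sigma^2$, regularized toward) the server model.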
arXiv Detail & Related papers (2021-11-19T12:58:59Z) - Achieving Statistical Optimality of Federated Learning: Beyond
Stationary Points [19.891597817559038]
Federated Learning (FL) is a promising framework that has great potential in privacy preservation and in lowering the computation load at the cloud.
Recent work raised two concerns about methods such as FedAvg and FedProx: (1) their fixed points do not correspond to the stationary points of the original optimization problem, and (2) the common model they find might not generalize well locally.
We show, in the general kernel regression setting, that both FedAvg and FedProx converge to the minimax-optimal error rates.
arXiv Detail & Related papers (2021-06-29T09:59:43Z) - Federated Composite Optimization [28.11253930828807]
Federated Learning (FL) is a distributed learning paradigm that scales on-device learning collaboratively and privately.
Standard FL algorithms such as FedAvg are primarily geared towards smooth unconstrained settings.
We propose a new primal-dual algorithm, Federated Dual Averaging (FedDualAvg), which by employing a novel server dual averaging procedure circumvents the curse of primal averaging.
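A minimal sketch of the server dual-averaging idea for an $\ell_1$-composite objective, assuming simplified step sizes; the helper names and the exact scaling below are illustrative, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def dual_avg_round(z_server, clients, grad_fn, local_steps, eta=0.1, lam=0.01):
    """One communication round of a dual-averaging-style composite FL scheme.

    z_server : server dual state (running sum of gradients), shape (d,)
    grad_fn  : grad_fn(client, w) -> stochastic gradient of that client's loss
    """
    dual_states = []
    for c in clients:
        z = z_server.copy()
        for _ in range(local_steps):
            # Primal point is recovered from the dual state via the prox map.
            w = soft_threshold(-eta * z, eta * lam)
            z = z + grad_fn(c, w)          # local dual-averaging step
        dual_states.append(z)
    # Average in DUAL space; the prox (and hence sparsity) is applied after.
    return np.mean(dual_states, axis=0)
```

The design point this illustrates: the server averages dual states (accumulated gradients) and applies the prox afterwards, so the sparsity induced by the $\ell_1$ term is not averaged away as it would be when averaging prox-ed primal iterates.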
arXiv Detail & Related papers (2020-11-17T06:54:06Z) - Practical One-Shot Federated Learning for Cross-Silo Setting [114.76232507580067]
One-shot federated learning is a promising approach to make federated learning applicable in the cross-silo setting.
We propose a practical one-shot federated learning algorithm named FedKT.
By utilizing the knowledge transfer technique, FedKT can be applied to any classification models and can flexibly achieve differential privacy guarantees.
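As a generic illustration of one-shot knowledge transfer (not FedKT's exact two-tier construction), each client trains a local model once, and the server distills their ensemble onto a public unlabeled set, optionally noising the votes for differential privacy; all names below are illustrative assumptions:

```python
import numpy as np

def one_shot_knowledge_transfer(local_models, public_X, train_student,
                                num_classes, noise_scale=0.0, rng=None):
    """Generic one-shot knowledge-transfer aggregation (illustrative sketch).

    local_models : list of fitted client models with .predict(X) -> int labels
    public_X     : unlabeled public data used to transfer knowledge
    train_student: callable(X, y) -> fitted server-side student model
    noise_scale  : Laplace noise on per-class vote counts (PATE-style DP knob)
    """
    rng = rng or np.random.default_rng(0)
    votes = np.zeros((len(public_X), num_classes))
    for m in local_models:
        preds = m.predict(public_X)
        votes[np.arange(len(public_X)), preds] += 1
    if noise_scale > 0:
        votes += rng.laplace(scale=noise_scale, size=votes.shape)
    pseudo_labels = votes.argmax(axis=1)      # noisy-max ensemble labels
    return train_student(public_X, pseudo_labels)
```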
arXiv Detail & Related papers (2020-10-02T14:09:10Z) - FedDANE: A Federated Newton-Type Method [49.9423212899788]
Federated learning aims to jointly learn statistical models over massively distributed datasets.
We propose FedDANE, an optimization method that we adapt from DANE to handle federated learning.
arXiv Detail & Related papers (2020-01-07T07:44:41Z)
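For reference, a DANE-style local subproblem of the kind such a Newton-type federated method adapts looks as follows (standard DANE form with unit scaling, written as a sketch; FedDANE additionally handles device sampling and inexact local solves):

```latex
w_k^{t+1} \;=\; \arg\min_{w}\;
f_k(w) \;-\; \big\langle \nabla f_k(w^{t}) - \nabla f(w^{t}),\, w \big\rangle
\;+\; \frac{\mu}{2}\,\lVert w - w^{t} \rVert^2
```

where $\nabla f(w^t)$ is the aggregated global gradient at the current iterate and $\mu$ controls the proximal term; the server then averages the clients' solutions.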
This list is automatically generated from the titles and abstracts of the papers in this site.