Pursuing Overall Welfare in Federated Learning through Sequential Decision Making
- URL: http://arxiv.org/abs/2405.20821v2
- Date: Mon, 18 Nov 2024 11:06:59 GMT
- Title: Pursuing Overall Welfare in Federated Learning through Sequential Decision Making
- Authors: Seok-Ju Hahn, Gi-Soo Kim, Junghye Lee
- Abstract summary: In traditional federated learning, a single global model cannot perform equally well for all clients.
Our work reveals that existing fairness-aware aggregation strategies can be unified into an online convex optimization framework.
AAggFF achieves a better degree of client-level fairness than existing methods in both practical settings.
- Score: 10.377683220196873
- Abstract: In traditional federated learning, a single global model cannot perform equally well for all clients. Therefore, the need to achieve client-level fairness in the federated system has been emphasized, which can be realized by modifying the static aggregation scheme for updating the global model to an adaptive one in response to the local signals of the participating clients. Our work reveals that existing fairness-aware aggregation strategies can be unified into an online convex optimization framework, in other words, a central server's sequential decision-making process. To enhance this decision-making capability, we propose simple and intuitive improvements to suboptimal designs within existing methods, presenting AAggFF. Considering practical requirements, we further specialize our method for the cross-device and cross-silo settings, respectively. Theoretical analyses guarantee sublinear regret upper bounds for both settings: $\mathcal{O}(\sqrt{T \log{K}})$ for the cross-device setting and $\mathcal{O}(K \log{T})$ for the cross-silo setting, with $K$ clients and $T$ federation rounds. Extensive experiments demonstrate that a federated system equipped with AAggFF achieves a better degree of client-level fairness than existing methods in both settings. Code is available at https://github.com/vaseline555/AAggFF
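Viewed as sequential decision making, the server chooses a mixing-weight vector on the probability simplex each round and updates it from the clients' local signals. The sketch below illustrates this online-convex-optimization view only; the Hedge-style multiplicative update, learning rate, and function names are assumptions for exposition, not the exact AAggFF update.
```python
import numpy as np

def update_mixing_weights(weights, client_losses, lr=0.5):
    """One server-side decision step: a Hedge-style (exponentiated-gradient)
    update that upweights clients reporting higher losses."""
    logits = np.log(weights) + lr * np.asarray(client_losses)
    w = np.exp(logits - logits.max())   # subtract max for numerical stability
    return w / w.sum()                  # project back onto the simplex

def aggregate(client_updates, weights):
    """Fairness-aware aggregation: replace FedAvg's static (e.g.,
    data-size-proportional) weights with adaptively chosen ones."""
    return sum(w * u for w, u in zip(weights, client_updates))

# Toy round with K = 3 clients: the high-loss client gains weight next round.
weights = np.full(3, 1.0 / 3.0)                  # uniform prior over clients
client_losses = [0.9, 0.4, 0.2]                  # local signals sent to server
client_updates = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
weights = update_mixing_weights(weights, client_losses)
global_update = aggregate(client_updates, weights)
```
A Hedge-type learner over $K$ experts attains regret of order $\mathcal{O}(\sqrt{T \log{K}})$, the same order quoted above for the cross-device setting.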
Related papers
- Federated $\mathcal{X}$-armed Bandit with Flexible Personalisation [3.74142789780782]
This paper introduces a novel approach to personalised federated learning within the $\mathcal{X}$-armed bandit framework.
Our method employs a surrogate objective function that combines individual client preferences with aggregated global knowledge, allowing for a flexible trade-off between personalisation and collective learning.
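As a concrete illustration of such a surrogate, a convex blend of the two signals suffices; the linear form and the coefficient `alpha` below are assumptions for exposition, not the paper's exact construction.
```python
def surrogate_objective(x, local_reward, global_reward, alpha=0.5):
    """Blend a client's own reward estimate at arm x with the aggregated
    global estimate; alpha = 1 is pure personalisation, alpha = 0 is pure
    collective learning. (Illustrative form; the paper's may differ.)"""
    return alpha * local_reward(x) + (1.0 - alpha) * global_reward(x)

# Example: a client whose preferences peak at x = 0.3 inside a global
# reward landscape that peaks at x = 0.7.
local = lambda x: -(x - 0.3) ** 2
global_est = lambda x: -(x - 0.7) ** 2
print(surrogate_objective(0.5, local, global_est, alpha=0.8))
```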
arXiv Detail & Related papers (2024-09-11T13:19:41Z)
- Towards Hyper-parameter-free Federated Learning [1.3682156035049038]
We introduce algorithms for automated scaling of global model updates.
In the first algorithm, we establish that a descent-ensuring step-size regime at the clients ensures descent for the server objective.
The second algorithm shows that the average of the objective values of sampled clients is a practical and effective substitute for the server objective value required to compute the scaling factor.
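One plausible form of this substitution is a Polyak-style step size in which the unobservable server objective value is replaced by the mean of the sampled clients' objective values; the Polyak rule, `f_star`, and all names below are illustrative assumptions rather than the paper's exact algorithm.
```python
import numpy as np

def server_scale(sampled_client_losses, f_star, pseudo_grad, eps=1e-12):
    """Polyak-style scaling (illustrative): estimate the server objective by
    the mean of the sampled clients' losses, then set the scale from the gap
    to a lower bound f_star over the squared pseudo-gradient norm."""
    f_est = float(np.mean(sampled_client_losses))
    return max(f_est - f_star, 0.0) / (float(np.dot(pseudo_grad, pseudo_grad)) + eps)

# Global update with an automatically derived scale (no hand-tuned server step size).
w = np.zeros(4)
pseudo_grad = np.array([0.2, -0.1, 0.05, 0.3])   # aggregated client updates
w -= server_scale([0.8, 0.6, 0.7], f_star=0.0, pseudo_grad=pseudo_grad) * pseudo_grad
```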
arXiv Detail & Related papers (2024-08-30T09:35:36Z)
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
$\textit{Learn2pFed}$ is a novel algorithm-unrolling-based personalized federated learning framework.
We show that $\textit{Learn2pFed}$ significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Locally Adaptive Federated Learning [30.19411641685853]
Federated learning is a paradigm of distributed machine learning in which multiple clients coordinate with a central server to learn a model.
Standard federated optimization methods such as Federated Averaging (FedAvg) ensure generalization among the clients.
We propose locally adaptive federated learning algorithms that leverage the local geometric information of each client's objective function.
arXiv Detail & Related papers (2023-07-12T17:02:32Z)
- When Computing Power Network Meets Distributed Machine Learning: An Efficient Federated Split Learning Framework [6.871107511111629]
CPN-FedSL is a Federated Split Learning (FedSL) framework over a Computing Power Network (CPN).
We build a dedicated model to capture the basic settings and learning characteristics (e.g., latency, flow, convergence).
arXiv Detail & Related papers (2023-05-22T12:36:52Z)
- FedDA: Faster Framework of Local Adaptive Gradient Methods via Restarted Dual Averaging [104.41634756395545]
Federated learning (FL) is an emerging learning paradigm to tackle massively distributed data.
We propose $\textbf{FedDA}$, a novel framework for local adaptive gradient methods.
We show that $\textbf{FedDA-MVR}$ is the first adaptive FL algorithm that achieves this rate.
arXiv Detail & Related papers (2023-02-13T05:10:30Z)
- $\texttt{FedBC}$: Calibrating Global and Local Models via Federated Learning Beyond Consensus [66.62731854746856]
In federated learning (FL), the objective of collaboratively learning a global model through aggregation of model updates across devices tends to oppose the goal of personalization via local information.
In this work, we calibrate this tradeoff in a quantitative manner through a multi-criterion-based optimization.
We demonstrate that $\texttt{FedBC}$ balances the global and local model test accuracy metrics across a suite of datasets.
arXiv Detail & Related papers (2022-06-22T02:42:04Z)
- Local Adaptivity in Federated Learning: Convergence and Consistency [25.293584783673413]
The federated learning (FL) framework trains a machine learning model using decentralized data stored at edge client devices by periodically aggregating locally trained models.
We show in both theory and practice that while local adaptive methods can accelerate convergence, they can cause a non-vanishing solution bias.
We propose correction techniques to overcome this inconsistency and complement the local adaptive methods for FL.
arXiv Detail & Related papers (2021-06-04T07:36:59Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
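The general recipe can be sketched as follows: each client forms a diagonal Gaussian (Laplace) posterior around its local MAP estimate, and the server fuses these Gaussians by precision-weighted averaging. This standard construction is an assumption about the paper's updates, not a transcription of them.
```python
import numpy as np

def client_laplace_posterior(map_params, diag_fisher, prior_precision=1.0):
    """Client side: Laplace approximation of the local posterior, a Gaussian
    centred at the MAP estimate with diagonal precision from the Fisher info."""
    return map_params, diag_fisher + prior_precision

def server_fuse(means, precisions):
    """Server side: the product of Gaussian client posteriors is Gaussian,
    with summed precisions and a precision-weighted mean."""
    total_precision = np.sum(precisions, axis=0)
    weighted = np.sum([p * m for m, p in zip(means, precisions)], axis=0)
    return weighted / total_precision, total_precision

# Two clients with complementary certainty: each dominates where it is confident.
m1, p1 = client_laplace_posterior(np.array([1.0, 0.0]), np.array([10.0, 1.0]))
m2, p2 = client_laplace_posterior(np.array([0.0, 1.0]), np.array([1.0, 10.0]))
global_mean, global_precision = server_fuse([m1, m2], [p1, p2])
```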
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)