Hire When You Need to: Gradual Participant Recruitment for Auction-based Federated Learning
- URL: http://arxiv.org/abs/2310.02651v2
- Date: Tue, 19 Dec 2023 06:44:03 GMT
- Title: Hire When You Need to: Gradual Participant Recruitment for Auction-based Federated Learning
- Authors: Xavier Tan and Han Yu
- Abstract summary: We propose the Gradual Participant Selection scheme for Auction-based Federated Learning (GPS-AFL).
GPS-AFL gradually selects the required DOs over multiple rounds of training as more information is revealed through repeated interactions.
It is designed to strike a balance between cost saving and performance enhancement, while mitigating the drawbacks of selection bias in reputation-based FL.
- Score: 16.83897148104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The success of Federated Learning (FL) depends on the quantity and quality of
the data owners (DOs) as well as their motivation to join FL model training.
Reputation-based FL participant selection methods have been proposed. However,
they still face the challenges of the cold start problem and potential
selection bias towards highly reputable DOs. Such a bias can result in lower
reputation DOs being prematurely excluded from future FL training rounds,
thereby reducing the diversity of training data and the generalizability of the
resulting models. To address these challenges, we propose the Gradual
Participant Selection scheme for Auction-based Federated Learning (GPS-AFL).
Unlike existing AFL incentive mechanisms which generally assume that all DOs
required for an FL task must be selected in one go, GPS-AFL gradually selects
the required DOs over multiple rounds of training as more information is
revealed through repeated interactions. It is designed to strike a balance
between cost saving and performance enhancement, while mitigating the drawbacks
of selection bias in reputation-based FL. Extensive experiments based on
real-world datasets demonstrate the significant advantages of GPS-AFL, which
reduces costs by 33.65% and improves total utility by 2.91% on average, compared
to the best-performing state-of-the-art approach.
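The abstract describes gradual recruitment only at a high level. As a rough illustration of the idea (the function name, the quality-per-cost heuristic, and all parameters are hypothetical, not taken from the paper), one could sketch the multi-round selection loop like this:

```python
def gradual_select(bids, quality, total_needed, per_round):
    """Toy sketch of gradual participant recruitment: admit data owners
    (DOs) over several auction rounds instead of all at once.  Each round
    selects the remaining candidates with the best estimated
    quality-per-cost ratio; in the actual GPS-AFL scheme the quality
    estimates would be refined after every round as more information is
    revealed through repeated interactions."""
    selected, remaining = [], set(bids)
    while len(selected) < total_needed and remaining:
        # rank remaining candidates by current quality estimate per unit cost
        ranked = sorted(remaining, key=lambda d: quality[d] / bids[d],
                        reverse=True)
        batch = ranked[:min(per_round, total_needed - len(selected))]
        selected.extend(batch)
        remaining -= set(batch)
        # here a real system would run one FL training round with
        # `selected` and update quality[d] from observed contributions
    return selected
```

For example, with four DOs and a budget of three slots filled two per round, the cheapest high-quality DOs are admitted first and the last slot is filled in a later round.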
Related papers
- Agent-oriented Joint Decision Support for Data Owners in Auction-based Federated Learning [32.6997332038178]
Auction-based Federated Learning (AFL) has attracted extensive research interest due to its ability to motivate data owners (DOs) to join FL through economic means.
We propose a first-of-its-kind agent-oriented joint Pricing, Acceptance and Sub-delegation decision support approach for data owners in AFL (PAS-AFL).
It is the first approach to enable each DO to take on multiple FL tasks simultaneously, earning higher income while enhancing the throughput of FL tasks in the AFL ecosystem.
arXiv Detail & Related papers (2024-05-09T02:35:46Z) - Ranking-based Client Selection with Imitation Learning for Efficient Federated Learning [20.412469498888292]
Federated Learning (FL) enables multiple devices to collaboratively train a shared model.
The selection of participating devices in each training round critically affects both the model performance and training efficiency.
We introduce a novel device selection solution called FedRank, which is an end-to-end, ranking-based approach.
arXiv Detail & Related papers (2024-05-07T08:44:29Z) - Multi-Session Budget Optimization for Forward Auction-based Federated Learning [17.546044136396468]
Auction-based Federated Learning (AFL) has emerged as an important research field in recent years.
We propose the Multi-session Budget Optimization Strategy for forward Auction-based Federated Learning (MultiBOS-AFL).
Based on hierarchical reinforcement learning, MultiBOS-AFL jointly optimizes inter-session budget pacing and intra-session bidding for AFL model users (MUs).
arXiv Detail & Related papers (2023-11-21T11:57:41Z) - Stabilizing RLHF through Advantage Model and Selective Rehearsal [57.504894664689]
Large Language Models (LLMs) have revolutionized natural language processing, yet aligning these models with human values and preferences remains a significant challenge.
This challenge is characterized by various instabilities, such as reward hacking and catastrophic forgetting.
We propose two innovations to stabilize RLHF training: 1) Advantage Model, which directly models advantage score and regulates score distributions across tasks to prevent reward hacking; and 2) Selective Rehearsal, which mitigates catastrophic forgetting by strategically selecting data for PPO training and knowledge rehearsing.
arXiv Detail & Related papers (2023-09-18T23:06:32Z) - FedABC: Targeting Fair Competition in Personalized Federated Learning [76.9646903596757]
Federated learning aims to collaboratively train models without accessing clients' local private data.
We propose a novel and generic personalized FL (PFL) framework, Federated Averaging via Binary Classification (FedABC).
In particular, we adopt the "one-vs-all" training strategy in each client to alleviate the unfair competition between classes.
arXiv Detail & Related papers (2023-02-15T03:42:59Z) - A Survey on Participant Selection for Federated Learning in Mobile Networks [47.88372677863646]
Federated Learning (FL) is an efficient distributed machine learning paradigm that employs private datasets in a privacy-preserving manner.
Due to limited communication bandwidth and unstable availability of such devices in a mobile network, only a fraction of end devices can be selected in each round.
arXiv Detail & Related papers (2022-07-08T04:22:48Z) - Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users poses significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z) - A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
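The federated variant summarized above builds on the classical Shapley value. As a minimal sketch (not the paper's own code; the federated version applies this per training round to that round's participants, with `utility` given by model performance on a validation set), the exact SV for a small coalition game can be computed as:

```python
from itertools import combinations
from math import comb

def shapley(players, utility):
    """Exact Shapley value: each player's value is the weighted average
    of its marginal contribution utility(S ∪ {p}) - utility(S) over all
    coalitions S that exclude it.  Exponential in len(players), so only
    practical for the small per-round participant sets considered here."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            # weight of coalitions of size k that exclude p
            w = 1.0 / (n * comb(n - 1, k))
            for S in combinations(others, k):
                phi[p] += w * (utility(set(S) | {p}) - utility(set(S)))
    return phi
```

In an additive game (where utility is the sum of individual contributions), each player's Shapley value equals its own contribution, which is a standard sanity check for an implementation.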
arXiv Detail & Related papers (2020-09-14T04:37:54Z) - Salvaging Federated Learning by Local Adaptation [26.915147034955925]
Federated learning (FL) is a heavily promoted approach for training ML models on sensitive data.
We look at FL from the local viewpoint of an individual participant and ask: do participants have an incentive to participate in FL?
We show that on standard tasks such as next-word prediction, many participants gain no benefit from FL because the federated model is less accurate on their data than the models they can train locally on their own.
We evaluate three techniques for local adaptation of federated models: fine-tuning, multi-task learning, and knowledge distillation.
arXiv Detail & Related papers (2020-02-12T01:56:16Z) - Prophet: Proactive Candidate-Selection for Federated Learning by Predicting the Qualities of Training and Reporting Phases [66.01459702625064]
In 5G networks, training latency is still an obstacle preventing Federated Learning (FL) from being widely adopted.
One of the most fundamental problems that leads to large latency is poor candidate-selection for FL.
In this paper, we study proactive candidate-selection for FL.
arXiv Detail & Related papers (2020-02-03T06:40:04Z) - TiFL: A Tier-based Federated Learning System [17.74678728280232]
Federated Learning (FL) enables learning a shared model across many clients without violating the privacy requirements.
We conduct a case study to show that heterogeneity in resource and data has a significant impact on training time and model accuracy in conventional FL systems.
We propose TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round.
arXiv Detail & Related papers (2020-01-25T01:40:42Z)
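The tier-based grouping that TiFL's summary describes can be sketched in a few lines. This is a simplified illustration, not the paper's implementation (TiFL profiles actual training performance and adapts tiers over time; here clients are just split once by a hypothetical latency measurement):

```python
def assign_tiers(latencies, num_tiers):
    """Toy sketch of tier-based client grouping: sort clients by profiled
    training latency and split them into equal-sized tiers.  Each training
    round then samples clients from a single tier, so slow clients
    (stragglers) do not hold back rounds run with the faster tiers."""
    ranked = sorted(latencies, key=latencies.get)  # fastest first
    size = -(-len(ranked) // num_tiers)            # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]
```

Selecting all of a round's clients from one tier keeps per-round completion time close to that tier's latency, which is the source of the training-time improvement the summary mentions.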
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.