Multi-Session Budget Optimization for Forward Auction-based Federated
Learning
- URL: http://arxiv.org/abs/2311.12548v1
- Date: Tue, 21 Nov 2023 11:57:41 GMT
- Title: Multi-Session Budget Optimization for Forward Auction-based Federated
Learning
- Authors: Xiaoli Tang, Han Yu
- Abstract summary: Auction-based Federated Learning (AFL) has emerged as an important research field in recent years.
We propose the Multi-session Budget Optimization Strategy for forward Auction-based Federated Learning (MultiBOS-AFL).
Based on hierarchical reinforcement learning, MultiBOS-AFL jointly optimizes inter-session budget pacing and intra-session bidding for AFL MUs.
- Score: 17.546044136396468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Auction-based Federated Learning (AFL) has emerged as an important research
field in recent years. The prevailing strategies for FL model users (MUs)
assume that the entire team of the required data owners (DOs) for an FL task
must be assembled before training can commence. In practice, an MU can trigger
the FL training process multiple times. DOs can thus be gradually recruited
over multiple FL model training sessions. Existing bidding strategies for AFL
MUs are not designed to handle such scenarios. Therefore, the problem of
multi-session AFL remains open. To address this problem, we propose the
Multi-session Budget Optimization Strategy for forward Auction-based Federated
Learning (MultiBOS-AFL). Based on hierarchical reinforcement learning,
MultiBOS-AFL jointly optimizes inter-session budget pacing and intra-session
bidding for AFL MUs, with the objective of maximizing the total utility.
Extensive experiments on six benchmark datasets show that it significantly
outperforms seven state-of-the-art approaches. On average, MultiBOS-AFL
achieves 12.28% higher utility, 14.52% more data acquired through auctions for
a given budget, and 1.23% higher test accuracy achieved by the resulting FL
model compared to the best baseline. To the best of our knowledge, it is the
first budget optimization decision support method with budget pacing capability
designed for MUs in multi-session forward auction-based federated learning.
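The abstract sketches a two-level design: a high-level policy paces the total budget across training sessions, and a low-level policy spends each session's allowance on auctions for individual DOs. The paper itself includes no code, so the following Python sketch only illustrates that hierarchical split under stated assumptions: the class names, the even-pacing rule, and the simulated values and clearing prices are hypothetical stand-ins, not the MultiBOS-AFL method.

```python
import random

class SessionBudgetPacer:
    """High-level policy: decides how much of the remaining total budget
    to release for the upcoming FL training session (inter-session pacing)."""

    def __init__(self, total_budget: float, num_sessions: int):
        self.remaining = total_budget
        self.sessions_left = num_sessions

    def allocate(self) -> float:
        # Even pacing is a trivial placeholder; a learned policy would
        # condition on observed auction dynamics instead.
        budget = self.remaining / max(self.sessions_left, 1)
        self.remaining -= budget
        self.sessions_left -= 1
        return budget

class IntraSessionBidder:
    """Low-level policy: spends the session budget across forward auctions
    for individual data owners (DOs) (intra-session bidding)."""

    def __init__(self, bid_scale: float = 1.0):
        self.bid_scale = bid_scale

    def bid(self, session_budget: float, estimated_value: float) -> float:
        # Bid proportionally to the DO's estimated data value, capped by budget.
        return min(self.bid_scale * estimated_value, session_budget)

def run_sessions(total_budget=100.0, num_sessions=5, auctions_per_session=10):
    pacer = SessionBudgetPacer(total_budget, num_sessions)
    bidder = IntraSessionBidder()
    total_utility = 0.0
    for _ in range(num_sessions):
        budget = pacer.allocate()
        for _ in range(auctions_per_session):
            value = random.uniform(0.5, 2.0)    # stand-in for a DO's data value
            price = random.uniform(0.4, 1.8)    # stand-in for the clearing price
            if bidder.bid(budget, value) >= price and budget >= price:
                budget -= price                 # pay the clearing price on a win
                total_utility += value - price  # utility = value minus payment
    return total_utility

if __name__ == "__main__":
    print(f"total utility: {run_sessions():.2f}")
```

Pacing the budget across sessions keeps early auctions from exhausting funds that later, possibly more valuable, DOs would require; in the paper, learned hierarchical-RL policies replace the fixed rules above.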
Related papers
- Resource-Efficient Federated Multimodal Learning via Layer-wise and Progressive Training [15.462969044840868]
We introduce LW-FedMML, a layer-wise federated multimodal learning approach which decomposes the training process into multiple stages.
We conduct extensive experiments across various FL and multimodal learning settings to validate the effectiveness of our proposed method.
Specifically, LW-FedMML reduces memory usage by up to $2.7\times$, computational operations (FLOPs) by $2.4\times$, and total communication cost by $2.3\times$.
arXiv Detail & Related papers (2024-07-22T07:06:17Z)
- Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning [55.65738319966385]
We propose a novel online algorithm, iterative Nash policy optimization (INPO).
Unlike previous methods, INPO bypasses the need to estimate the expected win rate for individual responses.
With an LLaMA-3-8B-based SFT model, INPO achieves a 42.6% length-controlled win rate on AlpacaEval 2.0 and a 37.8% win rate on Arena-Hard.
arXiv Detail & Related papers (2024-06-30T08:00:34Z)
- Agent-oriented Joint Decision Support for Data Owners in Auction-based Federated Learning [32.6997332038178]
Auction-based Federated Learning (AFL) has attracted extensive research interest due to its ability to motivate data owners (DOs) to join FL through economic means.
We propose a first-of-its-kind agent-oriented joint Pricing, Acceptance and Sub-delegation decision support approach for data owners in AFL (PAS-AFL).
It is the first approach to let each DO take on multiple FL tasks simultaneously, earning higher income and enhancing the throughput of FL tasks in the AFL ecosystem.
arXiv Detail & Related papers (2024-05-09T02:35:46Z)
- Hire When You Need to: Gradual Participant Recruitment for Auction-based Federated Learning [16.83897148104]
We propose a Gradual Participant Selection scheme for Auction-based Federated Learning (GPS-AFL).
GPS-AFL gradually selects the required DOs over multiple rounds of training as more information is revealed through repeated interactions.
It is designed to strike a balance between cost saving and performance enhancement, while mitigating the drawbacks of selection bias in reputation-based FL.
arXiv Detail & Related papers (2023-10-04T08:19:04Z)
- Utility-Maximizing Bidding Strategy for Data Consumers in Auction-based Federated Learning [14.410324763825733]
Auction-based Federated Learning (AFL) has attracted extensive research interest due to its ability to motivate data owners to join FL through economic means.
This paper proposes a first-of-its-kind utility-maximizing bidding strategy for data consumers in federated learning (Fed-Bidder).
arXiv Detail & Related papers (2023-05-11T13:16:36Z)
- DPP-based Client Selection for Federated Learning with Non-IID Data [97.1195165400568]
This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL).
We first analyze the effect of CS in FL and show that FL training can be accelerated by choosing participants that diversify the training dataset in each round.
We leverage data profiling and determinantal point process (DPP) sampling techniques to develop an algorithm termed Federated Learning with DPP-based Participant Selection (FL-DP$^3$S); a toy diversity-selection sketch in this spirit appears after this list.
arXiv Detail & Related papers (2023-03-30T13:14:54Z)
- GTFLAT: Game Theory Based Add-On For Empowering Federated Learning Aggregation Techniques [0.3867363075280543]
GTFLAT, as a game theory-based add-on, addresses an important research question: how can a federated learning algorithm achieve better performance and training efficiency by setting more effective adaptive weights for averaging in the model aggregation phase?
The results reveal that, on average, using GTFLAT increases the top-1 test accuracy by 1.38%, while needing 21.06% fewer communication rounds to reach that accuracy.
arXiv Detail & Related papers (2022-12-08T06:39:51Z)
- Semi-Synchronous Personalized Federated Learning over Mobile Edge Networks [88.50555581186799]
We propose a semi-synchronous PFL algorithm, termed Semi-Synchronous Personalized Federated Averaging (PerFedS$^2$), over mobile edge networks.
We derive an upper bound on the convergence rate of PerFedS$^2$ in terms of the number of participants per global round and the number of rounds.
Experimental results verify the effectiveness of PerFedS2 in saving training time as well as guaranteeing the convergence of training loss.
arXiv Detail & Related papers (2022-09-27T02:12:43Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework in which multiple participants collaborate on learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either generalize poorly across different model architectures or incur enormous extra computation and memory costs.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
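The FL-DP$^3$S entry above selects participants so that each round's training data is diverse. As a rough illustration only (not the paper's algorithm), the sketch below greedily maximizes the log-determinant of a similarity kernel over per-client profiles, a common MAP-style approximation to DPP sampling; the linear kernel and the synthetic label-distribution profiles are assumptions.

```python
import numpy as np

def greedy_diverse_selection(features: np.ndarray, k: int) -> list:
    """Greedily pick k clients maximizing the log-determinant of the
    similarity kernel over the selected set (a MAP approximation to a DPP)."""
    n = features.shape[0]
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    kernel = normed @ normed.T + 1e-6 * np.eye(n)  # jitter keeps it positive definite
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(kernel[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_score:
                best, best_score = i, logdet
        selected.append(best)
    return selected

# Example: 20 hypothetical clients described by their label-distribution profiles.
rng = np.random.default_rng(0)
profiles = rng.dirichlet(alpha=np.ones(10), size=20)  # per-client label mix
print(greedy_diverse_selection(profiles, k=5))
```

An exact k-DPP would sample subsets randomly in proportion to these determinants; the deterministic greedy version is shown only because it is shorter and easier to follow.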