Adaptive incentive for cross-silo federated learning: A multi-agent
reinforcement learning approach
- URL: http://arxiv.org/abs/2302.07493v1
- Date: Wed, 15 Feb 2023 06:45:35 GMT
- Title: Adaptive incentive for cross-silo federated learning: A multi-agent
reinforcement learning approach
- Authors: Shijing Yuan, Hongze Liu, Hongtao Lv, Zhanbo Feng, Jie Li, Hongyang
Chen and Chentao Wu
- Abstract summary: Cross-silo federated learning (FL) enables organizations to train global models on isolated data.
We propose a novel adaptive mechanism for cross-silo FL that incentivizes organizations to contribute data.
- Score: 12.596779831510508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-silo federated learning (FL) is a typical FL setting that
enables organizations (e.g., financial or medical entities) to train global
models on isolated data. Reasonable incentives are key to encouraging
organizations to contribute data. However, existing works on incentivizing
cross-silo FL neglect environmental dynamics (e.g., the precision of the
trained global model and the data owned by uncertain clients during the
training process). Moreover, most of them assume that organizations share
private information, which is unrealistic. To overcome these limitations, we
propose a novel adaptive mechanism for cross-silo FL that incentivizes
organizations to contribute data so as to maximize their long-term payoffs in
a realistic dynamic training environment. The mechanism is based on
multi-agent reinforcement learning and learns a near-optimal data-contribution
strategy from the history of potential games, without organizations' private
information. Experiments demonstrate that our mechanism achieves adaptive
incentives and effectively improves organizations' long-term payoffs.
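Since the abstract stays high-level, below is a minimal sketch of the core idea as one might reproduce it: each organization runs an independent Q-learner that chooses a discrete data-contribution level using only public environment feedback (the global model's precision), with no private information exchanged. All names, constants, payoff shapes, and precision dynamics here are illustrative assumptions; the paper's actual potential-game formulation is not reproduced by this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ORGS = 4             # number of participating organizations (agents)
LEVELS = [0, 1, 2, 3]  # hypothetical discrete data-contribution levels
ROUNDS = 5000          # simulated FL training rounds
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # Q-learning rate, discount, exploration
UNIT_COST = 0.15       # assumed per-unit cost of contributing data
N_STATES = 10          # discretization bins for global-model precision

# One independent Q-table per organization: the shared state is the
# discretized global-model precision; the action is a contribution level.
Q = np.zeros((N_ORGS, N_STATES, len(LEVELS)))

def precision_state(p: float) -> int:
    """Discretize precision in [0, 1) into a Q-table state index."""
    return min(int(p * N_STATES), N_STATES - 1)

precision = 0.1
state = precision_state(precision)
for _ in range(ROUNDS):
    # Epsilon-greedy action selection, independently per organization;
    # agents never see each other's Q-tables or costs.
    acts = [
        rng.integers(len(LEVELS)) if rng.random() < EPS
        else int(np.argmax(Q[i, state]))
        for i in range(N_ORGS)
    ]
    total = sum(LEVELS[a] for a in acts)

    # Toy environment dynamics: precision improves with diminishing
    # returns in the total contribution and slowly decays otherwise.
    precision = 0.95 * precision + 0.05 * (1.0 - np.exp(-0.5 * total))
    next_state = precision_state(precision)

    # Payoff per organization: the shared benefit of a more precise
    # global model minus the private cost of the data it contributed.
    for i, a in enumerate(acts):
        reward = precision - UNIT_COST * LEVELS[a]
        Q[i, state, a] += ALPHA * (
            reward + GAMMA * Q[i, next_state].max() - Q[i, state, a]
        )
    state = next_state

for i in range(N_ORGS):
    greedy = [int(np.argmax(Q[i, s])) for s in range(N_STATES)]
    print(f"org {i}: greedy contribution level per precision state -> {greedy}")
```

Depending on UNIT_COST, agents in this toy may learn to free-ride on others' contributions; the tension between that temptation and the long-term payoff of a more precise global model is exactly what the paper's adaptive mechanism is designed to resolve.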
Related papers
- WallStreetFeds: Client-Specific Tokens as Investment Vehicles in Federated Learning [1.827018440608344]
Federated Learning (FL) is a collaborative machine learning paradigm which allows participants to collectively train a model while training data remains private.
In this paper, we propose a novel framework which introduces client-specific tokens as investment vehicles within the FL ecosystem.
arXiv Detail & Related papers (2025-06-25T15:05:01Z)
- Incentivizing Inclusive Contributions in Model Sharing Markets [47.66231950174746]
This paper proposes inclusive and incentivized personalized federated learning (iPFL).
iPFL incentivizes data holders with diverse purposes to collaboratively train personalized models without revealing raw data.
Empirical studies on eleven AI tasks demonstrate that iPFL consistently achieves the highest economic utility.
arXiv Detail & Related papers (2025-05-05T08:45:26Z)
- Learning Critically: Selective Self Distillation in Federated Learning on Non-IID Data [17.624808621195978]
We propose a Selective Self-Distillation method for federated learning (FedSSD).
FedSSD imposes adaptive constraints on the local updates by self-distilling the global model's knowledge.
It achieves better generalization and robustness in fewer communication rounds, compared with other state-of-the-art FL methods.
arXiv Detail & Related papers (2025-04-20T18:06:55Z)
- Blockchain-based Framework for Scalable and Incentivized Federated Learning [0.820828081284034]
Federated Learning (FL) enables collaborative model training without sharing raw data, preserving privacy while harnessing distributed datasets.
Traditional FL systems often rely on centralized aggregation, introducing trust issues, single points of failure, and limited means of incentivizing meaningful client contributions.
This paper presents a blockchain-based FL framework that addresses these limitations by integrating smart contracts and a novel hybrid incentive mechanism.
arXiv Detail & Related papers (2025-02-20T00:38:35Z)
- Multi-level Personalized Federated Learning on Heterogeneous and Long-Tailed Data [10.64629029156029]
We introduce an innovative personalized Federated Learning framework, Multi-level Personalized Federated Learning (MuPFL).
MuPFL integrates three pivotal modules: Biased Activation Value Dropout (BAVD), Adaptive Cluster-based Model Update (ACMU), and Prior Knowledge-assisted Fine-tuning (PKCF).
Experiments on diverse real-world datasets show that MuPFL consistently outperforms state-of-the-art baselines, even under extreme non-i.i.d. and long-tail conditions.
arXiv Detail & Related papers (2024-05-10T11:52:53Z)
- FedEGG: Federated Learning with Explicit Global Guidance [90.04705121816185]
Federated Learning (FL) holds great potential for diverse applications owing to its privacy-preserving nature.
Existing methods help address these challenges via optimization-based client constraints, adaptive client selection, or the use of pre-trained models or synthetic data.
We present FedEGG, a new FL algorithm that constructs a global guiding task using a well-defined, easy-to-converge learning task.
arXiv Detail & Related papers (2024-04-18T04:25:21Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Towards Interpretable Federated Learning [19.764172768506132]
Federated learning (FL) enables multiple data owners to build machine learning models collaboratively without exposing their private local data.
It is important to balance the needs for performance, privacy preservation, and interpretability, especially in mission-critical applications such as finance and healthcare.
We conduct a comprehensive analysis of the representative IFL approaches, the commonly adopted performance evaluation metrics, and promising directions towards building versatile IFL techniques.
arXiv Detail & Related papers (2023-02-27T02:06:18Z)
- Welfare and Fairness Dynamics in Federated Learning: A Client Selection Perspective [1.749935196721634]
Federated learning (FL) is a privacy-preserving learning technique that enables distributed computing devices to train shared learning models.
The economic considerations of the clients, such as fairness and incentives, are yet to be fully explored.
We propose a novel incentive mechanism that involves a client selection process to remove low-quality clients and a money transfer process to ensure a fair reward distribution.
arXiv Detail & Related papers (2023-02-17T16:31:19Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Incentivizing Federated Learning [2.420324724613074]
This paper presents an incentive mechanism that encourages clients to contribute as much data as they can obtain.
Unlike previous incentive mechanisms, our approach does not monetize data.
We theoretically prove that, under certain conditions, clients will participate in federated learning with as much data as they can possibly possess.
arXiv Detail & Related papers (2022-05-22T23:02:43Z)
- A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) serves as a privacy-preserving machine learning paradigm that realizes collaborative model training across distributed clients.
To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients because the task is privately trained by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets [52.60094373289771]
Federated learning is a popular distributed machine learning paradigm with enhanced privacy.
We propose LotteryFL -- a personalized and communication-efficient federated learning framework.
We show that LotteryFL significantly outperforms existing solutions in terms of personalization and communication cost.
arXiv Detail & Related papers (2020-08-07T20:45:12Z)