A Contract Theory based Incentive Mechanism for Federated Learning
- URL: http://arxiv.org/abs/2108.05568v1
- Date: Thu, 12 Aug 2021 07:30:42 GMT
- Title: A Contract Theory based Incentive Mechanism for Federated Learning
- Authors: Mengmeng Tian, Yuxin Chen, Yuan Liu, Zehui Xiong, Cyril Leung, Chunyan Miao
- Abstract summary: Federated learning (FL) is a privacy-preserving machine learning paradigm in which a model is trained collaboratively by distributed clients.
To accomplish an FL task, the task publisher pays financial incentives to the FL server, which offloads the task to the contributing FL clients.
Designing proper incentives for the FL clients is challenging because the clients train the task privately.
- Score: 52.24418084256517
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a privacy-preserving machine learning
paradigm in which a model is trained collaboratively by distributed clients.
To accomplish an FL task, the task publisher pays financial incentives to the
FL server, which offloads the task to the contributing FL clients. Designing
proper incentives for the FL clients is challenging because the clients train
the task privately. This paper proposes a contract theory based FL task
training model that minimizes the incentive budget subject to the clients
being individually rational (IR) and incentive compatible (IC) in each FL
training round. We design a two-dimensional contract model by formally
defining two private types of clients, namely data quality and computation
effort. To effectively aggregate the trained models, a contract-based
aggregator is proposed. We analyze the feasible and optimal contract
solutions to the proposed contract model. Experimental results show that the
generalization accuracy of the FL tasks can be improved by the proposed
incentive mechanism when contract-based aggregation is applied.
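
To make the contract structure concrete, below is a minimal sketch of a contract menu with IR/IC feasibility checks. The type values, the cost model, and the collapsing of the paper's two private dimensions (data quality, computation effort) into a single effective type are simplifying assumptions of this sketch, not the paper's formulation.

```python
# Minimal contract-menu sketch. The paper's contract is two-dimensional
# (data quality, computation effort); this sketch collapses both into one
# effective type theta, a simplifying assumption.
TYPES = [0.25, 0.5, 1.0]  # illustrative private client types

# Menu: type -> (required workload, reward). Built so the lowest type's IR
# constraint and the downward IC constraints bind, the usual structure of
# an optimal screening contract.
MENU = {0.25: (0.25, 1.0), 0.5: (0.5, 1.5), 1.0: (1.0, 2.0)}

def utility(theta, item):
    """Client utility: reward minus type-dependent training cost."""
    workload, reward = item
    return reward - workload / theta  # stronger types train more cheaply

def is_feasible(menu):
    """Check individual rationality (IR) and incentive compatibility (IC)."""
    for t in TYPES:
        own = utility(t, menu[t])
        if own < 0:  # IR: accepting the contract must beat opting out
            return False
        if any(utility(t, menu[s]) > own for s in TYPES if s != t):
            return False  # IC: no type gains by taking another type's item
    return True

print(is_feasible(MENU))                 # True
print(sum(r for _, r in MENU.values()))  # total incentive budget: 4.5
```

Summing the menu's rewards gives the incentive budget the task publisher must cover; the paper's optimization minimizes this budget subject to the same IR and IC constraints.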
Related papers
- Fair Concurrent Training of Multiple Models in Federated Learning [32.74516106486226]
Federated learning (FL) enables collaborative learning across multiple clients.
The recent proliferation of FL applications may increasingly require multiple FL tasks to be trained simultaneously.
Current multi-model federated learning (MMFL) algorithms use naive average-based client-task allocation schemes.
We propose a difficulty-aware algorithm that dynamically allocates clients to tasks in each training round.
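As a rough illustration of difficulty-aware allocation, the sketch below assigns more clients to tasks with higher current loss. Using task loss as the difficulty signal and the loss-proportional weighting are assumptions of this sketch, not necessarily the paper's algorithm.

```python
import random

def allocate_clients(clients, task_losses):
    """Assign each client to one task for the coming round, weighting tasks
    by their current loss so harder tasks attract more clients."""
    tasks = list(task_losses)
    total = sum(task_losses.values())
    weights = [task_losses[t] / total for t in tasks]
    return {c: random.choices(tasks, weights=weights)[0] for c in clients}

# Example round: task "B" is currently harder, so it attracts more clients.
print(allocate_clients(clients=range(10), task_losses={"A": 0.2, "B": 0.8}))
```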
arXiv Detail & Related papers (2024-04-22T02:41:10Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Price-Discrimination Game for Distributed Resource Management in Federated Learning [3.724337025141794]
In vanilla federated learning (FL) such as FedAvg, the parameter server (PS) and multiple distributed clients can form a typical buyer's market.
This paper proposes differentiated pricing for the services provided by different clients, rather than a single uniform price for all clients.
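The sketch below contrasts a uniform price, which must cover the costliest participating client, with per-client cost-plus pricing; the cost-plus rule is only an illustrative stand-in for the paper's pricing game.

```python
def uniform_price(costs):
    """A single service price must cover the costliest participating client."""
    return max(costs.values())

def differentiated_prices(costs, margin=0.1):
    """Pay each client its own cost plus a margin (an illustrative cost-plus
    rule, not the paper's actual pricing game)."""
    return {c: cost * (1 + margin) for c, cost in costs.items()}

costs = {"client_a": 1.0, "client_b": 3.0}
print(uniform_price(costs) * len(costs))           # uniform budget: 6.0
print(sum(differentiated_prices(costs).values()))  # differentiated: ~4.4
```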
arXiv Detail & Related papers (2023-08-26T10:09:46Z)
- BARA: Efficient Incentive Mechanism with Online Reward Budget Allocation in Cross-Silo Federated Learning [25.596968764427043]
Federated learning (FL) is a prospective distributed machine learning framework that can preserve data privacy.
In cross-silo FL, an incentive mechanism is indispensable for motivating data owners to contribute their models to FL training.
We propose an online reward budget allocation algorithm using Bayesian optimization named BARA.
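BARA itself relies on Bayesian optimization; to stay dependency-free, the sketch below substitutes plain random search over splits of a fixed reward budget, and a toy utility function stands in for the feedback that BARA would obtain from real FL training rounds.

```python
import random

def observed_utility(split):
    """Stand-in for the measured model improvement under a budget split;
    in BARA this feedback comes from actual FL training rounds."""
    a, b = split
    return -(a - 0.7) ** 2 - (b - 0.3) ** 2  # toy objective, unknown to the optimizer

# Online allocation loop; random search replaces Bayesian optimization here.
best_split, best_val = None, float("-inf")
for _ in range(200):
    a = random.random()
    split = (a, 1.0 - a)  # split a fixed total budget two ways
    val = observed_utility(split)
    if val > best_val:
        best_split, best_val = split, val
print(best_split)  # should approach (0.7, 0.3)
```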
arXiv Detail & Related papers (2023-05-09T07:36:01Z)
- Can Fair Federated Learning reduce the need for Personalisation? [9.595853312558276]
Federated Learning (FL) enables training ML models on edge clients without sharing data.
This paper evaluates two Fair FL (FFL) algorithms as starting points for personalisation.
We propose Personalisation-aware Federated Learning (PaFL) as a paradigm that pre-emptively uses personalisation losses during training.
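One plausible reading of using personalisation losses during training is a FedProx-style proximal term that keeps each client's model near the global one; the sketch below uses that term purely as an illustrative stand-in, since the paper's exact losses are not given here.

```python
def client_loss(task_loss, local_weights, global_weights, lam=0.1):
    """Task loss plus a FedProx-like proximal penalty keeping the local model
    near the global one (one plausible 'personalisation loss')."""
    prox = sum((w - g) ** 2 for w, g in zip(local_weights, global_weights))
    return task_loss + lam * prox

print(client_loss(0.5, [1.0, 2.0], [0.9, 2.1]))  # 0.5 + 0.1 * 0.02 = ~0.502
```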
arXiv Detail & Related papers (2023-05-04T11:03:33Z)
- Welfare and Fairness Dynamics in Federated Learning: A Client Selection Perspective [1.749935196721634]
Federated learning (FL) is a privacy-preserving learning technique that enables distributed computing devices to train shared learning models.
The economic considerations of the clients, such as fairness and incentive, are yet to be fully explored.
We propose a novel incentive mechanism that involves a client selection process to remove low-quality clients and a money transfer process to ensure a fair reward distribution.
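A minimal sketch of this two-stage idea: drop clients below a quality floor, then split the reward pool in proportion to the remaining contributions. Both the threshold rule and the proportional split are illustrative assumptions, not the paper's exact mechanism.

```python
def select_and_reward(contributions, total_reward, quality_floor=0.2):
    """Drop clients whose measured contribution is below the floor, then
    split the reward in proportion to the remaining contributions."""
    selected = {c: v for c, v in contributions.items() if v >= quality_floor}
    total = sum(selected.values())
    return {c: total_reward * v / total for c, v in selected.items()}

print(select_and_reward({"a": 0.5, "b": 0.3, "c": 0.1}, total_reward=10.0))
# roughly {'a': 6.25, 'b': 3.75}; low-quality client 'c' is filtered out
```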
arXiv Detail & Related papers (2023-02-17T16:31:19Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
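The round structure described above can be sketched as follows, with the block-generation competition reduced to a random winner and aggregation reduced to plain coordinate-wise averaging; both are simplifications of BLADE-FL.

```python
import random

def blade_fl_round(local_models):
    """One heavily simplified BLADE-FL round: every client broadcasts its
    model, one client wins the block-generation race and packs the received
    models into a block, and each client then averages the block's models
    before its next local training round. Mining/validation are stubbed."""
    winner = random.choice(list(local_models))  # stand-in for the mining race
    block = list(local_models.values())         # models packed into the block
    aggregated = [sum(ws) / len(block) for ws in zip(*block)]
    return winner, aggregated

models = {"c1": [1.0, 2.0], "c2": [3.0, 4.0]}
print(blade_fl_round(models))  # (some winner, [2.0, 3.0])
```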
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
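As a concrete proxy for this notion, the sketch below scores each client by how far the aggregated parameters move when that client's update is left out. This leave-one-out measure is only an illustration; the paper's estimator for Influence is designed to be more efficient.

```python
def client_influence(updates):
    """Score each client's influence as the distance the parameter average
    moves when that client's update is left out (a leave-one-out proxy)."""
    def avg(vectors):
        return [sum(ws) / len(vectors) for ws in zip(*vectors)]
    full = avg(list(updates.values()))
    return {
        c: sum(abs(a - b) for a, b in
               zip(full, avg([v for k, v in updates.items() if k != c])))
        for c in updates
    }

print(client_influence({"c1": [1.0], "c2": [1.0], "c3": [4.0]}))
# c3 moves the average the most, so it gets the largest influence score
```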
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL has a good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
This integration gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)