Welfare and Fairness Dynamics in Federated Learning: A Client Selection
Perspective
- URL: http://arxiv.org/abs/2302.08976v1
- Date: Fri, 17 Feb 2023 16:31:19 GMT
- Title: Welfare and Fairness Dynamics in Federated Learning: A Client Selection
Perspective
- Authors: Yash Travadi, Le Peng, Xuan Bi, Ju Sun, Mochen Yang
- Abstract summary: Federated learning (FL) is a privacy-preserving learning technique that enables distributed computing devices to train shared learning models.
The economic considerations of the clients, such as fairness and incentive, are yet to be fully explored.
We propose a novel incentive mechanism that involves a client selection process to remove low-quality clients and a money transfer process to ensure a fair reward distribution.
- Score: 1.749935196721634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a privacy-preserving learning technique that
enables distributed computing devices to train shared learning models across
data silos collaboratively. Existing FL works mostly focus on designing
advanced FL algorithms to improve the model performance. However, the economic
considerations of the clients, such as fairness and incentive, are yet to be
fully explored. Without such considerations, self-motivated clients may lose
interest and leave the federation. To address this problem, we designed a novel
incentive mechanism that involves a client selection process to remove
low-quality clients and a money transfer process to ensure a fair reward
distribution. Our experimental results strongly demonstrate that the proposed
incentive mechanism can effectively improve the duration and fairness of the
federation.
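As a rough, hypothetical illustration of the two-stage mechanism the abstract describes (client selection followed by a money transfer), the Python sketch below drops clients whose contribution-quality score falls below a threshold and then splits a reward budget proportionally among the rest. The function name, quality scores, and threshold are illustrative assumptions, not the paper's actual selection rule or transfer scheme.

```python
def select_and_reward(clients, quality, budget, threshold=0.0):
    """Hypothetical two-stage incentive mechanism (not the paper's exact rules).

    clients: list of client ids; quality: dict id -> contribution score.
    """
    # Stage 1: client selection -- remove low-quality clients.
    selected = [c for c in clients if quality[c] > threshold]
    if not selected:
        return [], {}
    # Stage 2: money transfer -- split the budget proportionally to quality,
    # so equal contributions receive equal rewards.
    total = sum(quality[c] for c in selected)
    payments = {c: budget * quality[c] / total for c in selected}
    return selected, payments

selected, payments = select_and_reward(
    ["a", "b", "c"], {"a": 0.9, "b": 0.4, "c": -0.1}, budget=100.0)
print(selected, payments)  # ['a', 'b'] with payouts proportional to quality
```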
Related papers
- Federated Learning Can Find Friends That Are Advantageous [14.993730469216546]
In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.
We introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective.
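A minimal sketch of adaptive aggregation weighting, assuming cosine similarity to the mean client update as the alignment signal; the paper's actual weighting rule is not reproduced here.

```python
import numpy as np

def adaptive_aggregate(updates):
    """Aggregate client updates with adaptive weights (illustrative only).

    updates: list of 1-D numpy arrays, one model update per client.
    """
    mean_update = np.mean(updates, axis=0)
    # Weight each client by how well its update aligns with the mean update.
    sims = np.array([
        float(u @ mean_update) /
        (np.linalg.norm(u) * np.linalg.norm(mean_update) + 1e-12)
        for u in updates
    ])
    weights = np.clip(sims, 0.0, None)    # ignore opposing updates
    weights /= weights.sum() + 1e-12      # normalize to a convex combination
    return sum(w * u for w, u in zip(weights, updates))
```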
arXiv Detail & Related papers (2024-02-07T17:46:37Z)
- Client Selection in Federated Learning: Principles, Challenges, and
Opportunities [15.33636272844544]
Federated Learning (FL) is a privacy-preserving paradigm for training Machine Learning (ML) models.
In a typical FL scenario, clients exhibit significant heterogeneity in terms of data distribution and hardware configurations.
Various client selection algorithms have been developed, showing promising performance improvement.
arXiv Detail & Related papers (2022-11-03T01:51:14Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated
Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on this key observation, we design an efficient client sampling mechanism, namely Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
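A plaintext sketch of one plausible class-imbalance measure matching this summary (squared distance of the pooled label distribution from uniform); Fed-CBS derives its measure under homomorphic encryption, which this illustration omits.

```python
import numpy as np

def class_imbalance(client_label_counts):
    """Lower is more class-balanced (illustrative, not Fed-CBS's exact measure).

    client_label_counts: list of length-C label-count vectors, one per client.
    """
    pooled = np.sum(client_label_counts, axis=0).astype(float)
    p = pooled / pooled.sum()                  # pooled class distribution
    uniform = np.full_like(p, 1.0 / len(p))
    return float(np.sum((p - uniform) ** 2))   # squared L2 distance to uniform

# A sampler could greedily pick the client whose addition keeps this
# measure smallest, yielding a nearly class-balanced group.
```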
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- FedToken: Tokenized Incentives for Data Contribution in Federated
Learning [33.93936816356012]
We propose a contribution-based tokenized incentive scheme, namely FedToken, backed by blockchain technology.
We first approximate the contribution of local models during model aggregation, then strategically schedule clients to lower the number of communication rounds needed for convergence.
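As a hedged illustration of approximating contributions during aggregation (FedToken's actual estimator and token mechanics are not reproduced), a simple leave-one-out estimate can be written as follows; aggregate and evaluate are assumed, user-supplied callables.

```python
def leave_one_out_contributions(updates, aggregate, evaluate):
    """Illustrative contribution estimate, not FedToken's method.

    updates: list of client updates; aggregate: fn(list of updates) -> model;
    evaluate: fn(model) -> validation loss (lower is better).
    """
    base_loss = evaluate(aggregate(updates))
    contributions = []
    for i in range(len(updates)):
        rest = updates[:i] + updates[i + 1:]
        # A client's contribution is how much the validation loss worsens
        # when its update is left out of the aggregate.
        contributions.append(evaluate(aggregate(rest)) - base_loss)
    return contributions
```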
arXiv Detail & Related papers (2022-09-20T14:58:08Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distributions are non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) is a privacy-preserving machine learning paradigm that realizes collaborative model training by distributed clients.
To accomplish an FL task, the task publisher pays financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
Designing proper incentives for the FL clients is challenging because the task is trained privately by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z)
- Prior-Free Auctions for the Demand Side of Federated Learning [0.76146285961466]
Federated learning allows distributed clients to learn a shared machine learning model without sharing their sensitive training data.
We propose a mechanism, FIPFA, to collect monetary contributions from self-interested clients.
We run experiments on the MNIST dataset to test the model quality clients receive under FIPFA, as well as the mechanism's incentive properties.
arXiv Detail & Related papers (2021-03-26T10:22:18Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
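The round structure described above can be sketched schematically as follows; the Client interface and the random choice of block generator are illustrative stand-ins for the paper's actual training and mining competition.

```python
import random
from dataclasses import dataclass

@dataclass
class Client:
    id: int
    local_model: float = 0.0  # stand-in for real model parameters

    def aggregate(self, models):
        models = list(models)
        return sum(models) / len(models)

    def train_locally(self):
        self.local_model += random.uniform(-0.1, 0.1)  # toy local update

def blade_fl_round(clients):
    # 1) Each client broadcasts its locally trained model to the others.
    broadcast = {c.id: c.local_model for c in clients}
    # 2) Clients compete to generate the next block from the received models;
    #    a random winner stands in for the mining competition here.
    winner = random.choice(clients)
    block = {"miner": winner.id, "models": dict(broadcast)}
    # 3) Every client aggregates the models recorded in the block, then runs
    #    the next round of local training on top of the aggregate.
    for c in clients:
        c.local_model = c.aggregate(block["models"].values())
        c.train_locally()
    return block
```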
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion, called Influence, to quantify this influence over the model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with
Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.