Collaboration in Participant-Centric Federated Learning: A
Game-Theoretical Perspective
- URL: http://arxiv.org/abs/2207.12030v1
- Date: Mon, 25 Jul 2022 10:12:22 GMT
- Title: Collaboration in Participant-Centric Federated Learning: A
Game-Theoretical Perspective
- Authors: Guangjing Huang and Xu Chen and Tao Ouyang and Qian Ma and Lin Chen
and Junshan Zhang
- Abstract summary: Federated learning (FL) is a promising distributed framework for collaborative artificial intelligence model training.
A bootstrapping component that has attracted significant research attention is the design of incentive mechanisms to stimulate user collaboration in FL.
Few works consider forging participant-centric collaboration among participants to pursue an FL model for their common interests.
We propose a novel analytic framework for incentivizing effective and efficient collaborations for participant-centric FL.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a promising distributed framework for
collaborative artificial intelligence model training while protecting user
privacy. A bootstrapping component that has attracted significant research
attention is the design of incentive mechanisms to stimulate user collaboration
in FL. The majority of works adopt a broker-centric approach, helping the
central operator attract participants and obtain a well-trained
model. Few works consider forging participant-centric collaboration among
participants to pursue an FL model for their common interests, which induces
dramatic differences in incentive mechanism design from the broker-centric FL.
To coordinate the selfish and heterogeneous participants, we propose a novel
analytic framework for incentivizing effective and efficient collaborations for
participant-centric FL. Specifically, we respectively propose two novel game
models for contribution-oblivious FL (COFL) and contribution-aware FL (CAFL),
where the latter one implements a minimum contribution threshold mechanism. We
further analyze the existence and uniqueness of the Nash equilibrium for both
the COFL and CAFL games, and design efficient algorithms to achieve equilibrium
solutions. Extensive performance evaluations show that a free-riding
phenomenon exists in COFL, which can be greatly alleviated by adopting the
CAFL model with an optimized minimum threshold.
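The free-riding outcome the abstract describes can be illustrated with best-response dynamics on a stylized contribution game. This is a minimal sketch under an assumed logarithmic utility u_i(x) = a_i * log(1 + sum(x)) - c_i * x_i, not the paper's actual COFL/CAFL model; the benefit weights `a` and unit costs `c` are hypothetical.

```python
# Best-response dynamics for a stylized contribution game (assumed model,
# not the paper's COFL formulation). Each participant i chooses a training
# effort x_i to maximize a_i*log(1 + sum(x)) - c_i*x_i, taking the others'
# efforts as fixed; iterating these best responses until they stabilize
# yields a Nash equilibrium.
import math


def best_response_dynamics(a, c, iters=200):
    n = len(a)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            others = sum(x) - x[i]
            # First-order condition: a_i/(1 + X) = c_i  =>  X = a_i/c_i - 1,
            # so player i tops up the total to a_i/c_i - 1 if possible.
            x[i] = max(0.0, a[i] / c[i] - 1.0 - others)
    return x


a = [4.0, 3.0, 2.0]   # hypothetical benefit weights
c = [1.0, 1.0, 1.0]   # hypothetical unit training costs
eq = best_response_dynamics(a, c)
```

With these numbers the participant with the highest benefit-to-cost ratio ends up shouldering the entire effort while the others contribute zero, which is exactly the free-riding pattern a minimum contribution threshold (as in CAFL) is designed to break.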
Related papers
- Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation
We introduce a payoff allocation framework based on the least core (LC) concept.
Unlike traditional methods, the LC prioritizes the cohesion of the federation by minimizing the maximum dissatisfaction.
Case studies in federated intrusion detection demonstrate that our mechanism correctly identifies pivotal contributors and strategic alliances.
arXiv Detail & Related papers (2026-02-03T11:10:50Z)
- WallStreetFeds: Client-Specific Tokens as Investment Vehicles in Federated Learning
Federated Learning (FL) is a collaborative machine learning paradigm which allows participants to collectively train a model while training data remains private.
In this paper, we propose a novel framework which introduces client-specific tokens as investment vehicles within the FL ecosystem.
arXiv Detail & Related papers (2025-06-25T15:05:01Z) - Redefining Contributions: Shapley-Driven Federated Learning [3.9539878659683363]
Federated learning (FL) has emerged as a pivotal approach in machine learning.
It is challenging to ensure global model convergence when participants do not contribute equally and/or honestly.
This paper proposes a novel contribution assessment method called ShapFed for fine-grained evaluation of participant contributions in FL.
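Shapley-style contribution assessment, as in the ShapFed entry above, can be sketched with the textbook Shapley value over a coalition value function. This is a generic illustration, not ShapFed's actual (fine-grained, gradient-based) method; the "data quality" value function below is a hypothetical stand-in for model accuracy on a coalition's data.

```python
# Exact Shapley values over all coalitions (generic textbook formula; the
# coalition value function v is a hypothetical stand-in, not ShapFed's).
from itertools import combinations
from math import factorial


def shapley(players, v):
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                # Weight of coalition size k in the Shapley average.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += w * (v(frozenset(S) | {p}) - v(frozenset(S)))
    return phi


# Toy additive value function (assumed for illustration).
quality = {"A": 0.5, "B": 0.3, "C": 0.2}
v = lambda S: sum(quality[p] for p in S)
phi = shapley(list(quality), v)
```

For an additive value function the Shapley value simply recovers each participant's own quality; the interesting cases, and the reason exact computation is exponential in the number of participants, arise when coalition values are super- or sub-additive.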
arXiv Detail & Related papers (2024-06-01T22:40:31Z)
- FedSAC: Dynamic Submodel Allocation for Collaborative Fairness in Federated Learning
We present FedSAC, a novel Federated learning framework with dynamic Submodel Allocation for Collaborative fairness.
We develop a submodel allocation module with a theoretical guarantee of fairness.
Experiments conducted on three public benchmarks demonstrate that FedSAC outperforms all baseline methods in both fairness and model accuracy.
arXiv Detail & Related papers (2024-05-28T15:43:29Z)
- FedCompetitors: Harmonious Collaboration in Federated Learning with Competing Participants
Federated learning (FL) provides a privacy-preserving approach for collaborative training of machine learning models.
It is crucial to select appropriate collaborators for each FL participant based on data complementarity.
It is imperative to consider the inter-individual relationships among FL participants (FL-PTs), where some FL-PTs engage in competition.
arXiv Detail & Related papers (2023-12-18T17:53:01Z)
- Deep Equilibrium Models Meet Federated Learning
This study explores the problem of Federated Learning (FL) by utilizing the Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Welfare and Fairness Dynamics in Federated Learning: A Client Selection Perspective
Federated learning (FL) is a privacy-preserving learning technique that enables distributed computing devices to train shared learning models.
The economic considerations of the clients, such as fairness and incentive, are yet to be fully explored.
We propose a novel incentive mechanism that involves a client selection process to remove low-quality clients and a money transfer process to ensure a fair reward distribution.
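The two-step mechanism in this entry, filtering out low-quality clients and then transferring money fairly, can be sketched as follows. This is a minimal illustration under assumed inputs (a per-client quality score, a fixed threshold, and rewards proportional to quality), not the paper's actual mechanism.

```python
# Stylized client selection + reward transfer (assumed mechanism, not the
# paper's): drop clients below a quality threshold, then split a fixed
# budget among the survivors in proportion to their quality scores.
def select_and_reward(qualities, threshold, budget):
    selected = {c: q for c, q in qualities.items() if q >= threshold}
    total = sum(selected.values())
    rewards = {c: budget * q / total for c, q in selected.items()}
    return selected, rewards


# Hypothetical per-client quality scores.
qualities = {"a": 0.9, "b": 0.2, "c": 0.6}
selected, rewards = select_and_reward(qualities, threshold=0.5, budget=10.0)
```

Proportional splitting is only one of many fair-division rules; the point of the sketch is the pipeline shape (filter, then allocate), which is what the entry describes.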
arXiv Detail & Related papers (2023-02-17T16:31:19Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- A Contract Theory based Incentive Mechanism for Federated Learning
Federated learning (FL) serves as a data privacy-preserved machine learning paradigm, and realizes the collaborative model trained by distributed clients.
To accomplish an FL task, the task publisher pays financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients due to the fact that the task is privately trained by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z)
- An Incentive Mechanism for Federated Learning in Wireless Cellular Network: An Auction Approach
Federated Learning (FL) is a distributed learning framework that addresses the distributed nature of data in machine learning.
In this paper, we consider an FL system that involves one base station (BS) and multiple mobile users.
We formulate the incentive mechanism between the BS and mobile users as an auction game in which the BS is the auctioneer and the mobile users are the sellers.
arXiv Detail & Related papers (2020-09-22T01:50:39Z)
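An auction between a base station (buyer of training effort) and mobile users (sellers), as in the entry above, is a reverse auction. The sketch below uses a generic Vickrey-style rule, select the m cheapest asks and pay each winner the (m+1)-th lowest ask, which is an assumed mechanism for illustration, not the one designed in the paper.

```python
# Stylized reverse auction for FL participation (assumed Vickrey-style rule,
# not the paper's mechanism). The base station picks the m cheapest asks and
# pays every winner the first excluded ask, which makes truthful bidding a
# dominant strategy for the sellers.
def reverse_auction(asks, m):
    order = sorted(range(len(asks)), key=lambda i: asks[i])
    winners = order[:m]
    # Uniform clearing price: the (m+1)-th lowest ask if one exists.
    price = asks[order[m]] if m < len(asks) else max(asks)
    return winners, price


# Hypothetical asks (cost each mobile user demands to participate).
winners, price = reverse_auction([3.0, 1.0, 2.5, 2.0], m=2)
```

Paying the first excluded ask rather than each winner's own ask is what removes the incentive to overstate costs, at the price of a higher payment per winner.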
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.