Incentive Allocation in Vertical Federated Learning Based on Bankruptcy
Problem
- URL: http://arxiv.org/abs/2307.03515v1
- Date: Fri, 7 Jul 2023 11:08:18 GMT
- Title: Incentive Allocation in Vertical Federated Learning Based on Bankruptcy
Problem
- Authors: Afsana Khan, Marijn ten Thij, Frank Thuijsman and Anna Wilbik
- Abstract summary: Vertical federated learning (VFL) is a promising approach for collaboratively training machine learning models using private data partitioned vertically across different parties.
In this paper, we focus on the problem of allocating incentives to the passive parties by the active party based on their contributions to the VFL process.
We formulate this problem as a Bankruptcy Problem, a classical game-theoretic problem whose Talmud-rule solution coincides with the Nucleolus, and solve it using the Talmud's division rule.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vertical federated learning (VFL) is a promising approach for collaboratively
training machine learning models using private data partitioned vertically
across different parties. Ideally in a VFL setting, the active party (party
possessing features of samples with labels) benefits by improving its machine
learning model through collaboration with some passive parties (parties
possessing additional features of the same samples without labels) in a privacy
preserving manner. However, motivating passive parties to participate in VFL
can be challenging. In this paper, we focus on the problem of allocating
incentives to the passive parties by the active party based on their
contributions to the VFL process. We formulate this problem as a Bankruptcy
Problem, a classical game-theoretic problem whose Talmud-rule solution is known
to coincide with the Nucleolus, and solve it using the Talmud's division rule.
We evaluate our proposed method on synthetic
and real-world datasets and show that it ensures fairness and stability in
incentive allocation among passive parties who contribute their data to the
federated model. Additionally, we compare our method to the existing solution
of calculating Shapley values and show that our approach provides a more
efficient solution with fewer computations.
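The Talmud division rule applied in the abstract has a compact algorithmic form (the Aumann–Maschler generalization of the contested-garment rule): if the estate is at most half the total claims, divide it by constrained equal awards on half-claims; otherwise, give each claimant their full claim minus a constrained-equal loss, again capped at half-claims. The sketch below is a minimal illustration, not the authors' implementation; reading the estate as the active party's incentive budget and the claims as the passive parties' assessed contributions is an assumption about how the rule is instantiated here.

```python
def cea(estate, claims):
    """Constrained equal awards: each claimant receives min(lam, claim),
    with lam chosen by binary search so the awards sum to the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):
        lam = (lo + hi) / 2
        if sum(min(lam, c) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [min(lam, c) for c in claims]

def talmud(estate, claims):
    """Talmud division rule for a bankruptcy problem (estate <= sum(claims))."""
    half = [c / 2 for c in claims]
    total = sum(claims)
    if estate <= total / 2:
        # Scarce estate: constrained equal awards on half-claims.
        return cea(estate, half)
    # Abundant estate: each claimant absorbs a constrained-equal share
    # of the total shortfall, capped at half their claim.
    losses = cea(total - estate, half)
    return [c - l for c, l in zip(claims, losses)]
```

For the Mishnah's classic example with claims (100, 200, 300), an estate of 200 yields awards (50, 75, 75); this rule is known to coincide with the nucleolus of the associated bankruptcy game, which is what links it to the stability claims of the abstract.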
Related papers
- Towards Active Participant-Centric Vertical Federated Learning: Some Representations May Be All You Need
We introduce a novel simplified approach to Vertical Federated Learning (VFL).
Active Participant-Centric VFL allows the active participant to do inference in a non-collaborative fashion.
This method integrates unsupervised representation learning with knowledge distillation to achieve comparable accuracy to traditional VFL methods.
arXiv Detail & Related papers (2024-10-23T08:07:00Z)
- Redefining Contributions: Shapley-Driven Federated Learning
Federated learning (FL) has emerged as a pivotal approach in machine learning.
It is challenging to ensure global model convergence when participants do not contribute equally and/or honestly.
This paper proposes a novel contribution assessment method called ShapFed for fine-grained evaluation of participant contributions in FL.
arXiv Detail & Related papers (2024-06-01T22:40:31Z)
- A Bargaining-based Approach for Feature Trading in Vertical Federated Learning
We propose a bargaining-based feature trading approach in Vertical Federated Learning (VFL) to encourage economically efficient transactions.
Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties.
arXiv Detail & Related papers (2024-02-23T10:21:07Z)
- VFedMH: Vertical Federated Learning for Training Multiple Heterogeneous Models
This paper proposes a novel approach called Vertical federated learning for training multiple Heterogeneous models (VFedMH).
To protect the participants' local embedding values, we propose an embedding protection method based on lightweight blinding factors.
Experiments are conducted to demonstrate that VFedMH can simultaneously train multiple heterogeneous models with heterogeneous optimization and outperform some recent methods in model performance.
arXiv Detail & Related papers (2023-10-20T09:22:51Z)
- Vertical Semi-Federated Learning for Efficient Online Advertising
Semi-VFL (Vertical Semi-Federated Learning) is proposed as a practical, industry-oriented variant of VFL.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
arXiv Detail & Related papers (2022-09-30T17:59:27Z)
- Achieving Model Fairness in Vertical Federated Learning
Vertical federated learning (VFL) enables multiple enterprises possessing non-overlapped features to strengthen their machine learning models without disclosing their private data and model parameters.
VFL suffers from fairness issues, i.e., the learned model may discriminate unfairly against groups defined by sensitive attributes.
We propose a fair VFL framework to tackle this problem.
arXiv Detail & Related papers (2021-09-17T04:40:11Z)
- GTG-Shapley: Efficient and Accurate Participant Contribution Evaluation in Federated Learning
Federated Learning (FL) bridges the gap between collaborative machine learning and preserving data privacy.
It is essential to fairly evaluate participants' contribution to the performance of the final FL model without exposing their private data.
We propose the Guided Truncation Gradient Shapley approach to address this challenge.
arXiv Detail & Related papers (2021-09-05T12:17:00Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- A Principled Approach to Data Valuation for Federated Learning
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
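For contrast with the Shapley-based contribution schemes above: computing exact Shapley values requires evaluating a utility function on every coalition of parties, the exponential cost that approximations such as GTG-Shapley, and the bankruptcy formulation of the main paper, aim to avoid. A minimal sketch with illustrative names; the `value` function is assumed to map a coalition of parties to a model utility such as validation accuracy.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: phi[p] is p's marginal contribution averaged
    over all coalitions of the other players. Needs O(2^n) calls to
    `value`, which is why approximations are used in practice."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            # Weight of coalitions of size k in the Shapley average.
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            for combo in combinations(others, k):
                s = frozenset(combo)
                phi[p] += weight * (value(s | {p}) - value(s))
    return phi
```

A useful sanity check: for an additive game, where a coalition's utility is the sum of per-party weights, each party's Shapley value equals exactly its own weight.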
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.