A Bargaining-based Approach for Feature Trading in Vertical Federated
Learning
- URL: http://arxiv.org/abs/2402.15247v1
- Date: Fri, 23 Feb 2024 10:21:07 GMT
- Title: A Bargaining-based Approach for Feature Trading in Vertical Federated
Learning
- Authors: Yue Cui, Liuyi Yao, Zitao Li, Yaliang Li, Bolin Ding, Xiaofang Zhou
- Abstract summary: We propose a bargaining-based feature trading approach in Vertical Federated Learning (VFL) to encourage economically efficient transactions.
Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties.
- Score: 54.51890573369637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vertical Federated Learning (VFL) has emerged as a popular machine learning
paradigm, enabling model training across the data and the task parties with
different features about the same user set while preserving data privacy. In
a production environment, VFL usually involves one task party and one data party.
Fair and economically efficient feature trading is crucial to the
commercialization of VFL, where the task party is considered as the data
consumer who buys the data party's features. However, current VFL feature
trading practices often price the data party's data as a whole and assume
transactions occur prior to performing VFL. Neglecting the performance
gains resulting from traded features may lead to underpayment and overpayment
issues. In this study, we propose a bargaining-based feature trading approach
in VFL to encourage economically efficient transactions. Our model incorporates
performance gain-based pricing, taking into account the revenue-based
optimization objectives of both parties. We analyze the proposed bargaining
model under perfect and imperfect performance information settings, proving the
existence of an equilibrium that optimizes the parties' objectives. Moreover,
we develop performance gain estimation-based bargaining strategies for
imperfect performance information scenarios and discuss potential security
issues and solutions. Experiments on three real-world datasets demonstrate the
effectiveness of the proposed bargaining model.
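The abstract does not spell out the pricing rule itself; as a minimal illustration of what performance gain-based pricing under a bargaining model could look like, the sketch below assumes a symmetric Nash-bargaining split of the surplus created by the traded features. All function names, parameters, and numbers are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: a Nash-bargaining split of the performance gain
# obtained by adding the data party's features. Names and parameters are
# hypothetical and do not reproduce the paper's bargaining model.

def performance_gain(metric_with_features: float, metric_without_features: float) -> float:
    """Improvement in the task party's metric attributable to the traded features."""
    return max(0.0, metric_with_features - metric_without_features)

def nash_bargaining_price(gain: float,
                          revenue_per_unit_gain: float,
                          data_party_cost: float) -> float:
    """Price that splits the trade surplus equally between the two parties.

    Surplus = task party's revenue from the gain minus the data party's cost.
    With symmetric bargaining power, the data party is paid its cost plus
    half of the remaining surplus.
    """
    revenue = gain * revenue_per_unit_gain
    surplus = revenue - data_party_cost
    if surplus <= 0:
        return float("nan")  # no mutually beneficial trade exists
    return data_party_cost + 0.5 * surplus

# Example: AUC improves from 0.71 to 0.74 and each point of AUC is worth 1000 units.
gain = performance_gain(0.74, 0.71)
print(nash_bargaining_price(gain, revenue_per_unit_gain=1000.0, data_party_cost=5.0))
```

Under these assumptions the agreed price rises with the measured performance gain, which is the property the abstract points to as avoiding under- and overpayment.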
Related papers
- Personalized Federated Learning Techniques: Empirical Analysis [2.9521571597754885]
We empirically evaluate ten prominent pFL techniques across various datasets and data splits, uncovering significant differences in their performance.
Our study emphasizes the critical role of communication efficiency in scaling pFL, demonstrating how it can significantly affect resource usage in real-world deployments.
arXiv Detail & Related papers (2024-09-10T18:16:28Z)
- Vertical Federated Learning Hybrid Local Pre-training [4.31644387824845]
We propose a novel Hybrid Local Pre-training approach, VFLHLP, for Vertical Federated Learning (VFL).
VFLHLP first pre-trains local networks on the local data of participating parties.
Then it utilizes these pre-trained networks to adjust the sub-model for the labeled party or enhance representation learning for other parties during downstream federated learning on aligned data.
arXiv Detail & Related papers (2024-05-20T08:57:39Z)
- LESS: Selecting Influential Data for Targeted Instruction Tuning [64.78894228923619]
We propose LESS, an efficient algorithm to estimate data influences and perform Low-rank gradiEnt Similarity Search for instruction data selection.
We show that training on a LESS-selected 5% of the data can often outperform training on the full dataset across diverse downstream tasks.
Our method goes beyond surface form cues to identify data that exemplifies the necessary reasoning skills for the intended downstream application.
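The capsule above describes LESS only at a high level. The sketch below approximates the gradient-similarity idea: per-example training gradients are compressed with a random projection and ranked by cosine similarity to a target validation gradient. This is an assumed reading for illustration, not the paper's exact pipeline.

```python
import numpy as np

# Illustrative sketch: keep training examples whose (randomly projected,
# low-dimensional) gradients align with the gradient of a target validation set.
# This approximates gradient-similarity selection; it is not the exact LESS method.

rng = np.random.default_rng(0)

def project(grad: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Compress a full gradient into a low-dimensional feature."""
    return proj @ grad

def select_top_fraction(train_grads, val_grad, proj, fraction=0.05):
    """Rank examples by cosine similarity of projected gradients; keep the top fraction."""
    v = project(val_grad, proj)
    v /= np.linalg.norm(v)
    scores = [float(project(g, proj) @ v / (np.linalg.norm(project(g, proj)) + 1e-12))
              for g in train_grads]
    k = max(1, int(len(scores) * fraction))
    return np.argsort(scores)[::-1][:k]

# Toy usage: random "gradients" of dimension 1000 projected to 32 dimensions.
dim, n = 1000, 200
proj = rng.normal(size=(32, dim)) / np.sqrt(32)
train_grads = [rng.normal(size=dim) for _ in range(n)]
val_grad = rng.normal(size=dim)
print(select_top_fraction(train_grads, val_grad, proj, fraction=0.05))
```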
arXiv Detail & Related papers (2024-02-06T19:18:04Z)
- An Auction-based Marketplace for Model Trading in Federated Learning [54.79736037670377]
Federated learning (FL) is increasingly recognized for its efficacy in training models using locally distributed data.
We frame FL as a marketplace of models, where clients act as both buyers and sellers.
We propose an auction-based solution to ensure proper pricing based on performance gain.
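The capsule does not describe the auction mechanism in detail; purely as an illustration of performance gain-based pricing via an auction, the sketch below uses a sealed-bid second-price (Vickrey) auction in which clients bid the monetary value of their expected performance gain. Names and numbers are hypothetical.

```python
# Illustrative sketch: a sealed-bid second-price auction for a model.
# Each client bids its expected performance gain converted to monetary value;
# the highest bidder wins and pays the second-highest bid. This is a stand-in
# example, not the paper's exact mechanism.

def second_price_auction(bids: dict) -> tuple:
    """Return (winning bidder, price paid). Requires at least two bids."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

# Bids derived from each client's expected accuracy gain times its value per point.
bids = {"client_a": 0.04 * 800, "client_b": 0.02 * 800, "client_c": 0.05 * 800}
print(second_price_auction(bids))  # ('client_c', 32.0)
```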
arXiv Detail & Related papers (2024-02-02T07:25:53Z)
- LoBaSS: Gauging Learnability in Supervised Fine-tuning Data [64.27898739929734]
Supervised Fine-Tuning (SFT) serves as a crucial phase in aligning Large Language Models (LLMs) to specific task prerequisites.
We introduce a new dimension in SFT data selection: learnability.
We present the Loss Based SFT Data Selection (LoBaSS) method, utilizing data learnability as the principal criterion for the selection of SFT data.
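Learnability is only named, not defined, in the capsule above. One common way to operationalize a loss-based criterion is to score each example by how much its loss drops between a reference model and the model being aligned, then keep the highest-scoring examples. The sketch below follows that assumed reading; it is not LoBaSS's exact formula.

```python
# Illustrative loss-based data selection sketch (assumed reading of
# "learnability"): an example is more valuable if the target model reduces its
# loss relative to a reference model, i.e. it is neither already mastered nor hopeless.

def learnability_scores(reference_losses, target_losses):
    """Score = reference loss minus target loss, per example."""
    return [ref - tgt for ref, tgt in zip(reference_losses, target_losses)]

def select(reference_losses, target_losses, k):
    """Indices of the k examples with the highest learnability scores."""
    scores = learnability_scores(reference_losses, target_losses)
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Toy usage: four SFT examples with per-example losses from two models.
print(select(reference_losses=[2.1, 0.4, 3.0, 1.2],
             target_losses=[0.9, 0.3, 2.9, 0.5], k=2))  # -> [0, 3]
```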
arXiv Detail & Related papers (2023-10-16T07:26:24Z)
- Incentive Allocation in Vertical Federated Learning Based on Bankruptcy Problem [0.0]
Vertical federated learning (VFL) is a promising approach for collaboratively training machine learning models using private data partitioned vertically across different parties.
In this paper, we focus on the problem of allocating incentives to the passive parties by the active party based on their contributions to the VFL process.
We formulate this problem as a variant of the Bankruptcy Problem, a cooperative game-theoretic setting related to the Nucleolus concept, and solve it using the Talmudic division rule.
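The Talmudic division rule is fully specified and easy to compute; a small self-contained sketch follows, with each passive party's contribution playing the role of its claim and the incentive budget playing the role of the estate. The mapping of VFL quantities to claims and estate is illustrative here.

```python
# Sketch of the Talmudic (contested-garment consistent) division rule,
# used here to split an incentive budget among claimants. Variable names are
# illustrative; the mapping to VFL contributions is an assumption.

def constrained_equal_awards(claims, amount):
    """Give everyone min(claim, lam), with lam chosen so awards sum to `amount`."""
    lo, hi = 0.0, max(claims)
    for _ in range(100):  # bisection on lam
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < amount:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [min(c, lam) for c in claims]

def talmud_rule(claims, estate):
    """Talmudic division: equal awards on half-claims below the halfway point,
    equal losses on half-claims above it."""
    assert 0 <= estate <= sum(claims)
    half = [c / 2 for c in claims]
    if estate <= sum(half):
        return constrained_equal_awards(half, estate)
    # Above the halfway point everyone keeps at least half a claim and the
    # total loss (sum of claims minus estate) is shared on the half-claims.
    losses = constrained_equal_awards(half, sum(claims) - estate)
    return [c - l for c, l in zip(claims, losses)]

# Classic example: claims 100, 200, 300 with an estate of 200 yield 50, 75, 75.
print([round(x, 2) for x in talmud_rule([100, 200, 300], 200)])
```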
arXiv Detail & Related papers (2023-07-07T11:08:18Z)
- VFed-SSD: Towards Practical Vertical Federated Advertising [53.08038962443853]
We propose a semi-supervised split distillation framework VFed-SSD to alleviate the two limitations.
Specifically, we develop a self-supervised task MatchedPair Detection (MPD) to exploit the vertically partitioned unlabeled data.
Our framework provides an efficient federation-enhanced solution for real-time display advertising with minimal deploying cost and significant performance lift.
arXiv Detail & Related papers (2022-05-31T17:45:30Z)
- Data Valuation for Vertical Federated Learning: A Model-free and Privacy-preserving Method [14.451118953357605]
FedValue is a privacy-preserving, task-specific but model-free data valuation method for Vertical Federated Learning (VFL).
We first introduce a novel data valuation metric, namely MShapley-CMI. The metric evaluates a data party's contribution to a predictive analytics task without the need to execute a machine learning model.
Next, we develop an innovative federated method that calculates the MShapley-CMI value for each data party in a privacy-preserving manner.
arXiv Detail & Related papers (2021-12-15T02:42:28Z)
- A Principled Approach to Data Valuation for Federated Learning [73.19984041333599]
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources.
The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion.
This paper proposes a variant of the SV amenable to FL, which we call the federated Shapley value.
arXiv Detail & Related papers (2020-09-14T04:37:54Z)
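For intuition behind the payoff scheme that the federated Shapley value approximates, the classic (non-federated) Shapley value over data parties can be computed exactly when the number of parties is small. The sketch below evaluates a caller-supplied utility function on every coalition; the utility function shown is a toy stand-in, not the paper's construction, and the sketch does not model the round-by-round federated evaluation.

```python
from itertools import combinations
from math import factorial

# Illustrative exact Shapley-value computation over a small set of data parties.
# `utility` maps a coalition (set of parties) to model performance; the federated
# variant organizes these evaluations so data never leaves its owner, which this
# toy sketch does not attempt to reproduce.

def shapley_values(parties, utility):
    n = len(parties)
    values = {p: 0.0 for p in parties}
    for p in parties:
        others = [q for q in parties if q != p]
        for r in range(n):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                values[p] += weight * (utility(s | {p}) - utility(s))
    return values

# Toy utility: each party contributes a fixed accuracy, plus a small synergy bonus
# when parties A and B are both present.
base = {"A": 0.05, "B": 0.03, "C": 0.01}
def utility(coalition):
    bonus = 0.02 if {"A", "B"} <= coalition else 0.0
    return sum(base[p] for p in coalition) + bonus

print(shapley_values(["A", "B", "C"], utility))  # A and B split the synergy bonus
```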