Game of Privacy: Towards Better Federated Platform Collaboration under
Privacy Restriction
- URL: http://arxiv.org/abs/2202.05139v1
- Date: Thu, 10 Feb 2022 16:45:40 GMT
- Title: Game of Privacy: Towards Better Federated Platform Collaboration under
Privacy Restriction
- Authors: Chuhan Wu, Fangzhao Wu, Tao Qi, Yanlin Wang, Yongfeng Huang, Xing Xie
- Abstract summary: Vertical federated learning (VFL) aims to train models from cross-silo data with different feature spaces stored on different platforms.
Due to the intrinsic privacy risks of federated learning, the total amount of involved data may be constrained.
We propose to incentivize different platforms through a reciprocal collaboration, where all platforms can exploit multi-platform information in the VFL framework to benefit their own tasks.
- Score: 95.12382372267724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vertical federated learning (VFL) aims to train models from cross-silo data
with different feature spaces stored on different platforms. Existing VFL
methods usually assume all data on each platform can be used for model
training. However, due to the intrinsic privacy risks of federated learning,
the total amount of involved data may be constrained. In addition, existing VFL
studies usually assume only one platform has task labels and can benefit from
the collaboration, making it difficult to attract other platforms to join
the collaborative learning. In this paper, we study the platform
collaboration problem in VFL under privacy constraints. We propose to
incentivize different platforms through a reciprocal collaboration, where
all platforms can exploit
multi-platform information in the VFL framework to benefit their own tasks.
With limited privacy budgets, each platform needs to wisely allocate its data
quotas for collaboration with other platforms. Thereby, they naturally form a
multi-party game. There are two core problems in this game, i.e., how to
appraise other platforms' data value to compute game rewards and how to
optimize policies to solve the game. To evaluate the contributions of other
platforms' data, each platform offers a small amount of "deposit" data to
participate in the VFL. We propose a performance estimation method to predict
the expected model performance under different combinations of
inter-platform data amounts. To solve the game, we propose a platform
negotiation
method that simulates the bargaining among platforms and locally optimizes
their policies via gradient descent. Extensive experiments on two real-world
datasets show that our approach can effectively facilitate the collaborative
exploitation of multi-platform data in VFL under privacy restrictions.
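To make the two components above concrete, here is a minimal sketch (not the
authors' code) of the quota-allocation game. A concave log-utility stands in
for the paper's learned performance estimator, the aligned data in each
pairwise collaboration is assumed to be capped by both platforms' quotas, and
negotiation is simulated by letting each platform run projected gradient
ascent on its own allocation; the budgets, weights, and step sizes are all
illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                  # number of platforms
B = np.array([10.0, 8.0, 12.0])        # per-platform data budgets (hypothetical)
W = rng.uniform(0.5, 2.0, (n, n))      # W[i, j]: value of platform j's data to i
np.fill_diagonal(W, 0.0)

def utility(i, Q):
    """Stand-in performance estimator for platform i's own task: the aligned
    data in the i-j collaboration is limited by both quotas, with diminishing
    returns."""
    aligned = np.minimum(Q[i], Q[:, i])        # min(q_ij, q_ji) for every j
    return float(np.sum(W[i] * np.log1p(aligned)))

def grad(i, Q):
    """Subgradient of utility(i, Q) with respect to platform i's quotas Q[i]."""
    binding = (Q[i] <= Q[:, i]).astype(float)  # 1 where q_ij is the binding quota
    return W[i] * binding / (1.0 + np.minimum(Q[i], Q[:, i]))

def project(q, budget):
    """Project onto {q >= 0, sum(q) <= budget} (simple scaling heuristic)."""
    q = np.clip(q, 0.0, None)
    return q * (budget / q.sum()) if q.sum() > budget else q

Q = np.tile(B[:, None] / n, (1, n))    # start from uniform allocations
np.fill_diagonal(Q, 0.0)
for _ in range(200):                   # negotiation rounds
    for i in range(n):                 # each platform updates only its own row
        Q[i] = project(Q[i] + 0.1 * grad(i, Q), B[i])
        Q[i, i] = 0.0

for i in range(n):
    print(f"platform {i}: quotas={np.round(Q[i], 2)}, utility={utility(i, Q):.3f}")
```
In this toy setting, platforms shift quota toward partners whose data is most
valuable to them, but gains saturate once a partner's reciprocal quota becomes
the binding constraint.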
Related papers
- FedUD: Exploiting Unaligned Data for Cross-Platform Federated Click-Through Rate Prediction [3.221675775415278]
Click-through rate (CTR) prediction plays an important role in online advertising platforms.
Due to privacy concerns, data from different platforms cannot be uploaded to a server for centralized model training.
We propose FedUD, which is able to exploit unaligned data, in addition to aligned data, for more accurate federated CTR prediction.
arXiv Detail & Related papers (2024-07-26T02:48:32Z)
- A Bargaining-based Approach for Feature Trading in Vertical Federated Learning [54.51890573369637]
We propose a bargaining-based feature trading approach in Vertical Federated Learning (VFL) to encourage economically efficient transactions.
Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties.
arXiv Detail & Related papers (2024-02-23T10:21:07Z)
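A toy sketch of the performance-gain-based pricing idea from the bargaining
paper above, assuming (for illustration only) equal bargaining power and a
standard Nash split of the surplus; `revenue_per_gain` and `seller_cost` are
hypothetical parameters, not quantities from the paper.
```python
def nash_bargain_price(perf_gain, revenue_per_gain, seller_cost):
    """Price a feature batch by the buyer's estimated revenue gain; with
    equal bargaining power the Nash solution splits the surplus evenly."""
    buyer_value = perf_gain * revenue_per_gain     # revenue-based objective
    if buyer_value <= seller_cost:
        return None                                # no mutually beneficial trade
    return seller_cost + 0.5 * (buyer_value - seller_cost)

# e.g. +0.8 points of AUC valued at $1000/point, seller-side cost of $300:
print(nash_bargain_price(perf_gain=0.8, revenue_per_gain=1000.0, seller_cost=300.0))
```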
- Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to private data distribution.
arXiv Detail & Related papers (2023-05-20T07:55:58Z)
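The distribution-matching step described above can be pictured roughly as
follows: select the public examples closest to a privatized statistic of the
private data. This is a loose sketch; the paper's algorithm and its formal
differential-privacy accounting are more involved, and the embedding space,
noise scale, and selection rule here are assumptions.
```python
import numpy as np

def select_public(public_emb, private_emb, k, noise_scale=0.1, seed=0):
    """Pick the k public examples whose embeddings lie closest to a noised
    (DP-style) mean of the private embeddings."""
    rng = np.random.default_rng(seed)
    center = private_emb.mean(axis=0)
    center = center + rng.normal(0.0, noise_scale, center.shape)  # privatized statistic
    dists = np.linalg.norm(public_emb - center, axis=1)
    return np.argsort(dists)[:k]

pub = np.random.default_rng(1).normal(size=(1000, 16))            # public pool
priv = np.random.default_rng(2).normal(loc=0.5, size=(200, 16))   # private data
print(select_public(pub, priv, k=5))
```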
- Quadratic Functional Encryption for Secure Training in Vertical Federated Learning [26.188083606166806]
Vertical federated learning (VFL) enables the collaborative training of machine learning (ML) models in settings where the data is distributed amongst multiple parties.
In VFL, the labels are available to a single party and the complete feature set is formed only when data from all parties is combined.
Recently, Xu et al. proposed a new framework called FedV for secure gradient computation for VFL using multi-input functional encryption.
arXiv Detail & Related papers (2023-05-15T05:31:35Z)
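To see what the secure computation in FedV must produce, consider a linear
model whose weights and features are split across two parties: the per-sample
gradient needs the combined score, which is exactly the kind of inner-product
sum that multi-input functional encryption can evaluate without revealing the
inputs. The plaintext sketch below only illustrates the target arithmetic; it
performs no encryption.
```python
import numpy as np

rng = np.random.default_rng(0)
x_a, x_b = rng.normal(size=5), rng.normal(size=3)  # party A / party B features
w_a, w_b = rng.normal(size=5), rng.normal(size=3)  # each party's model slice
y = 1.0                                            # label, held by party A only

score = x_a @ w_a + x_b @ w_b        # FedV computes this sum under encryption
pred = 1.0 / (1.0 + np.exp(-score))  # logistic regression prediction
grad_a = (pred - y) * x_a            # party A's gradient for its own slice
print(f"score={score:.3f}, grad_a={np.round(grad_a, 3)}")
```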
- Vertical Federated Learning: A Structured Literature Review [0.0]
Federated learning (FL) has emerged as a promising distributed learning paradigm with the added advantage of data privacy.
In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL.
arXiv Detail & Related papers (2022-12-01T16:16:41Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
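The contrastive distillation idea above can be sketched with an InfoNCE-style
loss: a client pulls its representation of each sample toward a peer's
representation of the same sample and pushes it away from other samples. This
is an illustrative stand-in, not the paper's exact loss or training protocol.
```python
import numpy as np

def info_nce(z_local, z_peer, temperature=0.1):
    """InfoNCE loss: row i of z_local should match row i of z_peer (positive
    pair); every other row serves as a negative."""
    z_l = z_local / np.linalg.norm(z_local, axis=1, keepdims=True)
    z_p = z_peer / np.linalg.norm(z_peer, axis=1, keepdims=True)
    logits = z_l @ z_p.T / temperature             # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # positives on the diagonal

a = np.random.default_rng(0).normal(size=(8, 32))
print(info_nce(a + 0.05, a))   # near-identical views -> loss close to zero
```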
- BlindFL: Vertical Federated Machine Learning without Peeking into Your Data [20.048695060411774]
Vertical federated learning (VFL) describes a setting where ML models are built upon the private data of different participating parties.
We introduce BlindFL, a novel framework for VFL training and inference.
We show that BlindFL supports diverse datasets and models efficiently whilst achieving robust privacy guarantees.
arXiv Detail & Related papers (2022-06-16T07:26:50Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
For protecting user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
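FairVFL's goal of stripping private information from the unified
representation can be illustrated with a much simpler linear stand-in:
repeatedly fit an adversarial probe for the private attribute and project out
the direction it exploits. The paper's actual method is contrastive
adversarial learning; this sketch only demonstrates the objective of driving
the adversary toward chance accuracy.
```python
import numpy as np

def fit_probe(z, s, steps=300, lr=0.1):
    """Logistic-regression adversary predicting private attribute s from z."""
    v = np.zeros(z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(z @ v)))
        v += lr * z.T @ (s - p) / len(s)
    return v

rng = np.random.default_rng(0)
z = rng.normal(size=(256, 16))                       # unified representations
s = (z[:, 0] + 0.1 * rng.normal(size=256) > 0).astype(float)  # private attribute

for it in range(3):                                  # removal rounds
    v = fit_probe(z, s)
    acc = np.mean((z @ v > 0) == (s > 0.5))
    print(f"round {it}: adversary accuracy = {acc:.2f}")
    u = v / (np.linalg.norm(v) + 1e-9)
    z = z - np.outer(z @ u, u)                       # project out the leak
```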
- FedNLP: A Research Platform for Federated Learning in Natural Language Processing [55.01246123092445]
We present FedNLP, a research platform for federated learning in NLP.
FedNLP supports various popular task formulations in NLP such as text classification, sequence tagging, question answering, seq2seq generation, and language modeling.
Preliminary experiments with FedNLP reveal that there exists a large performance gap between learning on decentralized and centralized datasets.
arXiv Detail & Related papers (2021-04-18T11:04:49Z)
- Hybrid Differentially Private Federated Learning on Vertically Partitioned Data [41.7896466307821]
We present HDP-VFL, the first hybrid differentially private (DP) framework for vertical federated learning (VFL).
We analyze how VFL's intermediate result (IR) can leak private information of the training data during communication.
We mathematically prove that our algorithm not only provides utility guarantees for VFL, but also offers multi-level privacy.
arXiv Detail & Related papers (2020-09-06T16:06:04Z)
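The leakage mitigation in HDP-VFL can be pictured as clipping and noising the
intermediate result (IR) before it leaves a party, in the style of the
Gaussian mechanism. The clipping bound and noise scale below are placeholders
and do not reflect the paper's multi-level privacy calibration.
```python
import numpy as np

def privatize_ir(ir, clip_norm=1.0, sigma=0.5, seed=0):
    """Clip each IR row to bound its sensitivity, then add Gaussian noise
    before the row is sent to the other party."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(ir, axis=1, keepdims=True)
    clipped = ir * np.minimum(1.0, clip_norm / (norms + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, ir.shape)

ir = np.random.default_rng(1).normal(size=(4, 8))  # e.g. partial scores X_A @ w_A
print(np.round(privatize_ir(ir), 2))
```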