Quadratic Functional Encryption for Secure Training in Vertical
Federated Learning
- URL: http://arxiv.org/abs/2305.08358v2
- Date: Mon, 19 Jun 2023 10:01:55 GMT
- Title: Quadratic Functional Encryption for Secure Training in Vertical
Federated Learning
- Authors: Shuangyi Chen, Anuja Modi, Shweta Agrawal, Ashish Khisti
- Abstract summary: Vertical federated learning (VFL) enables the collaborative training of machine learning (ML) models in settings where the data is distributed amongst multiple parties.
In VFL, the labels are available to a single party and the complete feature set is formed only when data from all parties is combined.
Recently, Xu et al. proposed a new framework called FedV for secure gradient computation for VFL using multi-input functional encryption.
- Score: 26.188083606166806
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Vertical federated learning (VFL) enables the collaborative training of
machine learning (ML) models in settings where the data is distributed amongst
multiple parties who wish to protect the privacy of their individual data.
Notably, in VFL, the labels are available to a single party and the complete
feature set is formed only when data from all parties is combined. Recently, Xu
et al. proposed a new framework called FedV for secure gradient computation for
VFL using multi-input functional encryption. In this work, we explain how some
of the information leakage in Xu et al. can be avoided by using quadratic
functional encryption when training generalized linear models for vertical
federated learning.
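To see why quadratic functional encryption suffices here, note that under the Taylor linearization of the sigmoid commonly used in this line of work, sigma(u) ~ 1/2 + u/4, each coordinate of the logistic-regression gradient, (1/2 + (w.x)/4 - y) * x_k, is a quadratic function of the private inputs (x, y). The numpy sketch below is an illustrative plaintext check of this algebra, not the paper's construction: it packs each gradient coordinate into a quadratic form z^T F_k z, which is exactly the function class a quadratic FE scheme evaluates on ciphertexts.

```python
# Plaintext sketch (illustration only, not the authors' protocol): each
# coordinate of the Taylor-linearized logistic gradient is a quadratic
# form in z = (features, label, 1), so quadratic FE can evaluate it on
# encrypted inputs without revealing features or labels.
import numpy as np

rng = np.random.default_rng(0)

x_a = rng.normal(size=3)            # party A's features
x_b = rng.normal(size=2)            # party B's features
y = 1.0                             # label, held by a single party
x = np.concatenate([x_a, x_b])      # full record (never assembled in real VFL)
w = rng.normal(size=x.size)         # current model weights

z = np.concatenate([x, [y, 1.0]])   # constant-1 slot absorbs linear terms
d = x.size

def F(k: int) -> np.ndarray:
    """Coefficient matrix with z @ F(k) @ z == gradient coordinate k."""
    M = np.zeros((d + 2, d + 2))
    M[:d, k] = w / 4.0              # (w.x)/4 * x_k terms
    M[d, k] = -1.0                  # -y * x_k term
    M[d + 1, k] = 0.5               # +x_k/2 term via the constant slot
    return M

grad_qfe = np.array([z @ F(k) @ z for k in range(d)])
grad_ref = (0.5 + (w @ x) / 4.0 - y) * x    # direct gradient formula
assert np.allclose(grad_qfe, grad_ref)
```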
Related papers
- Hijack Vertical Federated Learning Models As One Party [43.095945038428404] (arXiv, 2022-12-01)
Vertical federated learning (VFL) is an emerging paradigm that enables collaborators to build machine learning models together in a distributed fashion.
Existing VFL frameworks use cryptographic techniques to provide data privacy and security guarantees.
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144] (arXiv, 2022-11-20)
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
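For intuition about the contrastive objective mentioned above, here is a generic InfoNCE-style loss between two clients' embeddings (an assumed simplification, not necessarily the paper's exact formulation): each client's embedding of a sample is pulled toward a peer's embedding of the same sample and pushed away from other samples.

```python
# Generic InfoNCE-style contrastive distillation between two clients
# (assumed simplification; the paper's exact loss may differ).
import numpy as np

def info_nce(z_local: np.ndarray, z_peer: np.ndarray, tau: float = 0.1) -> float:
    """Mean contrastive loss; row i of both matrices embeds the same sample."""
    zl = z_local / np.linalg.norm(z_local, axis=1, keepdims=True)
    zp = z_peer / np.linalg.norm(z_peer, axis=1, keepdims=True)
    logits = zl @ zp.T / tau                        # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_p.diagonal().mean())          # positives on the diagonal

rng = np.random.default_rng(1)
z_a = rng.normal(size=(8, 16))                      # client A's embeddings
z_b = z_a + 0.1 * rng.normal(size=(8, 16))          # client B's, roughly aligned
print(info_nce(z_a, z_b))                           # small when rows agree
```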
- BlindFL: Vertical Federated Machine Learning without Peeking into Your Data [20.048695060411774] (arXiv, 2022-06-16)
Vertical federated learning (VFL) describes a case where ML models are built upon the private data of different participating parties.
We introduce BlindFL, a novel framework for VFL training and inference.
We show that BlindFL supports diverse datasets and models efficiently whilst achieving robust privacy guarantees.
- Game of Privacy: Towards Better Federated Platform Collaboration under Privacy Restriction [95.12382372267724] (arXiv, 2022-02-10)
Vertical federated learning (VFL) aims to train models from cross-silo data with different feature spaces stored on different platforms.
Due to the intrinsic privacy risks of federated learning, the total amount of involved data may be constrained.
We propose to incentivize different platforms through a reciprocal collaboration, where all platforms can exploit multi-platform information in the VFL framework to benefit their own tasks.
- EFMVFL: An Efficient and Flexible Multi-party Vertical Federated Learning without a Third Party [7.873139977724476] (arXiv, 2022-01-17)
Federated learning allows multiple participants to conduct joint modeling without disclosing their local data.
We propose a novel VFL framework without a third party called EFMVFL.
Our framework is secure, more efficient, and straightforward to extend to multiple participants.
- Coding for Straggler Mitigation in Federated Learning [86.98177890676077] (arXiv, 2021-09-30)
The proposed scheme combines one-time padding to preserve privacy and gradient codes to yield resiliency against stragglers.
We show that the proposed scheme achieves training speed-up factors of 6.6 and 9.2 on the MNIST and Fashion-MNIST datasets at accuracies of 95% and 85%, respectively.
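A rough sketch of the one-time-padding ingredient (assumed mechanics for illustration; the paper combines this with gradient coding, which is omitted here): clients mask fixed-point gradients with pads that sum to zero, so the server can recover only the aggregate.

```python
# One-time padding for gradient aggregation (assumed mechanics, not the
# paper's exact scheme): pads cancel in the sum, hiding individual inputs.
import numpy as np

rng = np.random.default_rng(2)
Q = 2**31 - 1                          # modulus for fixed-point arithmetic
SCALE = 2**16                          # fixed-point scaling factor

grads = [rng.normal(size=4) for _ in range(3)]        # per-client gradients

# Pads agreed among clients that cancel over all of them (sum = 0 mod Q).
pads = [rng.integers(0, Q, size=4) for _ in range(2)]
pads.append((-np.sum(pads, axis=0)) % Q)

masked = [(np.round(g * SCALE).astype(np.int64) + p) % Q
          for g, p in zip(grads, pads)]               # what the server sees

agg = np.sum(masked, axis=0) % Q                      # pads cancel here
agg = np.where(agg > Q // 2, agg - Q, agg)            # map back to signed
print(agg / SCALE, np.sum(grads, axis=0))             # ~equal values
```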
- A Coupled Design of Exploiting Record Similarity for Practical Vertical Federated Learning [47.77625754666018] (arXiv, 2021-06-11)
Federated learning is a learning paradigm to enable collaborative learning across different parties without revealing raw data.
Most existing studies in vertical federated learning disregard the "record linkage" process.
We design a novel coupled training paradigm, FedSim, that integrates one-to-many linkage into the training process.
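A toy rendering of one-to-many linkage (my simplification, not FedSim's actual architecture): each record on one side is soft-matched to several candidate records on the other side, and their features are merged with similarity weights.

```python
# One-to-many soft record linkage with similarity-weighted feature fusion
# (illustrative simplification of the idea, not FedSim itself).
import numpy as np

rng = np.random.default_rng(3)
keys_a = rng.normal(size=(5, 2))       # linkage keys held by party A
keys_b = rng.normal(size=(9, 2))       # linkage keys held by party B
feats_b = rng.normal(size=(9, 4))      # party B's features

K = 3                                  # neighbors kept per record
dists = np.linalg.norm(keys_a[:, None, :] - keys_b[None, :, :], axis=-1)
for i in range(len(keys_a)):
    nn = np.argsort(dists[i])[:K]              # one-to-many candidates
    w = np.exp(-dists[i, nn])                  # similarity weights
    merged = (w[:, None] * feats_b[nn]).sum(0) / w.sum()
    print(i, nn, merged.round(2))              # fused features for record i
```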
- FedV: Privacy-Preserving Federated Learning over Vertically Partitioned Data [12.815996963583641] (arXiv, 2021-03-05)
Federated learning (FL) has been proposed to allow collaborative training of machine learning (ML) models among multiple parties.
We propose FedV, a framework for secure gradient computation in vertical settings for several widely used ML models.
We show a reduction of 10%-70% in training time and 80%-90% in data transfer with respect to state-of-the-art approaches.
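The inner-product functionality underlying this kind of secure gradient computation can be emulated in plaintext as follows (an illustration of the semantics only; the actual scheme performs this on ciphertexts with a functional key for w): the aggregator learns the inner product of w with the combined feature vector, but nothing about any party's individual slice.

```python
# Plaintext emulation of a multi-input inner-product functionality
# (illustration only; a real FE scheme computes this under encryption).
import numpy as np

rng = np.random.default_rng(4)
x_parts = [rng.normal(size=3), rng.normal(size=2)]   # per-party feature slices
w = rng.normal(size=5)                               # aggregator's weight vector
w_a, w_b = np.split(w, [3])                          # matching slices of w

# With a functional key for w, decryption yields only <w, x> -- the sum of
# per-party partial inner products -- never the individual x slices.
partials = [w_a @ x_parts[0], w_b @ x_parts[1]]
print(sum(partials), w @ np.concatenate(x_parts))    # identical values
```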
- Hybrid Differentially Private Federated Learning on Vertically Partitioned Data [41.7896466307821] (arXiv, 2020-09-06)
We present HDP-VFL, the first hybrid differentially private (DP) framework for vertical federated learning (VFL).
We analyze how VFL's intermediate result (IR) can leak private information of the training data during communication.
We mathematically prove that our algorithm not only provides utility guarantees for VFL, but also offers multi-level privacy.
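For intuition about noising an intermediate result before release, here is a standard Gaussian-mechanism calibration (generic DP textbook recipe, not HDP-VFL's specific hybrid analysis):

```python
# Standard Gaussian-mechanism release of an intermediate result (IR):
# generic (eps, delta)-DP calibration, not HDP-VFL's specific analysis.
import numpy as np

def dp_release(ir: np.ndarray, sensitivity: float, eps: float, delta: float,
               rng: np.random.Generator) -> np.ndarray:
    """Add Gaussian noise calibrated to the IR's L2 sensitivity."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return ir + rng.normal(scale=sigma, size=ir.shape)

rng = np.random.default_rng(5)
ir = np.clip(rng.normal(size=4), -1.0, 1.0)   # e.g. clipped partial predictions
print(dp_release(ir, sensitivity=2.0, eps=1.0, delta=1e-5, rng=rng))
```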
- Federated Doubly Stochastic Kernel Learning for Vertically Partitioned Data [93.76907759950608] (arXiv, 2020-08-14)
We propose FDSKL, a federated doubly stochastic kernel learning algorithm for vertically partitioned data.
We show that FDSKL is significantly faster than state-of-the-art federated learning methods when dealing with kernels.
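The "doubly stochastic" idea samples both data mini-batches and random features; a generic random-Fourier-feature sketch (my non-federated version, for intuition only) is:

```python
# Random Fourier features: phi(x) @ phi(y) approximates the RBF kernel
# exp(-||x - y||^2 / 2). Sampling random features *and* data mini-batches
# is the "doubly stochastic" trick; this generic sketch omits federation.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 5))                 # data (single site, for intuition)
D = 2048                                      # number of random features
W = rng.normal(size=(5, D))                   # spectral samples for RBF
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
phi = np.sqrt(2.0 / D) * np.cos(X @ W + b)    # randomized feature map

approx = phi[0] @ phi[1]
exact = np.exp(-0.5 * np.sum((X[0] - X[1]) ** 2))
print(approx, exact)                          # close for large D
```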