BlindFL: Vertical Federated Machine Learning without Peeking into Your
Data
- URL: http://arxiv.org/abs/2206.07975v1
- Date: Thu, 16 Jun 2022 07:26:50 GMT
- Title: BlindFL: Vertical Federated Machine Learning without Peeking into Your
Data
- Authors: Fangcheng Fu, Huanran Xue, Yong Cheng, Yangyu Tao, Bin Cui
- Abstract summary: Vertical federated learning (VFL) describes a case where ML models are built upon the private data of different participating parties.
We introduce BlindFL, a novel framework for VFL training and inference.
We show that BlindFL supports diverse datasets and models efficiently whilst achieving robust privacy guarantees.
- Score: 20.048695060411774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the rising concerns on privacy protection, how to build machine
learning (ML) models over different data sources with security guarantees is
gaining more popularity. Vertical federated learning (VFL) describes such a
case where ML models are built upon the private data of different participating
parties that own disjoint features for the same set of instances, which fits
many real-world collaborative tasks. Nevertheless, we find that existing
solutions for VFL either support limited kinds of input features or suffer from
potential data leakage during the federated execution. To this end, this paper
aims to investigate both the functionality and security of ML models in the VFL
scenario.
To be specific, we introduce BlindFL, a novel framework for VFL training and
inference. First, to address the functionality of VFL models, we propose the
federated source layers to unite the data from different parties. Various kinds
of features can be supported efficiently by the federated source layers,
including dense, sparse, numerical, and categorical features. Second, we
carefully analyze the security during the federated execution and formalize the
privacy requirements. Based on the analysis, we devise secure and accurate
algorithm protocols, and further prove the security guarantees under the
ideal-real simulation paradigm. Extensive experiments show that BlindFL
supports diverse datasets and models efficiently whilst achieving robust privacy
guarantees.
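The vertical split that the federated source layers operate over can be pictured with a toy example. The names and shapes below are illustrative, not BlindFL's actual API; this is only a minimal sketch of the idea that parties combine partial first-layer outputs rather than raw features:

```python
import numpy as np

# Toy vertical split: two parties hold disjoint feature columns for the
# same aligned set of instances. Shapes are illustrative only.
rng = np.random.default_rng(0)
n = 4
party_a = rng.normal(size=(n, 3))   # party A owns 3 features per instance
party_b = rng.normal(size=(n, 2))   # party B owns 2 features per instance

# Each party applies its own local linear map; only the partial outputs
# (not the raw features) leave the party.
w_a = rng.normal(size=(3, 5))
w_b = rng.normal(size=(2, 5))
combined = party_a @ w_a + party_b @ w_b

# Summing the partial outputs equals one linear layer applied to the
# concatenated feature matrix -- which is never materialized in one place.
reference = np.hstack([party_a, party_b]) @ np.vstack([w_a, w_b])
assert np.allclose(combined, reference)
```

The key property sketched here is that the sum of the parties' partial outputs reproduces a single layer over the joint feature matrix, so downstream layers can train without any party ever seeing another party's raw columns.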
Related papers
- UIFV: Data Reconstruction Attack in Vertical Federated Learning [5.404398887781436]
Vertical Federated Learning (VFL) facilitates collaborative machine learning without the need for participants to share raw private data.
Recent studies have revealed privacy risks where adversaries might reconstruct sensitive features through data leakage during the learning process.
Our work exposes severe privacy vulnerabilities within VFL systems that pose real threats to practical VFL applications.
arXiv Detail & Related papers (2024-06-18T13:18:52Z) - Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z) - Differentially Private Wireless Federated Learning Using Orthogonal
Sequences [56.52483669820023]
We propose a privacy-preserving uplink over-the-air computation (AirComp) method, termed FLORAS.
We prove that FLORAS offers both item-level and client-level differential privacy guarantees.
A new FL convergence bound is derived which, combined with the privacy guarantees, allows for a smooth tradeoff between the achieved convergence rate and differential privacy levels.
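As a rough illustration of how such item-level guarantees are typically obtained, the sketch below shows the standard clipped Gaussian mechanism; it is not the FLORAS over-the-air construction, and the parameter names are illustrative:

```python
import numpy as np

# Standard Gaussian-mechanism sketch: clip each client's update, sum them,
# and add noise calibrated to the clipping norm. The sigma formula is the
# classic analytic bound (valid for small epsilon).
def private_aggregate(updates, clip_norm, epsilon, delta, rng):
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in updates]
    total = np.sum(clipped, axis=0)
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return total + rng.normal(scale=sigma, size=total.shape)

rng = np.random.default_rng(0)
updates = [np.ones(3), 2.0 * np.ones(3)]
noisy = private_aggregate(updates, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=rng)
assert noisy.shape == (3,)
```

Raising epsilon shrinks sigma and improves the aggregate's accuracy, which is the convergence-vs-privacy tradeoff the FLORAS bound formalizes.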
arXiv Detail & Related papers (2023-06-14T06:35:10Z) - Quadratic Functional Encryption for Secure Training in Vertical
Federated Learning [26.188083606166806]
Vertical federated learning (VFL) enables the collaborative training of machine learning (ML) models in settings where the data is distributed amongst multiple parties.
In VFL, the labels are available to a single party and the complete feature set is formed only when data from all parties is combined.
Recently, Xu et al. proposed a new framework called FedV for secure gradient computation for VFL using multi-input functional encryption.
arXiv Detail & Related papers (2023-05-15T05:31:35Z) - FedSDG-FS: Efficient and Secure Feature Selection for Vertical Federated
Learning [21.79965380400454]
Vertical Federated Learning (VFL) enables multiple data owners, each holding a different subset of features about largely overlapping sets of data samples, to jointly train a useful global model.
Feature selection (FS) is important to VFL. It remains an open research problem, as existing FS works designed for VFL either assume prior knowledge of the number of noisy features or of the post-training threshold of useful features.
We propose the Federated Dual-Gate based Feature Selection (FedSDG-FS) approach. It consists of a Gaussian dual-gate to efficiently approximate the probability of a feature being selected, with privacy protection.
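A Gaussian gate of this flavor can be sketched as follows. The gate form and parameter names are illustrative, in the general spirit of stochastic-gate feature selection rather than FedSDG-FS's exact construction:

```python
import math

def selection_probability(mu, sigma=0.5):
    # Gate z = clip(mu + sigma * eps, 0, 1) with eps ~ N(0, 1):
    # the gate is open (feature selected) iff mu + sigma * eps > 0,
    # which happens with probability Phi(mu / sigma), where Phi is the
    # standard normal CDF.
    return 0.5 * (1.0 + math.erf(mu / (sigma * math.sqrt(2.0))))

assert abs(selection_probability(0.0) - 0.5) < 1e-9   # undecided feature
assert selection_probability(2.0) > 0.99              # strongly selected
```

Because the selection probability is a smooth function of the learnable mean mu, it can be trained by gradient descent alongside the model, without a preset feature count or threshold.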
arXiv Detail & Related papers (2023-02-21T03:09:45Z) - Desirable Companion for Vertical Federated Learning: New Zeroth-Order
Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability.
arXiv Detail & Related papers (2022-03-19T13:55:47Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - EFMVFL: An Efficient and Flexible Multi-party Vertical Federated
Learning without a Third Party [7.873139977724476]
Federated learning allows multiple participants to conduct joint modeling without disclosing their local data.
We propose a novel VFL framework without a third party called EFMVFL.
Our framework is secure, more efficient, and easy to extend to multiple participants.
arXiv Detail & Related papers (2022-01-17T07:06:21Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Secure Bilevel Asynchronous Vertical Federated Learning with Backward
Updating [159.48259714642447]
Vertical federated learning (VFL) attracts increasing attention due to the demands of multi-party collaborative modeling and concerns of privacy leakage.
We propose a novel bilevel asynchronous parallel architecture (VFB$^2$), under which three new algorithms are proposed.
arXiv Detail & Related papers (2021-03-01T12:34:53Z) - Hybrid Differentially Private Federated Learning on Vertically
Partitioned Data [41.7896466307821]
We present HDP-VFL, the first hybrid differentially private (DP) framework for vertical federated learning (VFL)
We analyze how VFL's intermediate result (IR) can leak private information of the training data during communication.
We mathematically prove that our algorithm not only provides utility guarantees for VFL, but also offers multi-level privacy.
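The IR leakage risk motivating HDP-VFL can be demonstrated in the simplest linear case. This is a deliberately worst-case toy (square, invertible, known weights), not HDP-VFL's threat model verbatim:

```python
import numpy as np

# If a party sends the intermediate result ir = x @ w in the clear, and its
# local weights w happen to be square, invertible, and known to the receiver,
# the private features x are exactly recoverable from the IR alone.
rng = np.random.default_rng(1)
x = rng.normal(size=(1, 4))    # one party's private feature row
w = rng.normal(size=(4, 4))    # local weights (invertible here)
ir = x @ w                     # intermediate result exchanged during VFL
x_reconstructed = ir @ np.linalg.inv(w)
assert np.allclose(x, x_reconstructed)
```

Perturbing the IR with calibrated DP noise before communication breaks this exact recovery, at a utility cost that the paper's multi-level privacy analysis quantifies.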
arXiv Detail & Related papers (2020-09-06T16:06:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.