EFMVFL: An Efficient and Flexible Multi-party Vertical Federated
Learning without a Third Party
- URL: http://arxiv.org/abs/2201.06244v1
- Date: Mon, 17 Jan 2022 07:06:21 GMT
- Title: EFMVFL: An Efficient and Flexible Multi-party Vertical Federated
Learning without a Third Party
- Authors: Yimin Huang, Xinyu Feng, Wanwan Wang, Hao He, Yukun Wang, Ming Yao
- Abstract summary: Federated learning allows multiple participants to conduct joint modeling without disclosing their local data.
We propose EFMVFL, a novel VFL framework that requires no third party.
Our framework is secure, more efficient, and easy to extend to multiple participants.
- Score: 7.873139977724476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning allows multiple participants to conduct joint modeling
without disclosing their local data. Vertical federated learning (VFL) handles
the situation where participants share the same ID space and different feature
spaces. In most VFL frameworks, to protect the security and privacy of the
participants' local data, a third party is needed to generate homomorphic
encryption key pairs and perform decryption operations. In this way, the third
party is granted the right to decrypt information related to model parameters.
However, it is not easy to find such a trusted entity in the real world.
Existing methods for solving this problem are either communication-intensive or
unsuitable for multi-party scenarios. By combining secret sharing and
homomorphic encryption, we propose a novel VFL framework without a third party
called EFMVFL, which supports flexible expansion to multiple participants with
low communication overhead and is applicable to generalized linear models. We
give instantiations of our framework under logistic regression and Poisson
regression. Theoretical analysis and experiments show that our framework is
secure, more efficient, and easy to extend to multiple participants.
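
A note on the mechanism: the abstract names two building blocks, additive secret sharing and homomorphic encryption. Below is a minimal two-party sketch of how the two can be combined for the logistic-regression instantiation. It is illustrative only, not the authors' exact EFMVFL protocol; it assumes the python-paillier (phe) library and a standard linearized sigmoid, and all variable names are inventions of this sketch. The label holder generates its own keypair, so no third party ever holds a decryption key, and each party observes only additive shares or masked values.

```python
# Illustrative only: a toy two-party step of secret sharing + Paillier HE
# for VFL logistic regression without a third party. Names and flow are
# assumptions for this sketch, not identifiers from the EFMVFL paper.
import numpy as np
from phe import paillier  # pip install phe (python-paillier)

rng = np.random.default_rng(0)
n, d_A, d_B = 4, 2, 3

# Party A holds the labels and generates its own keypair -> no third party.
pub_A, priv_A = paillier.generate_paillier_keypair(n_length=1024)

X_A, w_A = rng.normal(size=(n, d_A)), np.zeros(d_A)  # party A's features
X_B, w_B = rng.normal(size=(n, d_B)), np.zeros(d_B)  # party B's features
y = np.array([0.0, 1.0, 1.0, 0.0])                   # labels, held by A only

# --- Secret sharing: B additively splits its partial predictions. ---
z_B = X_B @ w_B
share_kept = rng.normal(size=n)     # B keeps this share
share_sent = z_B - share_kept       # A receives only this random-looking share
enc_share_kept = [pub_A.encrypt(v) for v in share_kept]  # under A's key

# --- Party A: encrypted predictions and residuals; A never sees z_B. ---
z_A = X_A @ w_A
enc_z = [pub_A.encrypt(z_A[i] + share_sent[i]) + enc_share_kept[i]
         for i in range(n)]
# Linearize the sigmoid, sigma(z) ~ 0.5 + z/4, so the residual sigma(z) - y
# stays computable under an additively homomorphic scheme.
enc_residual = [enc_z[i] * 0.25 + (0.5 - y[i]) for i in range(n)]

# --- Party B: encrypted gradient for its features, blinded by a mask. ---
mask = rng.normal(size=d_B)
enc_grad = []
for j in range(d_B):
    acc = pub_A.encrypt(mask[j])                 # blind the j-th entry
    for i in range(n):
        acc = acc + enc_residual[i] * X_B[i, j]  # homomorphic dot product
    enc_grad.append(acc)

# A decrypts but sees only mask-blinded values; B removes its own mask.
blinded = np.array([priv_A.decrypt(g) for g in enc_grad])
grad_B = (blinded - mask) / n
w_B -= 0.1 * grad_B                              # B's local gradient step
print("party B gradient step:", grad_B)
```

The Poisson-regression instantiation mentioned in the abstract would follow the same pattern with a linearized exponential link in place of the sigmoid; the protocol's actual sharing and masking steps, communication costs, and security argument are given in the paper.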
Related papers
- Federated Transformer: Multi-Party Vertical Federated Learning on Practical Fuzzily Linked Data [27.073959939557362]
We introduce the Federated Transformer (FeT), a novel framework that supports multi-party fuzzy VFL with fuzzy identifiers.
Our experiments demonstrate that the FeT surpasses the baseline models by up to 46% in terms of accuracy when scaled to 50 parties.
In two-party fuzzy VFL settings, FeT also shows improved performance and privacy over cutting-edge VFL models.
arXiv Detail & Related papers (2024-10-23T16:00:14Z)
- Towards Active Participant-Centric Vertical Federated Learning: Some Representations May Be All You Need [0.0]
We introduce a novel, simplified approach to Vertical Federated Learning (VFL).
Active Participant-Centric VFL allows the active participant to perform inference in a non-collaborative fashion.
This method integrates unsupervised representation learning with knowledge distillation to achieve comparable accuracy to traditional VFL methods.
arXiv Detail & Related papers (2024-10-23T08:07:00Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Quadratic Functional Encryption for Secure Training in Vertical Federated Learning [26.188083606166806]
Vertical federated learning (VFL) enables the collaborative training of machine learning (ML) models in settings where the data is distributed amongst multiple parties.
In VFL, the labels are available to a single party and the complete feature set is formed only when data from all parties is combined.
Recently, Xu et al. proposed a new framework called FedV for secure gradient computation for VFL using multi-input functional encryption.
arXiv Detail & Related papers (2023-05-15T05:31:35Z)
- Hijack Vertical Federated Learning Models As One Party [43.095945038428404]
Vertical federated learning (VFL) is an emerging paradigm that enables collaborators to build machine learning models together in a distributed fashion.
Existing VFL frameworks use cryptographic techniques to provide data privacy and security guarantees.
arXiv Detail & Related papers (2022-12-01T07:12:38Z)
- BlindFL: Vertical Federated Machine Learning without Peeking into Your Data [20.048695060411774]
Vertical federated learning (VFL) describes a case where ML models are built upon the private data of different participating parties.
We introduce BlindFL, a novel framework for VFL training and inference.
We show that BlindFL supports diverse datasets and models efficiently whilst achieving robust privacy guarantees.
arXiv Detail & Related papers (2022-06-16T07:26:50Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation (a minimal sketch of this mechanism appears after this list).
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- Secure Bilevel Asynchronous Vertical Federated Learning with Backward Updating [159.48259714642447]
Vertical federated learning (VFL) attracts increasing attention due to the demands of multi-party collaborative modeling and concerns of privacy leakage.
We propose a novel bilevel parallel architecture (VF$B^2$), under which three new algorithms, including VF$B^2$, are proposed.
arXiv Detail & Related papers (2021-03-01T12:34:53Z)
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to the model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner (see the sketch after this list).
arXiv Detail & Related papers (2021-01-27T10:10:18Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
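
The RoFL entry above mentions secure aggregation. As a point of reference, here is a minimal sketch of the standard pairwise-masking idea behind such schemes (illustrative only, not RoFL's actual protocol): every pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the server-side sum and individual updates stay hidden.

```python
# Illustrative pairwise-masking secure aggregation (not RoFL's protocol).
import numpy as np

rng = np.random.default_rng(42)
n_clients, dim = 3, 4
updates = [rng.normal(size=dim) for _ in range(n_clients)]  # private updates

# Every unordered pair (i, j), i < j, shares a random mask m_ij; client i
# adds it and client j subtracts it, so all masks cancel in the sum.
pair_masks = {(i, j): rng.normal(size=dim)
              for i in range(n_clients) for j in range(i + 1, n_clients)}

masked_updates = []
for k in range(n_clients):
    masked = updates[k].copy()
    for (i, j), m in pair_masks.items():
        if k == i:
            masked = masked + m
        elif k == j:
            masked = masked - m
    masked_updates.append(masked)    # the server sees only these

aggregate = sum(masked_updates)      # pairwise masks cancel exactly
assert np.allclose(aggregate, sum(updates))
print("server learns only the aggregate:", aggregate)
```

Deployed schemes derive the pairwise masks from key agreement and add dropout recovery, which this sketch omits.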
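
The FedH2L entry above describes mutual distillation over posteriors on a shared seed set. The sketch below illustrates that exchange pattern with two assumed linear models; it is not the exact FedH2L algorithm, whose objective also includes each participant's local task loss.

```python
# Illustrative mutual distillation on a shared seed set (not the exact
# FedH2L algorithm): participants exchange only class posteriors on the
# seed inputs, never parameters or gradients.
import numpy as np

rng = np.random.default_rng(7)
seed_X = rng.normal(size=(8, 5))       # shared public seed set
W1 = rng.normal(size=(5, 3))           # participant 1's local model
W2 = rng.normal(size=(5, 3))           # participant 2's local model

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kl(p, q):
    """Mean KL divergence between the rows of p and q."""
    return np.sum(p * (np.log(p + 1e-9) - np.log(q + 1e-9)), axis=1).mean()

print("posterior gap before:", kl(softmax(seed_X @ W1), softmax(seed_X @ W2)))
for step in range(200):
    p1 = softmax(seed_X @ W1)          # posterior exchanged by party 1
    p2 = softmax(seed_X @ W2)          # posterior exchanged by party 2
    # For the loss KL(p_peer || p_self), the gradient w.r.t. own logits is
    # p_self - p_peer; each party updates only its own weights.
    W1 -= 0.5 * seed_X.T @ (p1 - p2) / len(seed_X)
    W2 -= 0.5 * seed_X.T @ (p2 - p1) / len(seed_X)
print("posterior gap after: ", kl(softmax(seed_X @ W1), softmax(seed_X @ W2)))
```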
This list is automatically generated from the titles and abstracts of the papers on this site.