Cloud-based Federated Boosting for Mobile Crowdsensing
- URL: http://arxiv.org/abs/2005.05304v1
- Date: Sat, 9 May 2020 08:49:01 GMT
- Title: Cloud-based Federated Boosting for Mobile Crowdsensing
- Authors: Zhuzhu Wang, Yilong Yang, Yang Liu, Ximeng Liu, Brij B. Gupta,
Jianfeng Ma
- Abstract summary: We propose a secret sharing based federated learning architecture FedXGB to achieve the privacy-preserving extreme gradient boosting for mobile crowdsensing.
Specifically, we first build a secure classification and regression tree (CART) of XGBoost using secret sharing.
Then, we propose a secure prediction protocol to protect the model privacy of XGBoost in mobile crowdsensing.
- Score: 29.546495197035366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The application of federated extreme gradient boosting to mobile
crowdsensing apps brings several benefits, in particular high efficiency and
strong classification performance. However, it also raises new challenges for
data and model privacy protection. Besides being vulnerable to Generative
Adversarial Network (GAN) based user data reconstruction attacks, no existing
architecture considers how to preserve model privacy. In this paper, we
propose a secret sharing based federated learning architecture FedXGB to
achieve the privacy-preserving extreme gradient boosting for mobile
crowdsensing. Specifically, we first build a secure classification and
regression tree (CART) of XGBoost using secret sharing. Then, we propose a
secure prediction protocol to protect the model privacy of XGBoost in mobile
crowdsensing. We conduct a comprehensive theoretical analysis and extensive
experiments to evaluate the security, effectiveness, and efficiency of FedXGB.
The results indicate that FedXGB is secure against honest-but-curious
adversaries and incurs less than 1% accuracy loss compared with the original
XGBoost model.
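To make the secret-sharing idea concrete, here is a minimal sketch, not the paper's actual protocol, of how additive secret sharing lets aggregators combine per-user gradient sums for a candidate split without any of them seeing an individual contribution; the function names and the field modulus are illustrative assumptions.

```python
import random

PRIME = 2**61 - 1  # illustrative field modulus; the paper's parameters differ

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Each user secret-shares its local gradient sum for a candidate split,
# so no single aggregator ever sees an individual contribution.
user_gradient_sums = [12, 7, 23]          # toy per-user statistics
n_aggregators = 3
share_matrix = [share(g, n_aggregators) for g in user_gradient_sums]

# Aggregator j sums the j-th share from every user; only combining all
# partial sums reveals the global statistic needed for the split gain.
partial_sums = [sum(col) % PRIME for col in zip(*share_matrix)]
assert reconstruct(partial_sums) == sum(user_gradient_sums) % PRIME
```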
Related papers
- Bilateral Differentially Private Vertical Federated Boosted Decision Trees [10.952674399412405]
Federated learning is a distributed machine learning paradigm that enables collaborative training across multiple parties while ensuring data privacy.
In this paper, we propose MaskedXGBoost, a variant of vertical federated XGBoost with a bilateral differential privacy guarantee.
Our algorithm's superiority in both utility and efficiency has been validated on multiple datasets.
arXiv Detail & Related papers (2025-04-30T15:37:44Z)
- Secure Federated XGBoost with CUDA-accelerated Homomorphic Encryption via NVIDIA FLARE [6.053716038605071]
Federated learning (FL) enables collaborative model training across decentralized datasets.
NVIDIA FLARE's Federated XGBoost extends the popular XGBoost algorithm to both vertical and horizontal federated settings.
The initial implementation assumed mutual trust over the sharing of intermediate statistics.
We introduce "Secure Federated XGBoost", an efficient solution to mitigate these risks.
arXiv Detail & Related papers (2025-04-04T20:08:24Z)
- Privacy preserving layer partitioning for Deep Neural Network models [0.21470800327528838]
Trusted Execution Environments (TEEs) can introduce significant performance overhead due to additional layers of encryption, decryption, security and integrity checks.
We introduce a layer partitioning technique that offloads part of the computation to the GPU.
We conduct experiments demonstrating the effectiveness of our approach in protecting against input reconstruction attacks developed using a trained conditional Generative Adversarial Network (c-GAN).
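A minimal sketch of the partitioning idea follows; it is illustrative only, with the TEE boundary simulated by keeping the first layers on the CPU, and all module shapes assumed:

```python
import torch
import torch.nn as nn

# Hypothetical split: the first layers would run inside the TEE (here the
# CPU stands in for the enclave); the rest is offloaded to the GPU.
tee_part = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
gpu_part = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 10))

device = "cuda" if torch.cuda.is_available() else "cpu"
gpu_part = gpu_part.to(device)

x = torch.randn(1, 3, 32, 32)         # raw input stays on the trusted side
hidden = tee_part(x)                  # only intermediate activations
logits = gpu_part(hidden.to(device))  # ever leave the enclave
print(logits.shape)  # torch.Size([1, 10])
```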
arXiv Detail & Related papers (2024-04-11T02:39:48Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks [9.494669823390648]
Vehicular networks have always faced data privacy preservation concerns.
Federated learning, however, remains quite vulnerable to model inversion and model poisoning attacks.
We propose the simulated poisoning and inversion network (SPIN), which leverages an optimization approach for reconstructing data.
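The optimization-based reconstruction idea can be sketched as gradient matching; this is a generic DLG-style illustration, not SPIN's actual network, and all shapes are toy assumptions. The attacker optimizes dummy data until its gradients match the victim's observed update:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(4, 2)            # toy victim model
x_true = torch.randn(1, 4)
y_true = torch.tensor([1])               # label assumed known for simplicity

# Gradients the attacker observed (e.g., a shared federated update).
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# Optimize dummy data so its gradients match the observed ones.
x_dummy = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    dummy_loss = F.cross_entropy(model(x_dummy), y_true)
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    match = sum(((dg - tg) ** 2).sum()
                for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    opt.step()

print(torch.dist(x_dummy.detach(), x_true))  # small => reconstruction worked
```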
arXiv Detail & Related papers (2022-11-21T10:07:13Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- FedDef: Defense Against Gradient Leakage in Federated Learning-based Network Intrusion Detection Systems [15.39058389031301]
We propose two privacy evaluation metrics designed for FL-based NIDSs.
We propose FedDef, a novel optimization-based input perturbation defense strategy with a theoretical guarantee.
We experimentally evaluate four existing defenses on four datasets and show that our defense outperforms all the baselines in terms of privacy protection.
arXiv Detail & Related papers (2022-10-08T15:23:30Z)
- Federated Boosted Decision Trees with Differential Privacy [24.66980518231163]
We propose a general framework that captures and extends existing approaches for differentially private decision trees.
We show that with a careful choice of techniques it is possible to achieve very high utility while maintaining strong levels of privacy.
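As a generic illustration of one common building block in differentially private trees, and not this paper's specific mechanism, per-leaf statistics can be released with Laplace noise calibrated to their sensitivity; the function and bounds below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_leaf_value(grad_sum: float, count: int, epsilon: float,
                  grad_bound: float = 1.0) -> float:
    """Release a leaf prediction with Laplace noise scaled to the
    sensitivity of the gradient sum (each user's clipped gradient
    changes it by at most grad_bound under add/remove adjacency)."""
    noisy_grad = grad_sum + rng.laplace(scale=grad_bound / epsilon)
    # A full mechanism would also privatize `count`; omitted for brevity.
    return noisy_grad / max(count, 1)

print(dp_leaf_value(grad_sum=37.2, count=50, epsilon=0.5))
```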
arXiv Detail & Related papers (2022-10-06T13:28:29Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
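The correlated-perturbation idea, sketched generically here rather than as the paper's exact scheme, is that users add noise terms constructed to cancel in the aggregate, so each individual update is masked while the server's sum stays accurate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, dim = 4, 3
updates = rng.normal(size=(n_users, dim))   # toy local gradient updates

# Draw i.i.d. noise, then subtract the per-dimension mean across users
# so the perturbations sum exactly to zero in the aggregate.
noise = rng.normal(scale=5.0, size=(n_users, dim))
noise -= noise.mean(axis=0)

perturbed = updates + noise                  # what each user transmits
print(np.allclose(perturbed.sum(axis=0), updates.sum(axis=0)))  # True
```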
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Defending against Reconstruction Attacks with Rényi Differential Privacy [72.1188520352079]
Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.
Differential privacy is a known solution to such attacks, but is often used with a relatively large privacy budget.
We show that, for the same mechanism, we can derive privacy guarantees against reconstruction attacks that are better than the traditional ones from the literature.
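For background, Rényi differential privacy (RDP) bounds the Rényi divergence of order α between a mechanism's outputs on adjacent datasets; the standard definition and its conversion to approximate DP are stated below, not the paper's refined reconstruction bound:

```latex
\text{$M$ is $(\alpha,\varepsilon)$-RDP if }
D_\alpha\!\left(M(D)\,\middle\|\,M(D')\right) \le \varepsilon
\text{ for all adjacent } D, D',
\qquad
(\alpha,\varepsilon)\text{-RDP}
\;\Longrightarrow\;
\Bigl(\varepsilon + \tfrac{\log(1/\delta)}{\alpha-1},\,\delta\Bigr)\text{-DP}
\;\; \forall\, \delta \in (0,1).
```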
arXiv Detail & Related papers (2022-02-15T18:09:30Z)
- FedXGBoost: Privacy-Preserving XGBoost for Federated Learning [10.304484601250948]
Federated learning is a distributed machine learning framework that enables collaborative training across multiple parties while ensuring data privacy.
We propose two variants of federated XGBoost with privacy guarantees: FedXGBoost-SMM and FedXGBoost-LDP.
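To illustrate the local-differential-privacy flavor, here is a generic Gaussian-mechanism sketch, not a reproduction of FedXGBoost-LDP's concrete construction: each party clips and noises its gradient statistics locally, so raw values never leave the party.

```python
import numpy as np

rng = np.random.default_rng(2)

def ldp_perturb(grads: np.ndarray, clip: float, sigma: float) -> np.ndarray:
    """Clip the gradient vector to L2 norm <= clip, then add Gaussian
    noise locally before anything is shared."""
    norm = np.linalg.norm(grads)
    clipped = grads * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(scale=sigma * clip, size=grads.shape)

local_grads = rng.normal(size=8)
print(ldp_perturb(local_grads, clip=1.0, sigma=1.1))
```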
arXiv Detail & Related papers (2021-06-20T09:17:45Z)
- An Efficient Learning Framework For Federated XGBoost Using Secret Sharing And Distributed Optimization [47.70500612425959]
XGBoost is one of the most widely used machine learning models in the industry due to its superior learning accuracy and efficiency.
It is crucial to deploy a secure and efficient federated XGBoost (FedXGB) model to tackle data isolation issues in big data problems.
In this paper, a multi-party federated XGB learning framework is proposed with a security guarantee, which reshapes XGBoost's split criterion calculation process under a secret sharing setting.
A thorough analysis of model security is provided as well, and multiple numerical results showcase the superiority of the proposed FedXGB.
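For context, the split criterion being reshaped is XGBoost's standard gain formula, a well-known identity rather than anything specific to this paper, which depends on the data only through the gradient sums G and hessian sums H of the left and right child:

```latex
\mathrm{Gain} = \frac{1}{2}\left[
    \frac{G_L^2}{H_L + \lambda}
  + \frac{G_R^2}{H_R + \lambda}
  - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda}
\right] - \gamma,
\qquad
G_{\bullet} = \sum_{i \in I_{\bullet}} g_i,\;
H_{\bullet} = \sum_{i \in I_{\bullet}} h_i .
```

Because the gain depends only on these aggregate sums, secret-sharing the per-party contributions to G and H suffices to evaluate candidate splits without revealing individual statistics.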
arXiv Detail & Related papers (2021-05-12T15:04:18Z)
- Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective [47.23145404191034]
Federated learning (FL) is a popular distributed learning framework that can reduce privacy risks by not explicitly sharing private data.
Recent works demonstrated that sharing model updates makes FL vulnerable to inference attacks.
Our key observation is that data representation leakage from gradients is the essential cause of privacy leakage in FL.
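A one-line calculation illustrates why gradients leak representations; this is a standard observation, not the paper's full analysis. For a fully connected layer, the weight gradient is an outer product containing the layer's input:

```latex
z = Wx + b
\quad\Longrightarrow\quad
\frac{\partial \mathcal{L}}{\partial W}
  = \frac{\partial \mathcal{L}}{\partial z}\, x^{\top},
\qquad
\frac{\partial \mathcal{L}}{\partial b}
  = \frac{\partial \mathcal{L}}{\partial z}.
```

Dividing any nonzero row of the weight gradient by the matching entry of the bias gradient therefore recovers the input representation x exactly for that layer.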
arXiv Detail & Related papers (2020-12-08T20:42:12Z)