SecureBoost Hyperparameter Tuning via Multi-Objective Federated Learning
- URL: http://arxiv.org/abs/2307.10579v3
- Date: Tue, 8 Aug 2023 01:32:52 GMT
- Title: SecureBoost Hyperparameter Tuning via Multi-Objective Federated Learning
- Authors: Ziyao Ren, Yan Kang, Lixin Fan, Linghua Yang, Yongxin Tong and Qiang
Yang
- Abstract summary: SecureBoost is a tree-boosting algorithm leveraging homomorphic encryption to protect data privacy in the vertical federated learning setting.
SecureBoost suffers from high computational complexity and a risk of label leakage.
We propose a Constrained Multi-Objective SecureBoost (CMOSB) algorithm to find optimal solutions.
- Score: 23.196686101682737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: SecureBoost is a tree-boosting algorithm that leverages homomorphic encryption to protect data privacy in the vertical federated learning setting. It is widely used in fields such as finance and healthcare due to its interpretability, effectiveness, and privacy-preserving capability. However, SecureBoost suffers from high computational complexity and a risk of label leakage. To harness the full potential of SecureBoost, its hyperparameters should be carefully chosen to strike an optimal balance between utility, efficiency, and privacy. Existing methods set hyperparameters empirically or heuristically, which is far from optimal. To fill this gap, we propose a Constrained Multi-Objective SecureBoost (CMOSB) algorithm to find Pareto optimal solutions, each of which is a set of hyperparameters achieving an optimal tradeoff between utility loss, training cost, and privacy leakage. We design measurements for the three objectives. In particular, privacy leakage is measured using our proposed instance clustering attack. Experimental results demonstrate that CMOSB yields not only hyperparameters superior to the baseline but also optimal sets of hyperparameters that can support the flexible requirements of FL participants.
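To make the search concrete, here is a minimal sketch of the kind of constrained multi-objective hyperparameter search the abstract describes. It is not the paper's algorithm: the three objectives are stubbed with placeholders (in CMOSB, utility loss and training cost come from actually training SecureBoost, and privacy leakage from the proposed instance clustering attack), and the search space and leakage budget below are illustrative assumptions only.

```python
# Minimal sketch of constrained Pareto-front hyperparameter search in the
# spirit of CMOSB. The three objectives are placeholders: in the paper,
# utility loss and training cost are measured by training SecureBoost, and
# privacy leakage by the proposed instance clustering attack.
import random

SEARCH_SPACE = {          # illustrative hyperparameters, not the paper's list
    "max_depth": [3, 4, 5, 6],
    "num_trees": [10, 25, 50, 100],
    "learning_rate": [0.05, 0.1, 0.3],
}
LEAKAGE_BUDGET = 0.5      # the "constrained" part: discard configs leaking more

def evaluate(hp):
    """Placeholder objectives; all three are to be minimized."""
    utility_loss = random.random()                            # e.g. 1 - AUC
    training_cost = hp["num_trees"] * hp["max_depth"] * 0.1   # e.g. seconds
    privacy_leakage = random.random()                         # e.g. attack success rate
    return (utility_loss, training_cost, privacy_leakage)

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

random.seed(0)
candidates = [{k: random.choice(v) for k, v in SEARCH_SPACE.items()}
              for _ in range(50)]
scored = [(hp, evaluate(hp)) for hp in candidates]
feasible = [(hp, obj) for hp, obj in scored if obj[2] <= LEAKAGE_BUDGET]
front = [(hp, obj) for hp, obj in feasible
         if not any(dominates(o, obj) for _, o in feasible if o != obj)]
for hp, (u, c, p) in front:
    print(hp, f"utility_loss={u:.2f} cost={c:.1f} leakage={p:.2f}")
```

Each point on the resulting front is one hyperparameter set, so participants with different priorities (say, a strict privacy budget versus fast training) can pick different points, which is what the "flexible requirements" claim above refers to.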
Related papers
- Hyperparameter Optimization for SecureBoost via Constrained Multi-Objective Federated Learning [26.00375717103131]
We find that SecureBoost and some of its variants are still vulnerable to label leakage.
This vulnerability may lead to a suboptimal trade-off between utility, privacy, and efficiency.
We propose the Constrained Multi-Objective SecureBoost (CMOSB) algorithm to approximate optimal solutions.
arXiv Detail & Related papers (2024-04-06T03:46:42Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) framework and analyze the Byzantine resilience of the proposed algorithm.
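The sketch below illustrates only the two communication-level ingredients named above, ternary compression and per-coordinate majority vote, on toy gradients. How TernaryVote actually injects differential privacy and its f-DP analysis are not reproduced, and the threshold value is an arbitrary assumption.

```python
# Toy sketch of ternary gradient compression plus per-coordinate majority
# vote at the server. The differential privacy mechanism and the f-DP /
# Byzantine-resilience analysis of TernaryVote are not reproduced here.
import numpy as np

def ternarize(grad, threshold):
    """Compress each coordinate to {-1, 0, +1} by thresholded sign."""
    t = np.zeros(grad.shape, dtype=np.int8)
    t[grad > threshold] = 1
    t[grad < -threshold] = -1
    return t

def majority_vote(votes):
    """Per-coordinate majority vote over the workers' ternary messages."""
    return np.sign(votes.sum(axis=0)).astype(np.int8)

rng = np.random.default_rng(0)
worker_grads = [rng.normal(size=8) for _ in range(5)]       # 5 honest workers
worker_grads.append(-10 * worker_grads[0])                  # 1 Byzantine outlier
votes = np.stack([ternarize(g, threshold=0.5) for g in worker_grads])
print(majority_vote(votes))  # voting caps the outlier's influence at one vote per coordinate
```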
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- DP-HyPO: An Adaptive Private Hyperparameter Optimization Framework [31.628466186344582]
We introduce DP-HyPO, a pioneering framework for "adaptive" private hyperparameter optimization.
We provide a comprehensive differential privacy analysis of our framework.
We empirically demonstrate the effectiveness of DP-HyPO on a diverse set of real-world datasets.
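The summary gives no details of DP-HyPO's adaptive mechanism, so the sketch below swaps in a generic, non-adaptive baseline instead: selecting a hyperparameter configuration with the exponential mechanism. It only illustrates why private selection must be randomized rather than "pick the best score"; it is not the paper's algorithm, and the epsilon and sensitivity values are placeholders.

```python
# Generic illustration of *private* hyperparameter selection via the
# exponential mechanism. This is NOT DP-HyPO's adaptive algorithm; it only
# shows randomized, utility-weighted selection over candidate configurations
# whose scores were computed on sensitive data.
import numpy as np

def private_select(scores, epsilon, sensitivity, rng):
    """Sample index i with probability proportional to exp(eps * s_i / (2 * sens))."""
    logits = epsilon * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()                 # for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

rng = np.random.default_rng(0)
candidate_scores = [0.81, 0.84, 0.83, 0.62]   # toy validation accuracies
chosen = private_select(candidate_scores, epsilon=2.0, sensitivity=1.0, rng=rng)
print("selected configuration index:", chosen)
```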
arXiv Detail & Related papers (2023-06-09T07:55:46Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
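As a toy illustration of "distorting model parameters" with a per-parameter trade-off, the sketch below adds Gaussian noise with a separate scale to each coordinate of a client update. How the framework actually chooses those scales per parameter, per client, and per round is its contribution and is not reproduced; the scales below are arbitrary.

```python
# Toy sketch of protecting a client's model update by distorting parameters
# before upload, with a separate noise scale per parameter. The paper's way
# of choosing these scales to personalize the utility-privacy trade-off is
# not reproduced; larger scale means more privacy and less utility.
import numpy as np

def distort(update, noise_scales, rng):
    """Add independent Gaussian noise with a per-parameter scale."""
    return update + rng.normal(scale=noise_scales, size=update.shape)

rng = np.random.default_rng(0)
client_update = rng.normal(size=6)                        # toy model update
noise_scales = np.array([0.0, 0.1, 0.1, 0.5, 0.5, 1.0])   # arbitrary per-parameter scales
print(client_update)
print(distort(client_update, noise_scales, rng))
```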
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
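Since F-PACOH's meta-learning machinery is beyond a short example, the sketch below shows only the safe-BO loop it plugs into, with an ordinary fixed GP prior from scikit-learn standing in for the meta-learned one: candidates are evaluated only if the GP's pessimistic (lower-confidence) estimate stays above a safety threshold. The objective, threshold, and confidence width are all illustrative assumptions.

```python
# Minimal SafeOpt-style loop illustrating safe Bayesian optimization with a
# fixed GP prior (the meta-learned F-PACOH prior is the paper's contribution
# and is not reproduced). Only points whose lower confidence bound clears the
# safety threshold are eligible for evaluation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def f(x):                        # toy objective, also used as the safety signal
    return np.sin(3 * x) + 0.5 * x

SAFE_THRESHOLD = -0.5            # configurations with f(x) below this are "unsafe"
X_grid = np.linspace(0, 2, 200).reshape(-1, 1)
X_obs, y_obs = [0.5], [f(0.5)]   # one known-safe seed point

gp = GaussianProcessRegressor()
for _ in range(10):
    gp.fit(np.array(X_obs).reshape(-1, 1), np.array(y_obs))
    mu, std = gp.predict(X_grid, return_std=True)
    safe = mu - 2 * std > SAFE_THRESHOLD          # pessimistic safety check
    if not safe.any():
        break
    ucb = np.where(safe, mu + 2 * std, -np.inf)   # optimistic value, safe points only
    x_next = X_grid[np.argmax(ucb), 0]
    X_obs.append(x_next)
    y_obs.append(f(x_next))

print("best safe observation:", max(y_obs))
```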
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Trading Off Privacy, Utility and Efficiency in Federated Learning [22.53326117450263]
We formulate and quantify the trade-offs between privacy leakage, utility loss, and efficiency reduction.
We analyze the lower bounds for the privacy leakage, utility loss and efficiency reduction for several widely-adopted protection mechanisms.
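The summary does not give the paper's definitions, but one plausible way to write down such a trade-off, shown below purely to fix notation, is a weighted scalarization over protection mechanisms; the paper's actual formulation and its lower-bound statements may differ.

```latex
% Hypothetical scalarized trade-off over protection mechanisms M, written
% only for orientation; \epsilon_p, \epsilon_u, \epsilon_e denote privacy
% leakage, utility loss, and efficiency reduction as in the summary above.
\min_{M \in \mathcal{M}} \;
  \lambda_p\,\epsilon_p(M) + \lambda_u\,\epsilon_u(M) + \lambda_e\,\epsilon_e(M),
\qquad \lambda_p, \lambda_u, \lambda_e \ge 0 .
```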
arXiv Detail & Related papers (2022-09-01T05:20:04Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach at minimizing constraint violations in policy tasks in safe reinforcement learning.
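A minimal numerical sketch of the log-barrier idea on a one-dimensional toy problem: the objective is augmented with -eta * log(-g(x)) so that iterates stay strictly inside the safe region g(x) <= 0. LBSGD's carefully chosen, smoothness-aware step size is the paper's actual contribution; the crude fixed-step-with-halving rule below is a stand-in.

```python
# Minimal sketch of log-barrier gradient descent (the idea behind LBSGD):
# augment the objective f with -eta * log(-g) so iterates are pushed away
# from the constraint boundary g(x) <= 0. The paper's step-size rule is
# replaced by a crude halving heuristic.
import numpy as np

def f(x): return (x - 3.0) ** 2           # toy objective, minimum at x = 3
def g(x): return x - 2.0                  # safety constraint: g(x) <= 0

def barrier_grad(x, eta=0.1, h=1e-6):
    def B(z): return f(z) - eta * np.log(-g(z))
    return (B(x + h) - B(x - h)) / (2 * h)   # numerical gradient of the barrier

x = 0.0                                    # strictly feasible start: g(0) = -2 < 0
for _ in range(200):
    grad, step = barrier_grad(x), 0.05
    while g(x - step * grad) >= 0:         # shrink the step to stay feasible
        step /= 2
    x -= step * grad

print(f"x = {x:.3f}, constraint g(x) = {g(x):.3f}")   # approaches 2 from below
```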
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are usually sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning [10.457660611114457]
We show how to select between policies and value functions produced by different training algorithms in offline reinforcement learning.
We use BVFT [XJ21], a recent theoretical advance in value-function selection, and demonstrate its effectiveness in discrete-action benchmarks such as Atari.
arXiv Detail & Related papers (2021-10-26T20:12:11Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
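The one concrete ingredient the summary exposes is the averaged zeroth-order hyper-gradient. Below is a minimal sketch of that estimator using two-point finite differences along random directions; the function F is a cheap stand-in for "run the full training algorithm with hyperparameters lam and return the validation loss", and the A-based constrained formulation is not reproduced.

```python
# Minimal sketch of an averaged zeroth-order hyper-gradient update, the
# ingredient the HOZOG summary highlights. F(lam) is a cheap quadratic
# stand-in for the expensive train-then-validate objective.
import numpy as np

def F(lam):
    """Stand-in for the expensive hyperparameter objective."""
    return np.sum((lam - np.array([0.3, 1.5])) ** 2)

def zeroth_order_hypergrad(lam, rng, mu=1e-2, num_samples=20):
    """Average two-point finite-difference estimates along random directions."""
    est = np.zeros_like(lam)
    for _ in range(num_samples):
        u = rng.normal(size=lam.shape)
        est += (F(lam + mu * u) - F(lam - mu * u)) / (2 * mu) * u
    return est / num_samples

rng = np.random.default_rng(0)
lam = np.array([2.0, 0.0])                   # initial hyperparameters
for _ in range(100):
    lam = lam - 0.05 * zeroth_order_hypergrad(lam, rng)
print("tuned hyperparameters:", lam)         # moves toward the stand-in optimum [0.3, 1.5]
```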
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.