A Meta-learning Framework for Tuning Parameters of Protection Mechanisms
in Trustworthy Federated Learning
- URL: http://arxiv.org/abs/2305.18400v3
- Date: Wed, 28 Feb 2024 13:45:53 GMT
- Title: A Meta-learning Framework for Tuning Parameters of Protection Mechanisms
in Trustworthy Federated Learning
- Authors: Xiaojin Zhang, Yan Kang, Lixin Fan, Kai Chen, Qiang Yang
- Abstract summary: Trustworthy Federated Learning (TFL) typically leverages protection mechanisms to guarantee privacy.
We propose a framework that formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction.
- Score: 27.909662318838873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trustworthy Federated Learning (TFL) typically leverages protection
mechanisms to guarantee privacy. However, protection mechanisms inevitably
introduce utility loss or efficiency reduction while protecting data privacy.
Therefore, protection mechanisms and their parameters should be carefully
chosen to strike an optimal tradeoff between *privacy leakage*,
*utility loss*, and *efficiency reduction*. To this end,
federated learning practitioners need tools to measure the three factors and
optimize the tradeoff between them to choose the protection mechanism that is
most appropriate to the application at hand. Motivated by this requirement, we
propose a framework that (1) formulates TFL as a problem of finding a
protection mechanism to optimize the tradeoff between privacy leakage, utility
loss, and efficiency reduction and (2) formally defines bounded measurements of
the three factors. We then propose a meta-learning algorithm to approximate
this optimization problem and find optimal protection parameters for
representative protection mechanisms, including Randomization, Homomorphic
Encryption, Secret Sharing, and Compression. We further design estimation
algorithms to quantify these found optimal protection parameters in a practical
horizontal federated learning setting and provide a theoretical analysis of the
estimation error.
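The tradeoff described in the abstract can be illustrated with a minimal sketch: pick the parameter of a protection mechanism that minimizes a weighted sum of privacy leakage, utility loss, and efficiency reduction. The Gaussian-noise Randomization setting, the surrogate measurement functions, and the weights below are illustrative assumptions, not the paper's formal definitions, and a plain grid search stands in for the paper's meta-learning algorithm.

```python
import numpy as np

# Toy surrogate measurements (assumptions for illustration, not the
# paper's bounded measurements) for a Gaussian Randomization mechanism
# with noise scale sigma.

def privacy_leakage(sigma):
    # Leakage shrinks as more noise is added.
    return 1.0 / (1.0 + sigma)

def utility_loss(sigma):
    # More noise distorts model updates more.
    return sigma ** 2 / (1.0 + sigma ** 2)

def efficiency_reduction(sigma):
    # Randomization adds little overhead; modeled as a small constant.
    return 0.05

def tradeoff_objective(sigma, w=(1.0, 1.0, 0.1)):
    # Weighted sum of the three factors the framework trades off.
    wp, wu, we = w
    return (wp * privacy_leakage(sigma)
            + wu * utility_loss(sigma)
            + we * efficiency_reduction(sigma))

# Grid search over the protection parameter, standing in for the
# meta-learning loop that the paper proposes.
grid = np.linspace(0.01, 5.0, 500)
best_sigma = min(grid, key=tradeoff_objective)
print(f"chosen noise scale: {best_sigma:.3f}")
```

With these toy surrogates, too little noise is penalized by leakage and too much by utility loss, so the search settles on an intermediate noise scale; the paper replaces the surrogates with formally defined, bounded measurements and handles Homomorphic Encryption, Secret Sharing, and Compression analogously.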
Related papers
- One-Shot Safety Alignment for Large Language Models via Optimal Dualization [64.52223677468861]
This paper presents a dualization perspective that reduces constrained alignment to an equivalent unconstrained alignment problem.
We do so by pre-optimizing a smooth and convex dual function that has a closed form.
Our strategy leads to two practical algorithms in model-based and preference-based settings.
arXiv Detail & Related papers (2024-05-29T22:12:52Z)
- SecureBoost Hyperparameter Tuning via Multi-Objective Federated Learning [23.196686101682737]
SecureBoost is a tree-boosting algorithm that leverages homomorphic encryption to protect data privacy in the vertical federated learning setting.
SecureBoost suffers from high computational complexity and a risk of label leakage.
We propose a Constrained Multi-Objective SecureBoost (CMOSB) algorithm to find optimal solutions.
arXiv Detail & Related papers (2023-07-20T04:45:59Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Towards Achieving Near-optimal Utility for Privacy-Preserving Federated Learning via Data Generation and Parameter Distortion [19.691227962303515]
Federated learning (FL) enables participating parties to collaboratively build a global model with boosted utility without disclosing private data information.
Various protection mechanisms have to be adopted to fulfill the requirements of preserving *privacy* and maintaining high model *utility*.
arXiv Detail & Related papers (2023-05-07T14:34:15Z)
- Optimizing Privacy, Utility and Efficiency in Constrained Multi-Objective Federated Learning [20.627157142499378]
We develop two improved constrained multi-objective federated learning (CMOFL) algorithms based on NSGA-II and PSL.
We design specific measurements of privacy leakage, utility loss, and training cost for three privacy protection mechanisms.
Empirical experiments conducted under each of the three protection mechanisms demonstrate the effectiveness of our proposed algorithms.
arXiv Detail & Related papers (2023-04-29T17:55:38Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Trading Off Privacy, Utility and Efficiency in Federated Learning [22.53326117450263]
We formulate and quantify the trade-offs between privacy leakage, utility loss, and efficiency reduction.
We analyze the lower bounds for the privacy leakage, utility loss and efficiency reduction for several widely-adopted protection mechanisms.
arXiv Detail & Related papers (2022-09-01T05:20:04Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since sensitive data are involved, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameter transmission.
In this paper, we propose effective covert model poisoning (CMP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.