Towards Achieving Near-optimal Utility for Privacy-Preserving Federated
Learning via Data Generation and Parameter Distortion
- URL: http://arxiv.org/abs/2305.04288v3
- Date: Thu, 7 Mar 2024 07:27:17 GMT
- Title: Towards Achieving Near-optimal Utility for Privacy-Preserving Federated
Learning via Data Generation and Parameter Distortion
- Authors: Xiaojin Zhang, Kai Chen, Qiang Yang
- Abstract summary: Federated learning (FL) enables participating parties to collaboratively build a global model with boosted utility without disclosing private data information.
Various protection mechanisms have to be adopted to fulfill the requirements in preserving \textit{privacy} and maintaining high model \textit{utility}.
- Score: 19.691227962303515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables participating parties to collaboratively
build a global model with boosted utility without disclosing private data
information. Appropriate protection mechanisms have to be adopted to fulfill
the requirements in preserving \textit{privacy} and maintaining high model
\textit{utility}. The nature of the widely adopted protection mechanisms,
including the \textit{Randomization Mechanism} and the \textit{Compression
Mechanism}, is to protect privacy by distorting model parameters. We measure
the utility loss via the gap between the original and the distorted model
parameters. We want to identify under what general conditions
privacy-preserving federated learning can achieve near-optimal utility via
data generation and parameter distortion. To provide an avenue for achieving
near-optimal utility, we present an upper bound on the utility loss, expressed
in terms of two main quantities: a variance-reduction term and a model
parameter discrepancy term. Our
analysis inspires the design of appropriate protection parameters for the
protection mechanisms to achieve near-optimal utility and meet the privacy
requirements simultaneously. The main techniques for the protection mechanism
include parameter distortion and data generation, which are generic and can be
applied extensively. Furthermore, we provide an upper bound for the trade-off
between privacy and utility, which, together with the lower bound provided by
the no-free-lunch theorem in federated learning (\cite{zhang2022no}), forms the
conditions for achieving an optimal trade-off.
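For intuition, here is a minimal sketch (an illustration, not the paper's actual algorithm) of the two distortion-based mechanism families named above, Gaussian randomization and top-k compression, together with the utility-loss proxy the abstract describes: the gap between the original and the distorted parameters. The function names, the noise scale `sigma`, and the sparsity level `k` are assumptions made for the example.

```python
import numpy as np

def randomization_mechanism(theta: np.ndarray, sigma: float) -> np.ndarray:
    """Distort parameters by adding Gaussian noise (a randomization mechanism)."""
    return theta + np.random.normal(0.0, sigma, size=theta.shape)

def compression_mechanism(theta: np.ndarray, k: int) -> np.ndarray:
    """Distort parameters by keeping only the k largest-magnitude entries
    (a simple top-k compression mechanism)."""
    distorted = np.zeros_like(theta)
    top_k = np.argsort(np.abs(theta))[-k:]
    distorted[top_k] = theta[top_k]
    return distorted

def utility_gap(theta: np.ndarray, theta_distorted: np.ndarray) -> float:
    """Utility-loss proxy: the gap between original and distorted parameters."""
    return float(np.linalg.norm(theta - theta_distorted))

theta = np.random.randn(1000)  # stand-in for a model parameter vector
noisy = randomization_mechanism(theta, sigma=0.1)
sparse = compression_mechanism(theta, k=100)
print("randomization gap:", utility_gap(theta, noisy))
print("compression gap:  ", utility_gap(theta, sparse))
```

Tuning `sigma` (up for more privacy) or `k` (down for more compression) trades a larger parameter gap, i.e., more utility loss, for stronger protection, which is the trade-off the paper's bounds characterize.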
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Immersion and Invariance-based Coding for Privacy-Preserving Federated Learning [1.5989047000011911]
Federated learning (FL) has emerged as a method to preserve privacy in collaborative distributed learning.
We introduce a privacy-preserving FL framework that combines differential privacy and system immersion tools from control theory.
We demonstrate that the proposed privacy-preserving scheme can be tailored to offer any desired level of differential privacy for both local and global model parameters.
arXiv Detail & Related papers (2024-09-25T15:04:42Z)
- CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
arXiv Detail & Related papers (2024-09-20T00:23:44Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting (a toy bounded-support sketch follows this entry).
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
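As a companion to the bounded-support entry above, here is a toy sketch of one natural way to give the Gaussian mechanism bounded support: rejection-sample the noisy output into an interval, which yields a truncated Gaussian. The interval `[lo, hi]`, the scale `sigma`, and the function name are illustrative assumptions; the paper's actual constructions and their data-dependent privacy accounting are not reproduced here.

```python
import numpy as np

def bounded_gaussian_mechanism(value: float, sigma: float,
                               lo: float, hi: float,
                               rng: np.random.Generator) -> float:
    """Gaussian mechanism with bounded support via rejection sampling.

    Resamples until the noisy output lands in [lo, hi], so the output follows
    a Gaussian centred at `value`, truncated to [lo, hi]. Toy illustration
    only; it can loop for a long time if [lo, hi] sits far in the tail.
    """
    while True:
        out = value + rng.normal(0.0, sigma)
        if lo <= out <= hi:
            return out

rng = np.random.default_rng(0)
print(bounded_gaussian_mechanism(0.5, sigma=1.0, lo=0.0, hi=1.0, rng=rng))
```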
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users; the standard subsampling amplification bound is sketched after this list.
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- A Meta-learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning [27.909662318838873]
Trustworthy Federated Learning (TFL) typically leverages protection mechanisms to guarantee privacy.
We propose a framework that formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction.
arXiv Detail & Related papers (2023-05-28T15:01:18Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Trading Off Privacy, Utility and Efficiency in Federated Learning [22.53326117450263]
We formulate and quantify the trade-offs between privacy leakage, utility loss, and efficiency reduction.
We analyze the lower bounds for the privacy leakage, utility loss and efficiency reduction for several widely-adopted protection mechanisms.
arXiv Detail & Related papers (2022-09-01T05:20:04Z)
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models [69.66654761324702]
We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve selective differential privacy (SDP) for large Transformer-based language models.
Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
arXiv Detail & Related papers (2022-04-15T22:36:55Z)
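As referenced in the subsampling entry above, here is a minimal sketch of the classical amplification-by-subsampling bound for Poisson subsampling. This is the textbook result, not the mechanism-specific framework that paper develops: a mechanism that is (eps, delta)-DP, run on a Poisson subsample with rate q, satisfies (log(1 + q(e^eps - 1)), q * delta)-DP on the full dataset.

```python
import math

def amplified_privacy(eps: float, delta: float, q: float) -> tuple[float, float]:
    """Classical amplification-by-subsampling bound for Poisson subsampling.

    An (eps, delta)-DP mechanism applied to a subsample in which each record
    is included independently with probability q satisfies
    (log(1 + q * (exp(eps) - 1)), q * delta)-DP on the full dataset.
    """
    eps_amplified = math.log(1.0 + q * (math.exp(eps) - 1.0))
    return eps_amplified, q * delta

# Example: a (1.0, 1e-5)-DP mechanism with a 1% sampling rate.
print(amplified_privacy(eps=1.0, delta=1e-5, q=0.01))  # roughly (0.017, 1e-7)
```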