Theoretically Principled Federated Learning for Balancing Privacy and
Utility
- URL: http://arxiv.org/abs/2305.15148v2
- Date: Sat, 3 Jun 2023 12:35:57 GMT
- Title: Theoretically Principled Federated Learning for Balancing Privacy and
Utility
- Authors: Xiaojin Zhang, Wenjie Li, Kai Chen, Shutao Xia, Qiang Yang
- Abstract summary: We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It achieves a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
- Score: 61.03993520243198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a general learning framework for protection mechanisms that
preserve privacy by distorting model parameters, which facilitates the
trade-off between privacy and utility. The algorithm is applicable to any
privacy measurement that maps a distortion to a real value. It can
achieve a personalized utility-privacy trade-off for each model parameter, on
each client, at each communication round in federated learning. Such adaptive
and fine-grained protection can improve the effectiveness of privacy-preserving
federated learning.
Theoretically, we show that the gap between the utility loss of the protection
hyperparameter output by our algorithm and that of the optimal protection
hyperparameter is sub-linear in the total number of iterations. This
sublinearity implies that the average gap between the performance of our
algorithm and the optimal performance goes to zero as the number of iterations
goes to infinity. Further, we provide the convergence rate of our proposed
algorithm. Experiments on benchmark datasets verify that our method achieves
better utility than the baseline methods under the same privacy budget.
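The per-parameter, per-client distortion described in the abstract can be sketched as a client-side step that adds noise whose scale is chosen parameter-wise from a privacy weight and a utility weight. This is a minimal illustration assuming Gaussian distortion and a hypothetical closed-form noise scale; the function names and the weighting rule are illustrative, not the paper's actual mechanism.

```python
import numpy as np

def distort_update(update, privacy_weight, utility_weight, rng):
    """Distort a client's model update with per-parameter Gaussian noise.

    A larger noise scale increases the privacy measure (more distortion)
    but also increases utility loss, so the scale trades the two off.
    The closed form below is a hypothetical stand-in for the adaptive
    choice the framework would make per parameter and per round.
    """
    # Noise scale grows with the privacy weight and shrinks with the
    # utility weight (illustrative rule, not the paper's).
    sigma = privacy_weight / (utility_weight + 1e-8)
    noise = rng.normal(0.0, sigma, size=update.shape)
    return update + noise

rng = np.random.default_rng(0)
update = np.ones(4)
protected = distort_update(update, privacy_weight=0.5, utility_weight=1.0, rng=rng)
```

In a federated round, each client would apply such a distortion to its update before sending it to the server, with the weights adapted per parameter and per round.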
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvement in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Immersion and Invariance-based Coding for Privacy-Preserving Federated Learning [1.4226399196408985]
Federated learning (FL) has emerged as a method to preserve privacy in collaborative distributed learning.
We introduce a privacy-preserving FL framework that combines differential privacy and system immersion tools from control theory.
We demonstrate that the proposed privacy-preserving scheme can be tailored to offer any desired level of differential privacy for both local and global model parameters.
arXiv Detail & Related papers (2024-09-25T15:04:42Z)
- Efficient Privacy-Preserving KAN Inference Using Homomorphic Encryption [9.0993556073886]
Homomorphic encryption (HE) facilitates privacy-preserving inference for deep learning models.
The complex structure of Kolmogorov-Arnold Networks (KANs), incorporating nonlinear elements such as the SiLU activation function and B-spline functions, renders existing privacy-preserving inference techniques inadequate.
We propose an accurate and efficient privacy-preserving inference scheme tailored for KANs.
arXiv Detail & Related papers (2024-09-12T04:51:27Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
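The two ingredients named in the summary can be illustrated as follows: an unbiased stochastic ternary compressor followed by a coordinate-wise majority vote across workers. All names and the exact compression rule here are assumptions for illustration, not TernaryVote's actual specification.

```python
import numpy as np

def ternarize(grad, rng):
    """Stochastically compress a gradient to values in {-1, 0, +1}.

    Each coordinate is mapped to sign(g) with probability |g| / s
    (s = max |g|) and to 0 otherwise, which keeps the estimate
    unbiased up to the scale s (illustrative sketch).
    """
    s = np.max(np.abs(grad)) + 1e-12
    keep = rng.random(grad.shape) < np.abs(grad) / s
    return (np.sign(grad) * keep).astype(int)

def majority_vote(ternary_grads):
    """Aggregate workers' ternary gradients by coordinate-wise majority vote."""
    return np.sign(np.sum(ternary_grads, axis=0)).astype(int)

rng = np.random.default_rng(1)
grads = [rng.normal(size=8) for _ in range(5)]
votes = np.stack([ternarize(g, rng) for g in grads])
agg = majority_vote(votes)
```

The server then takes a step in the direction of the voted sign; the randomness of the compressor is what the privacy analysis would exploit, while the vote limits the influence of any single (possibly Byzantine) worker.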
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- Towards Achieving Near-optimal Utility for Privacy-Preserving Federated Learning via Data Generation and Parameter Distortion [19.691227962303515]
Federated learning (FL) enables participating parties to collaboratively build a global model with boosted utility without disclosing private data information.
Various protection mechanisms have to be adopted to fulfill the requirements of preserving privacy and maintaining high model utility.
arXiv Detail & Related papers (2023-05-07T14:34:15Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
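The standard DP-SGD recipe behind this line of work clips each example's gradient to a fixed norm, averages the clipped gradients, and adds Gaussian noise to the average. A minimal sketch of that step (parameter names illustrative, not this paper's specific low-noise variant):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One differentially private SGD step.

    Each example's gradient is clipped to clip_norm so no single
    example can dominate, then the clipped mean is perturbed with
    Gaussian noise scaled by noise_multiplier * clip_norm.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean.shape)
    return mean + noise

rng = np.random.default_rng(2)
per_example = [np.array([3.0, 4.0]), np.array([0.0, 1.0])]
noisy_mean = dp_sgd_step(per_example, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
```

The tension the abstract describes is visible here: a smaller clip norm or larger noise multiplier strengthens privacy but degrades the gradient estimate, and hence prediction accuracy.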
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since sensitive data are involved, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Privacy Preserving Recalibration under Domain Shift [119.21243107946555]
We introduce a framework that abstracts out the properties of recalibration problems under differential privacy constraints.
We also design a novel recalibration algorithm, accuracy temperature scaling, that outperforms prior work on private datasets.
arXiv Detail & Related papers (2020-08-21T18:43:37Z)
- Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
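The combination just described, random sparsification plus gradient perturbation, can be sketched as below; the d/k rescaling, parameter names, and noise placement are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def sparsify_and_perturb(grad, k, sigma, rng):
    """Keep a random subset of k coordinates of a d-dimensional gradient
    and add Gaussian noise. Transmitting only a random fraction of
    coordinates both cuts communication and amplifies the privacy
    afforded by the perturbation (illustrative sketch).
    """
    mask = np.zeros_like(grad)
    idx = rng.choice(grad.size, size=k, replace=False)
    mask[idx] = 1.0
    noisy = grad + rng.normal(0.0, sigma, size=grad.shape)
    # Rescale by d/k so the sparsified estimate stays unbiased in expectation.
    return mask * noisy * (grad.size / k)

rng = np.random.default_rng(3)
grad = rng.normal(size=10)
sparse = sparsify_and_perturb(grad, k=3, sigma=0.1, rng=rng)
```

Each agent would send only the k surviving (noisy) coordinates to the server, so an observer sees a perturbed, heavily subsampled view of the true gradient.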
arXiv Detail & Related papers (2020-08-01T20:22:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.