Majority Vote for Distributed Differentially Private Sign Selection
- URL: http://arxiv.org/abs/2209.04419v2
- Date: Tue, 4 Jun 2024 15:11:25 GMT
- Title: Majority Vote for Distributed Differentially Private Sign Selection
- Authors: Weidong Liu, Jiyuan Tu, Xiaojun Mao, Xi Chen
- Abstract summary: We propose a distributed group differentially private Majority Vote mechanism for the sign selection problem in a distributed setup.
For enhanced applicability, we study the private sign selection for mean estimation and linear regression problems.
- Score: 9.682477614512157
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy-preserving data analysis has become more prevalent in recent years. In this study, we propose a distributed group differentially private Majority Vote mechanism for the sign selection problem in a distributed setup. To achieve this, we apply iterative peeling to the stability function and use the exponential mechanism to recover the signs. For enhanced applicability, we study private sign selection for mean estimation and linear regression problems in distributed systems. Our method recovers the support and signs with the same optimal signal-to-noise ratio as in the non-private scenario, which improves on contemporary work on private variable selection. Moreover, sign selection consistency is justified by theoretical guarantees. Simulation studies demonstrate the effectiveness of the proposed method.
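The pipeline described in the abstract (each machine votes with a local sign; a server privatizes the per-coordinate selection with the exponential mechanism) can be illustrated with a toy sketch. This is not the paper's algorithm: the stability function, iterative peeling, and the full privacy accounting are omitted, and all function names here are our own.

```python
import numpy as np

def private_sign_majority_vote(local_estimates, epsilon, seed=None):
    """Per coordinate, each machine votes with the sign of its local
    estimate; an exponential mechanism over {-1, +1} with utility equal
    to the vote count selects the released sign.  Changing one machine's
    data flips at most one vote, so the utility has sensitivity 1 and
    the exponential-mechanism weights are exp(epsilon * utility / 2)."""
    rng = np.random.default_rng(seed)
    votes = np.sign(np.asarray(local_estimates, dtype=float))  # (machines, dims)
    signs = np.empty(votes.shape[1])
    for j in range(votes.shape[1]):
        utility = np.array([(votes[:, j] == s).sum() for s in (-1.0, 1.0)])
        logits = epsilon * utility / 2.0
        logits -= logits.max()  # numerical stability before exponentiating
        probs = np.exp(logits) / np.exp(logits).sum()
        signs[j] = rng.choice([-1.0, 1.0], p=probs)
    return signs
```

With, say, 50 machines in perfect agreement and epsilon = 2, the wrong sign is selected with probability about exp(-50), so the majority vote is recovered essentially surely while each coordinate's selection is epsilon-differentially private in the vote counts.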
Related papers
- An Efficient Difference-of-Convex Solver for Privacy Funnel [3.069335774032178]
We propose an efficient solver for the privacy funnel (PF) method.
The proposed DC separation results in a closed-form update equation.
We evaluate the proposed solver on the MNIST and Fashion-MNIST datasets.
arXiv Detail & Related papers (2024-03-02T01:05:25Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
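The "ternary compressor plus majority vote" idea can be roughly illustrated as follows. This is our own sketch, not TernaryVote's construction: the DP noise calibration and Byzantine-resilience machinery are omitted, and the function names are hypothetical.

```python
import numpy as np

def ternary_compress(grad, seed=None):
    """Stochastic ternary quantizer: coordinate i votes sign(grad[i]) with
    probability |grad[i]| / max|grad|, else 0.  E[vote_i] equals
    grad[i] / max|grad|, so each vote is unbiased up to a known scale."""
    rng = np.random.default_rng(seed)
    grad = np.asarray(grad, dtype=float)
    scale = np.max(np.abs(grad))
    if scale == 0.0:
        return np.zeros_like(grad)
    fire = rng.random(grad.shape) < np.abs(grad) / scale
    return np.sign(grad) * fire

def majority_vote(worker_votes):
    """Server-side aggregation: sign of the summed ternary votes."""
    return np.sign(np.sum(np.asarray(worker_votes, dtype=float), axis=0))
```

Each worker transmits one value in {-1, 0, +1} per coordinate (compression), and the compressor's inherent randomness is what a DP analysis would quantify; the sketch above implements only the communication pattern.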
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- Learning Fair Policies for Multi-stage Selection Problems from Observational Data [4.282745020665833]
We consider the problem of learning fair policies for multi-stage selection problems from observational data.
This problem arises in several high-stakes domains such as company hiring, loan approval, or bail decisions where outcomes are only observed for those selected.
We propose a multi-stage framework that can be augmented with various fairness constraints, such as demographic parity or equal opportunity.
arXiv Detail & Related papers (2023-12-20T16:33:15Z)
- Optimal Unbiased Randomizers for Regression with Label Differential Privacy [61.63619647307816]
We propose a new family of label randomizers for training regression models under the constraint of label differential privacy (DP).
We demonstrate that these randomizers achieve state-of-the-art privacy-utility trade-offs on several datasets.
arXiv Detail & Related papers (2023-12-09T19:58:34Z)
- On the Computational Complexity of Private High-dimensional Model Selection [18.964255744068122]
We consider the problem of model selection in a high-dimensional sparse linear regression model under privacy constraints.
We propose an efficient Metropolis-Hastings algorithm and, under certain regularity conditions, establish that it enjoys polynomial mixing time to its stationary distribution.
arXiv Detail & Related papers (2023-10-11T19:53:15Z)
- Training generative models from privatized data [9.584000954415476]
Local differential privacy is a powerful method for privacy-preserving data collection.
We develop a framework for training Generative Adversarial Networks (GANs) on differentially privatized data.
arXiv Detail & Related papers (2023-06-15T23:28:45Z)
- Regression with Label Differential Privacy [64.21020761920322]
We derive a label DP randomization mechanism that is optimal under a given regression loss function.
We prove that the optimal mechanism takes the form of a "randomized response on bins".
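A generic "randomized response on bins" can be sketched as below. Note this is plain k-ary randomized response over label bins, not the paper's optimized mechanism; the bin edges and midpoint decoding are our illustrative choices.

```python
import numpy as np

def rr_on_bins(label, bin_edges, epsilon, seed=None):
    """Quantize the label to a bin, keep the true bin with probability
    e^eps / (e^eps + k - 1), otherwise release a uniformly random other
    bin; decode the released bin as its midpoint."""
    rng = np.random.default_rng(seed)
    k = len(bin_edges) - 1
    true_bin = int(np.clip(np.digitize(label, bin_edges) - 1, 0, k - 1))
    keep_prob = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < keep_prob:
        released = true_bin
    else:
        released = rng.choice([b for b in range(k) if b != true_bin])
    return 0.5 * (bin_edges[released] + bin_edges[released + 1])
```

The likelihood ratio of any released bin under two different true bins is at most e^eps, so the bin index satisfies eps-label-DP; the utility question the paper addresses is how to choose the randomizer (and decoding) optimally for a given regression loss.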
arXiv Detail & Related papers (2022-12-12T17:41:32Z)
- A Survey on Preserving Fairness Guarantees in Changing Environments [4.926395463398194]
The literature of algorithmic fairness has grown considerably over the last decade.
In practice, dissimilarity between the training and deployment environments exists.
An emerging line of research studies how to preserve fairness guarantees under such shifts.
arXiv Detail & Related papers (2022-11-14T17:02:19Z)
- Local Graph-homomorphic Processing for Privatized Distributed Systems [57.14673504239551]
We show that the added noise does not affect the performance of the learned model.
This is a significant improvement over previous work on differential privacy for distributed algorithms.
arXiv Detail & Related papers (2022-10-26T10:00:14Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
- Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees [49.91477656517431]
Quantization-based solvers have been widely adopted in Federated Learning (FL).
No existing methods enjoy all the aforementioned properties.
We propose an intuitively-simple yet theoretically-sound method based on SIGNSGD to bridge the gap.
arXiv Detail & Related papers (2020-02-25T15:12:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.