Differential Secrecy for Distributed Data and Applications to Robust
Differentially Secure Vector Summation
- URL: http://arxiv.org/abs/2202.10618v1
- Date: Tue, 22 Feb 2022 02:06:42 GMT
- Title: Differential Secrecy for Distributed Data and Applications to Robust
Differentially Secure Vector Summation
- Authors: Kunal Talwar
- Abstract summary: We present a protocol for vector summation that verifies that the Euclidean norm of each contribution is approximately bounded.
Unlike SMC algorithms that inevitably cast integers to elements of a large finite field, our algorithms work over integers/reals, which may allow for additional efficiencies.
- Score: 32.004283989604154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computing the noisy sum of real-valued vectors is an important primitive in
differentially private learning and statistics. In private federated learning
applications, these vectors are held by client devices, leading to a
distributed summation problem. Standard Secure Multiparty Computation (SMC)
protocols for this problem are susceptible to poisoning attacks, where a client
can exert a large influence on the sum without being detected.
In this work, we propose a poisoning-robust private summation protocol in the
multiple-server setting, recently studied in PRIO. We present a protocol for
vector summation that verifies that the Euclidean norm of each contribution is
approximately bounded. We show that by relaxing the security constraint in SMC
to a differential-privacy-like guarantee, one can improve over PRIO in both
communication requirements and client-side computation. Unlike SMC
algorithms that inevitably cast integers to elements of a large finite field,
our algorithms work over integers/reals, which may allow for additional
efficiencies.
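The core primitive the abstract describes, additively secret-sharing real vectors across servers and releasing only a noise-protected sum, can be sketched as follows. This is an illustrative toy, not the paper's protocol: the norm-verification step is omitted, and the function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def share(v, n_servers=2):
    """Additively secret-share a real vector: the shares sum to v,
    and any single share looks like random noise on its own."""
    masks = [rng.normal(size=v.shape) for _ in range(n_servers - 1)]
    return masks + [v - sum(masks)]

def noisy_sum(client_vectors, sigma, n_servers=2):
    """Each server adds up the shares it receives; recombining the
    per-server totals yields the exact sum, to which Gaussian noise
    is added for a differential-privacy-style guarantee."""
    totals = [np.zeros_like(client_vectors[0]) for _ in range(n_servers)]
    for v in client_vectors:
        for s, sh in enumerate(share(v, n_servers)):
            totals[s] += sh
    exact = sum(totals)
    return exact + rng.normal(scale=sigma, size=exact.shape)

clients = [rng.normal(size=4) for _ in range(10)]
estimate = noisy_sum(clients, sigma=0.1)
```

Note that, in line with the abstract's final point, everything here happens over the reals directly, with no embedding into a large finite field.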
Related papers
- Beyond Statistical Estimation: Differentially Private Individual Computation via Shuffling [21.031062710893867]
This paper introduces a novel paradigm termed Private Individual Computation (PIC)
PIC enables personalized outputs while preserving privacy, and enjoys privacy amplification through shuffling.
We present an optimal randomizer, the Minkowski Response, designed for the PIC model to enhance utility.
arXiv Detail & Related papers (2024-06-26T07:53:48Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- PINE: Efficient Norm-Bound Verification for Secret-Shared Vectors [25.30406294459483]
A two-server system such as PRIO allows for scalable aggregation of secret-shared vectors.
Existing protocols for ensuring bounded-norm contributions either incur a large communication overhead, or only allow for approximate verification of the norm bound.
We propose Private Inexpensive Norm Enforcement (PINE), a new protocol that allows exact norm verification with little communication overhead.
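The statistical idea behind approximate norm verification can be illustrated in the clear (PINE and the present paper run such checks on secret-shared data with cryptographic techniques, all of which this sketch omits). For Gaussian g, E[⟨g, v⟩²] = ‖v‖², so averaging a few squared random projections estimates the squared norm; the `slack` parameter here is a hypothetical tolerance of our own.

```python
import numpy as np

rng = np.random.default_rng(1)

def approx_norm_check(v, bound, k=64, slack=2.0):
    """Estimate ||v||^2 from k random Gaussian projections and
    accept if it is within a slack factor of the claimed bound."""
    G = rng.normal(size=(k, len(v)))
    est_sq = float(np.mean((G @ v) ** 2))
    return est_sq <= (slack * bound) ** 2

honest = approx_norm_check(0.1 * np.ones(16), bound=1.0)    # norm 0.4
cheater = approx_norm_check(10.0 * np.ones(16), bound=1.0)  # norm 40.0
```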
arXiv Detail & Related papers (2023-11-16T23:54:21Z)
- Samplable Anonymous Aggregation for Private Federated Data Analysis [25.35309084903802]
Locally differentially private algorithms require little trust but are (provably) limited in their utility.
Centrally differentially private algorithms can allow significantly better utility but require a trusted curator.
Our first contribution is to propose a new primitive that allows for efficient implementation of several commonly used algorithms.
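The trust/utility trade-off named in this entry can be made concrete with the two textbook mechanisms for a private count: randomized response in the local model versus a single Laplace draw at a trusted curator in the central model. A rough sketch, with an illustrative epsilon:

```python
import numpy as np

rng = np.random.default_rng(2)
eps, n = 1.0, 10_000
bits = rng.integers(0, 2, size=n)  # each client holds one private bit

# Local model: every client randomizes its own bit (randomized
# response), so the raw data is never revealed; error grows like sqrt(n).
p = np.exp(eps) / (np.exp(eps) + 1)   # probability of reporting truthfully
truthful = rng.random(n) < p
reports = np.where(truthful, bits, 1 - bits)
local_est = (reports.sum() - (1 - p) * n) / (2 * p - 1)  # debiased count

# Central model: a trusted curator sees the raw bits and adds one
# Laplace(1/eps) draw; error is O(1/eps), independent of n.
central_est = bits.sum() + rng.laplace(scale=1.0 / eps)
```

The central estimate is typically within a few counts of the truth, while the local one is off by on the order of a hundred at this n, which is the gap the paper's intermediate trust model aims to close.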
arXiv Detail & Related papers (2023-07-27T17:19:37Z)
- Private Federated Frequency Estimation: Adapting to the Hardness of the Instance [40.518740805553634]
In federated frequency estimation (FFE), multiple clients work together to estimate the frequencies of their collective data.
We show that under the more practical multi-round FFE setting, simple adaptations of count sketching are strictly sub-optimal.
We propose a novel hybrid sketching algorithm that is provably more accurate.
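As background, the count-sketch primitive this line of work adapts fits in a few lines. This toy version is non-private and single-round, purely to illustrate the data structure; the paper's hybrid sketch and its privacy mechanisms are not shown, and the hash construction here is our own simplification.

```python
import numpy as np

rng = np.random.default_rng(3)

class CountSketch:
    """d hash rows of width w; a key's frequency is estimated as the
    median of its d signed counter readings, which is robust to the
    occasional hash collision."""
    def __init__(self, d=5, w=64):
        self.d, self.w = d, w
        self.table = np.zeros((d, w))
        self.seeds = rng.integers(1, 2**31 - 1, size=(d, 2))

    def _bucket(self, key, row):
        a, b = self.seeds[row]
        h = (a * key + b) % (2**31 - 1)
        return h % self.w, 1 if (h // self.w) % 2 == 0 else -1

    def add(self, key, count=1):
        for r in range(self.d):
            col, sign = self._bucket(key, r)
            self.table[r, col] += sign * count

    def estimate(self, key):
        reads = []
        for r in range(self.d):
            col, sign = self._bucket(key, r)
            reads.append(sign * self.table[r, col])
        return float(np.median(reads))

sketch = CountSketch()
sketch.add(1, 100)
sketch.add(2, 3)
```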
arXiv Detail & Related papers (2023-06-15T17:30:03Z)
- When approximate design for fast homomorphic computation provides differential privacy guarantees [0.08399688944263842]
Differential privacy (DP) and cryptographic primitives are popular countermeasures against privacy attacks.
In this paper, we design SHIELD, a probabilistic approximation algorithm for the argmax operator.
Although SHIELD could have other applications, here we focus on one setting and seamlessly integrate it into the SPEED collaborative training framework.
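For context, the standard differentially private approximation of argmax, report-noisy-max with Gumbel noise (equivalent to the exponential mechanism), takes a few lines; SHIELD's actual design is driven by homomorphic-encryption constraints and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def noisy_argmax(scores, eps, sensitivity=1.0):
    """Add Gumbel(2*sensitivity/eps) noise to each score and take the
    argmax; the winner is then distributed exactly as the exponential
    mechanism, i.e. index i wins with prob proportional to
    exp(eps * scores[i] / (2 * sensitivity))."""
    noise = rng.gumbel(scale=2 * sensitivity / eps, size=len(scores))
    return int(np.argmax(np.asarray(scores) + noise))

winner = noisy_argmax([0.0, 0.0, 100.0], eps=5.0)
```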
arXiv Detail & Related papers (2023-04-06T09:38:01Z)
- ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated Learning Based on Coded Computing and Vector Commitment [90.60126724503662]
ByzSecAgg is an efficient secure aggregation scheme for federated learning.
ByzSecAgg is protected against Byzantine attacks and privacy leakages.
arXiv Detail & Related papers (2023-02-20T11:15:18Z)
- On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Learning, compression, and leakage: Minimising classification error via meta-universal compression principles [87.054014983402]
A promising group of compression techniques for learning scenarios is normalised maximum likelihood (NML) coding.
Here we consider an NML-based decision strategy for supervised classification problems, and show that it attains PAC learning when applied to a wide variety of models.
We show that the misclassification rate of our method is upper bounded by the maximal leakage, a recently proposed metric to quantify the potential of data leakage in privacy-sensitive scenarios.
arXiv Detail & Related papers (2020-10-14T20:03:58Z)
- Differentially Private Clustering: Tight Approximation Ratios [57.89473217052714]
We give efficient differentially private algorithms for basic clustering problems.
Our results imply an improved algorithm for the Sample and Aggregate privacy framework.
One of the tools used in our 1-Cluster algorithm can be employed to get a faster quantum algorithm for ClosestPair in a moderate number of dimensions.
arXiv Detail & Related papers (2020-08-18T16:22:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.