PREAMBLE: Private and Efficient Aggregation of Block Sparse Vectors and Applications
- URL: http://arxiv.org/abs/2503.11897v1
- Date: Fri, 14 Mar 2025 21:58:15 GMT
- Title: PREAMBLE: Private and Efficient Aggregation of Block Sparse Vectors and Applications
- Authors: Hilal Asi, Vitaly Feldman, Hannah Keller, Guy N. Rothblum, Kunal Talwar
- Abstract summary: We revisit the problem of secure aggregation of high-dimensional vectors in a two-server system such as Prio. PREAMBLE is a novel extension of distributed point functions that enables communication- and computation-efficient aggregation.
- Score: 42.968231105076335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit the problem of secure aggregation of high-dimensional vectors in a two-server system such as Prio. These systems are typically used to aggregate vectors such as gradients in private federated learning, where the aggregate itself is protected via noise addition to ensure differential privacy. Existing approaches require communication scaling with the dimensionality, and thus limit the dimensionality of vectors one can efficiently process in this setup. We propose PREAMBLE: Private Efficient Aggregation Mechanism for BLock-sparse Euclidean Vectors. PREAMBLE is a novel extension of distributed point functions that enables communication- and computation-efficient aggregation of block-sparse vectors, which are sparse vectors where the non-zero entries occur in a small number of clusters of consecutive coordinates. We then show that PREAMBLE can be combined with random sampling and privacy amplification by sampling results, to allow asymptotically optimal privacy-utility trade-offs for vector aggregation, at a fraction of the communication cost. When coupled with recent advances in numerical privacy accounting, our approach incurs a negligible overhead in noise variance, compared to the Gaussian mechanism used with Prio.
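The abstract describes the setting only at a high level. As a rough illustration, and not the paper's protocol, the sketch below shows what a block-sparse vector looks like and how a naive two-server additive secret sharing of it would work; the modulus, helper names, and parameters are all hypothetical. The point of PREAMBLE is precisely to avoid the O(dimension) per-client communication this naive sharing incurs, by exploiting block sparsity via an extension of distributed point functions.

```python
import secrets

import numpy as np

Q = 2**61 - 1  # hypothetical modulus for additive secret sharing


def block_sparse_vector(dim, blocks):
    """Build a block-sparse vector: non-zero entries confined to a few
    runs of consecutive coordinates (maps block start index -> values)."""
    v = np.zeros(dim, dtype=np.int64)
    for start, values in blocks.items():
        v[start:start + len(values)] = values
    return v


def additive_shares(v, q=Q):
    """Naive two-server additive secret sharing: each server alone sees a
    uniformly random vector; the two shares sum to v mod q. Communication
    here is O(dim) per client, which is the cost PREAMBLE avoids."""
    r = np.array([secrets.randbelow(q) for _ in range(len(v))], dtype=object)
    share_a = r
    share_b = (v.astype(object) - r) % q
    return share_a, share_b


# Each server sums the shares it receives; combining the two per-server
# totals mod q reconstructs only the aggregate, not individual vectors.
dim = 16
clients = [
    block_sparse_vector(dim, {2: [5, 7, 1]}),
    block_sparse_vector(dim, {8: [3, 3]}),
]
totals = [np.zeros(dim, dtype=object), np.zeros(dim, dtype=object)]
for v in clients:
    a, b = additive_shares(v)
    totals[0] = (totals[0] + a) % Q
    totals[1] = (totals[1] + b) % Q
aggregate = (totals[0] + totals[1]) % Q
print(aggregate)  # coordinate-wise sum of the client vectors
```

In the paper's setting the aggregate would additionally be protected with noise (e.g., a Gaussian mechanism) before release; that step is omitted here.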
Related papers
- Scalable Private Partition Selection via Adaptive Weighting [66.09199304818928]
In a private set union, users hold subsets of items from an unbounded universe. The goal is to output as many items as possible from the union of the users' sets while maintaining user-level differential privacy. We propose an algorithm for this problem, MaximumDegree (MAD), which adaptively reroutes weight from items with weight far above the threshold needed for privacy to items with smaller weight.
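For context, the sketch below shows the standard non-adaptive weight-and-threshold baseline that the summary above alludes to; it is not MAD itself (the adaptive rerouting of excess weight is omitted), and the threshold constant and function names are illustrative assumptions.

```python
import numpy as np


def basic_private_partition_selection(user_sets, eps, delta, rng=None):
    """Non-adaptive baseline (not MAD): each user spreads one unit of weight
    equally over their items, Laplace noise is added to each item's total,
    and only items exceeding a privacy threshold are released."""
    rng = rng or np.random.default_rng()
    weights = {}
    for items in user_sets:
        if not items:
            continue
        w = 1.0 / len(items)  # per-user weight budget of 1
        for item in items:
            weights[item] = weights.get(item, 0.0) + w
    # One user changes any single item's weight by at most 1.
    scale = 1.0 / eps
    # Threshold calibration is illustrative; exact constants depend on the analysis.
    threshold = 1.0 + scale * np.log(1.0 / (2.0 * delta))
    released = []
    for item, w in weights.items():
        if w + rng.laplace(0.0, scale) > threshold:
            released.append(item)
    return released


print(basic_private_partition_selection(
    [{"a", "b"}, {"a"}, {"a", "c"}], eps=1.0, delta=1e-6))
```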
arXiv Detail & Related papers (2025-02-13T01:27:11Z)
- PINE: Efficient Norm-Bound Verification for Secret-Shared Vectors [25.30406294459483]
A two-server system such as PRIO allows for scalable aggregation of secret-shared vectors.
Existing protocols for ensuring bounded-norm contributions either incur a large communication overhead, or only allow for approximate verification of the norm bound.
We propose Private Inexpensive Norm Enforcement (PINE), a new protocol that allows exact norm verification with little communication overhead.
arXiv Detail & Related papers (2023-11-16T23:54:21Z)
- ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated Learning Based on Coded Computing and Vector Commitment [90.60126724503662]
ByzSecAgg is an efficient secure aggregation scheme for federated learning.
ByzSecAgg is protected against Byzantine attacks and privacy leakages.
arXiv Detail & Related papers (2023-02-20T11:15:18Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Task-Oriented Sensing, Computation, and Communication Integration for Multi-Device Edge AI [108.08079323459822]
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI-model split inference and integrated sensing and communication (ISAC).
We measure the inference accuracy by adopting an approximate but tractable metric, namely discriminant gain.
arXiv Detail & Related papers (2022-07-03T06:57:07Z)
- Differential Secrecy for Distributed Data and Applications to Robust Differentially Secure Vector Summation [32.004283989604154]
We present a protocol for vector summation that verifies that the Euclidean norm of each contribution is approximately bounded.
Unlike SMC algorithms that inevitably cast integers to elements of a large finite field, our algorithms work over integers/reals, which may allow for additional efficiencies.
arXiv Detail & Related papers (2022-02-22T02:06:42Z)
- Shuffle Private Stochastic Convex Optimization [20.379311972125034]
In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler.
The shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy.
We present interactive shuffle protocols for convex optimization.
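As a minimal, non-interactive sketch of the shuffle model described above (a local randomizer followed by a permuting shuffler), the snippet below uses randomized response on bits; it does not capture the paper's interactive protocols for convex optimization, and all names and parameters are illustrative.

```python
import random


def local_randomizer(bit, p_flip):
    """Randomized response: flip the user's bit with probability p_flip."""
    return bit ^ (random.random() < p_flip)


def shuffler(messages):
    """The trusted shuffler only permutes messages, hiding which user sent
    which one; privacy amplification comes from this anonymity."""
    shuffled = list(messages)
    random.shuffle(shuffled)
    return shuffled


user_bits = [1, 0, 1, 1, 0, 0, 1]
p_flip = 0.25
reports = shuffler(local_randomizer(b, p_flip) for b in user_bits)
# The analyzer sees only the shuffled multiset and debiases the mean.
est = (sum(reports) / len(reports) - p_flip) / (1 - 2 * p_flip)
print(est)
```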
arXiv Detail & Related papers (2021-06-17T20:44:00Z)
- Composably secure data processing for Gaussian-modulated continuous variable quantum key distribution [58.720142291102135]
Continuous-variable quantum key distribution (QKD) employs the quadratures of a bosonic mode to establish a secret key between two remote parties.
We consider a protocol with homodyne detection in the general setting of composable finite-size security.
In particular, we analyze the high signal-to-noise regime which requires the use of high-rate (non-binary) low-density parity check codes.
arXiv Detail & Related papers (2021-03-30T18:02:55Z)
- On InstaHide, Phase Retrieval, and Sparse Matrix Factorization [32.93116534310763]
InstaHide is a scheme for preserving the security of private datasets in the context of distributed learning.
A salient question is whether this scheme is secure in any provable sense.
We show that the answer appears to be quite subtle and closely related to the average-case complexity of a new multi-task, missing-data version of the classic problem of phase retrieval.
arXiv Detail & Related papers (2020-11-23T02:47:08Z)
- Privacy Amplification via Random Check-Ins [38.72327434015975]
Differentially Private Stochastic Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data.
In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL), wherein the data is distributed among many devices (clients).
Our main contribution is the random check-in distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client.
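A toy sketch of the participation pattern described above is given below: each client independently and locally decides whether to check in and, if so, picks a random time slot; the server then runs one step per slot using only the clients that checked into it. The helper name, parameters, and slot structure are illustrative assumptions, and the privacy accounting is not shown.

```python
import random


def random_checkin_round(clients, checkin_prob, num_slots, rng=random):
    """Toy random check-in schedule: participation decisions are made
    locally and independently by each client, which is what drives the
    privacy amplification analyzed in the paper (not reproduced here)."""
    slots = [[] for _ in range(num_slots)]
    for client in clients:
        if rng.random() < checkin_prob:        # local, independent decision
            slots[rng.randrange(num_slots)].append(client)
    return slots


schedule = random_checkin_round(clients=list(range(20)),
                                checkin_prob=0.3, num_slots=5)
for t, participants in enumerate(schedule):
    print(f"step {t}: clients {participants}")
```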
arXiv Detail & Related papers (2020-07-13T18:14:09Z)