Information Theoretic Secure Aggregation with User Dropouts
- URL: http://arxiv.org/abs/2101.07750v1
- Date: Tue, 19 Jan 2021 17:43:48 GMT
- Title: Information Theoretic Secure Aggregation with User Dropouts
- Authors: Yizhou Zhao, Hua Sun
- Abstract summary: A server wishes to learn, and only learn, the sum of the inputs of a number of users while some users may drop out (i.e., may not respond).
We consider the following minimal two-round model of secure aggregation.
- Score: 56.39267027829569
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the robust secure aggregation problem, a server wishes to learn and only
learn the sum of the inputs of a number of users while some users may drop out
(i.e., may not respond). The identity of the dropped users is not known a
priori and the server needs to securely recover the sum of the remaining
surviving users. We consider the following minimal two-round model of secure
aggregation. Over the first round, any set of no fewer than $U$ users out of
$K$ users respond to the server and the server wants to learn the sum of the
inputs of all responding users. The remaining users are viewed as dropped. Over
the second round, any set of no fewer than $U$ users of the surviving users
respond (i.e., dropouts are still possible over the second round) and from the
information obtained from the surviving users over the two rounds, the server
can decode the desired sum. The security constraint is that even if the server
colludes with any $T$ users and the messages from the dropped users are
received by the server (e.g., delayed packets), the server is not able to infer
any additional information beyond the sum in the information theoretic sense.
For this information theoretic secure aggregation problem, we characterize the
optimal communication cost. When $U \leq T$, secure aggregation is not
feasible, and when $U > T$, to securely compute one symbol of the sum, the
minimum number of symbols sent from each user to the server is $1$ over the
first round, and $1/(U-T)$ over the second round.
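As a minimal illustration of the zero-sum masking idea that underlies information-theoretic secure aggregation (this is a simplified sketch, not the paper's two-round, dropout-resilient protocol), each user adds a random mask to its input; the masks are constructed to sum to zero, so each individual message is uniformly distributed while the server still recovers the exact sum. The field modulus `P` and the centralized mask generation below are illustrative assumptions; in an actual protocol the masks would be derived from randomness shared among the users, with extra structure to tolerate dropouts and collusion with up to $T$ users.

```python
import secrets

P = 2**61 - 1  # illustrative prime field modulus

def make_masks(k):
    """Generate k random field elements that sum to zero mod P."""
    masks = [secrets.randbelow(P) for _ in range(k - 1)]
    masks.append((-sum(masks)) % P)
    return masks

def aggregate(inputs):
    """Server-side view: sum of masked messages equals sum of inputs mod P."""
    masks = make_masks(len(inputs))
    # Each masked message is uniform on its own, revealing nothing about x.
    messages = [(x + m) % P for x, m in zip(inputs, masks)]
    # The masks cancel, so the server learns only the sum.
    return sum(messages) % P

assert aggregate([5, 17, 42]) == 64
```

In the paper's model this cancellation must survive dropouts: the second round lets the surviving users help the server remove the masks of the dropped users, which is where the additional $1/(U-T)$ symbols per user come from.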
Related papers
- $\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning [6.977111770337479]
We introduce One-shot Private Aggregation ($\mathsf{OPA}$) where clients speak only once (or even choose not to speak) per aggregation evaluation.
Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation.
$\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions.
arXiv Detail & Related papers (2024-10-29T17:50:11Z)
- Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages [63.366380571397]
We study the problem of private vector mean estimation in the shuffle model of privacy where $n$ users each have a unit vector $v^{(i)} \in \mathbb{R}^d$.
We propose a new multi-message protocol that achieves the optimal error using $\tilde{\mathcal{O}}\left(\min(n\varepsilon^2, d)\right)$ messages per user.
arXiv Detail & Related papers (2024-04-16T00:56:36Z)
- Continual Mean Estimation Under User-Level Privacy [21.513703308495774]
We consider the problem of continually releasing an estimate of the population mean of a stream of samples that is user-level differentially private (DP).
We provide an algorithm that outputs a mean estimate at every time instant $t$ such that the overall release is user-level $\varepsilon$-DP.
arXiv Detail & Related papers (2022-12-20T03:44:25Z)
- Modeling Attrition in Recommender Systems with Departing Bandits [84.85560764274399]
We propose a novel multi-armed bandit setup that captures policy-dependent horizons.
We first address the case where all users share the same type, demonstrating that a recent UCB-based algorithm is optimal.
We then move to the more challenging case, where users are divided into two types.
arXiv Detail & Related papers (2022-03-25T02:30:54Z)
- SwiftAgg+: Achieving Asymptotically Optimal Communication Load in Secure Aggregation for Federated Learning [83.94234859890402]
SwiftAgg+ is a novel secure aggregation protocol for federated learning systems.
A central server aggregates local models of $N \in \mathbb{N}$ distributed users, each of size $L \in \mathbb{N}$, trained on their local data, in a privacy-preserving manner.
arXiv Detail & Related papers (2022-03-24T13:12:23Z)
- SwiftAgg: Communication-Efficient and Dropout-Resistant Secure Aggregation for Federated Learning with Worst-Case Security Guarantees [83.94234859890402]
We propose SwiftAgg, a novel secure aggregation protocol for federated learning systems.
A central server aggregates local models of $N$ distributed users, each of size $L$, trained on their local data.
SwiftAgg significantly reduces the communication overheads without any compromise on security.
arXiv Detail & Related papers (2022-02-08T22:08:56Z)
- Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms [75.17357040707347]
Motivated by online recommendation systems, we propose the problem of finding the optimal policy in contextual bandits.
The goal is to robustly learn the policy that maximizes rewards for good users with as few user interactions as possible.
We show we can achieve an $\tilde{O}(\min(S,A) \cdot \alpha/\epsilon^2)$ upper bound by employing efficient robust mean estimators.
arXiv Detail & Related papers (2022-01-30T01:45:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.