Differentially Private Secure Multiplication: Hiding Information in the
Rubble of Noise
- URL: http://arxiv.org/abs/2309.16105v1
- Date: Thu, 28 Sep 2023 02:13:13 GMT
- Title: Differentially Private Secure Multiplication: Hiding Information in the
Rubble of Noise
- Authors: Viveck R. Cadambe, Ateet Devulapalli, Haewon Jeong, Flavio P. Calmon
- Abstract summary: We consider the problem of private distributed multi-party multiplication.
It is well-established that Shamir secret-sharing coding strategies can enable perfect information-theoretic privacy in distributed computation.
- Score: 7.767656876470667
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of private distributed multi-party multiplication. It
is well-established that Shamir secret-sharing coding strategies can enable
perfect information-theoretic privacy in distributed computation via the
celebrated algorithm of Ben Or, Goldwasser and Wigderson (the "BGW algorithm").
However, perfect privacy and accuracy require an honest majority, that is, $N
\geq 2t+1$ compute nodes are required to ensure privacy against any $t$
colluding adversarial nodes. By allowing for some controlled amount of
information leakage and approximate multiplication instead of exact
multiplication, we study coding schemes for the setting where the number of
honest nodes can be a minority, that is, $N < 2t+1$. We develop a tight
characterization of the privacy-accuracy trade-off for cases where $N < 2t+1$ by
measuring information leakage using differential privacy instead of perfect
privacy, and using the mean squared error metric for accuracy. A novel
technical aspect is an intricately layered noise distribution that merges ideas
from differential privacy and Shamir secret-sharing at different layers.
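To make the sharing-and-multiplication mechanics concrete, here is a minimal real-valued Python sketch of Shamir sharing and share-wise multiplication. All names are illustrative; the placeholder Gaussian perturbation at the end only gestures at the paper's layered noise construction, which is considerably more intricate.

```python
import random

def share(secret, t, xs):
    # Shamir-share `secret` with a random degree-t polynomial p where
    # p(0) = secret; the node at location x receives the evaluation p(x).
    # (Real-valued stand-in; finite-field details are omitted.)
    coeffs = [secret] + [random.gauss(0.0, 1.0) for _ in range(t)]
    return [sum(c * x**k for k, c in enumerate(coeffs)) for x in xs]

def reconstruct(points):
    # Lagrange interpolation at x = 0 from (x, y) pairs.
    total = 0.0
    for xi, yi in points:
        li = 1.0
        for xj, _ in points:
            if xj != xi:
                li *= xj / (xj - xi)
        total += yi * li
    return total

t = 1
xs = [1.0, 2.0, 3.0]                              # N = 3 = 2t + 1 nodes
a_sh = share(5.0, t, xs)
b_sh = share(3.0, t, xs)
ab_sh = [sa * sb for sa, sb in zip(a_sh, b_sh)]   # degree-2t shares of a*b
print(reconstruct(list(zip(xs, ab_sh))))          # ~15.0, exact with 2t+1 nodes

# With N < 2t + 1 honest nodes the degree-2t product polynomial cannot be
# interpolated exactly; the paper instead perturbs what nodes release,
# trading differential-privacy leakage for mean squared error.  A crude
# placeholder (NOT the paper's layered noise distribution):
released = [s + random.gauss(0.0, 0.1) for s in ab_sh[:2]]
```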
Related papers
- Differential Privacy on Trust Graphs [54.55190841518906]
We study differential privacy (DP) in a multi-party setting where each party only trusts a (known) subset of the other parties with its data.
We give a DP algorithm for aggregation with a much better privacy-utility trade-off than in the well-studied local model of DP.
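A toy rendering of that setting, in which a party reports in the clear to an aggregator it trusts and falls back to local Laplace noise otherwise; the function names and the mechanism itself are assumptions for illustration, not the paper's algorithm.

```python
import math
import random

def laplace(scale):
    # Inverse-CDF sampling of a zero-mean Laplace(scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def trust_graph_sum(values, trusts, aggregator, epsilon, sensitivity=1.0):
    # Party i contributes its raw value if it trusts the aggregator, and
    # a locally noised value (plain local DP) otherwise.
    total = 0.0
    for i, v in enumerate(values):
        if aggregator in trusts[i]:
            total += v
        else:
            total += v + laplace(sensitivity / epsilon)
    return total
```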
arXiv Detail & Related papers (2024-10-15T20:31:04Z)
- Differentially Private Decentralized Learning with Random Walks [15.862152253607496]
We characterize the privacy guarantees of decentralized learning with random walk algorithms, where a model is updated by traveling from one node to another along the edges of a communication graph.
Our results reveal that random walk algorithms tend to yield better privacy guarantees than gossip algorithms for nodes that are close to each other.
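A sketch of that token-passing pattern with a scalar model for brevity; the clipped, Gaussian-noised step is the generic DP-SGD recipe and the names are hypothetical, not taken from the paper.

```python
import random

def random_walk_dp_sgd(model, graph, grad_at, steps, lr, sigma, clip):
    # `graph` maps node -> list of neighbors; `grad_at(node, model)` stands
    # for the local loss gradient held at that node.  The model walks along
    # edges and takes one clipped, noised gradient step per visit.
    node = random.choice(list(graph))
    for _ in range(steps):
        g = grad_at(node, model)
        g = g / max(1.0, abs(g) / clip)        # clip magnitude to <= clip
        model -= lr * (g + random.gauss(0.0, sigma * clip))
        node = random.choice(graph[node])      # hop to a random neighbor
    return model
```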
arXiv Detail & Related papers (2024-02-12T08:16:58Z)
- Preserving Node-level Privacy in Graph Neural Networks [8.823710998526705]
We propose a solution that addresses the issue of node-level privacy in Graph Neural Networks (GNNs).
Our protocol consists of two main components: 1) a sampling routine called HeterPoisson, which employs a specialized node sampling strategy and a series of tailored operations to generate a batch of sub-graphs with desired properties, and 2) a randomization routine that utilizes symmetric Laplace noise instead of the commonly used Gaussian noise.
Our protocol enables GNN learning with good performance, as demonstrated by experiments on five real-world datasets.
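The noise substitution itself is easy to state in code; a minimal sketch (HeterPoisson's sub-graph sampling is not reproduced here).

```python
import math
import random

def laplace_perturb(vec, scale):
    # Add i.i.d. zero-mean symmetric Laplace noise to each coordinate, in
    # place of the Gaussian noise used in standard DP-SGD-style updates.
    out = []
    for v in vec:
        u = random.random() - 0.5
        out.append(v - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u)))
    return out
```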
arXiv Detail & Related papers (2023-11-12T16:21:29Z)
- Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation [79.64626707978418]
We propose using link local differential privacy over decentralized nodes to train graph neural networks.
Our approach spends the privacy budget separately on links and degrees of the graph for the server to better denoise the graph topology.
Our approach outperforms existing methods in terms of accuracy under varying privacy budgets.
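A rough client-side sketch of spending separate budgets on links and degrees; the parameter names are assumptions, and Blink's server-side Bayesian denoising is not shown.

```python
import math
import random

def report_adjacency(adj_row, eps_links, eps_degree):
    # Randomized response on each adjacency bit with budget eps_links, plus
    # a Laplace-noised degree (sensitivity 1) with budget eps_degree.
    p_keep = math.exp(eps_links) / (math.exp(eps_links) + 1.0)
    flipped = [b if random.random() < p_keep else 1 - b for b in adj_row]
    u = random.random() - 0.5
    lap = -(1.0 / eps_degree) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return flipped, sum(adj_row) + lap
```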
arXiv Detail & Related papers (2023-09-06T17:53:31Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it has limitations when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging [20.39986955578245]
We introduce pairwise network differential privacy, a relaxation of Local Differential Privacy (LDP).
We derive a differentially private decentralized optimization algorithm that alternates between local gradient descent steps and gossip averaging.
Our results show that our algorithms amplify privacy guarantees as a function of the distance between nodes in the graph.
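One round of that alternation, sketched for scalar node states; this is illustrative, not Muffliato's exact recursion or noise calibration.

```python
import random

def noisy_gossip_round(x, grads, W, lr, sigma):
    # Each node takes a Gaussian-noised local gradient step, then the
    # network mixes results through a doubly stochastic gossip matrix W
    # (W[i][j] > 0 only where i and j are neighbors in the graph).
    n = len(x)
    stepped = [x[i] - lr * (grads[i] + random.gauss(0.0, sigma)) for i in range(n)]
    return [sum(W[i][j] * stepped[j] for j in range(n)) for i in range(n)]
```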
arXiv Detail & Related papers (2022-06-10T13:32:35Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
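The intuition is that an example whose gradient stays below the clipping threshold injects less signal into the noised update and hence is better hidden; a back-of-envelope sketch, not the paper's accountant.

```python
def individual_noise_multipliers(grad_norms, clip, sigma):
    # In DP-SGD with clipping threshold `clip` and noise sigma * clip, an
    # example with gradient norm g <= clip has sensitivity g, so its
    # effective noise multiplier is sigma * clip / g: larger means a
    # stronger individual guarantee than the worst-case bound.
    return [sigma * clip / max(min(g, clip), 1e-12) for g in grad_norms]
```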
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
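User-level protection can be illustrated with the standard collapse-and-clip device, which bounds any single user's influence before noising; this is a generic sketch, not the paper's $\tau$-dependent query mechanism.

```python
import random

def user_level_private_mean(users, clip, sigma):
    # Collapse each user's samples to one average, clip it to [-clip, clip]
    # so replacing an entire user moves the estimate by at most 2*clip/n,
    # then add Gaussian noise scaled to that user-level sensitivity.
    n = len(users)
    contribs = [max(-clip, min(clip, sum(s) / len(s))) for s in users]
    return sum(contribs) / n + random.gauss(0.0, sigma * 2.0 * clip / n)
```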
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- Robust and Differentially Private Mean Estimation [40.323756738056616]
Differential privacy has emerged as a standard requirement in a variety of applications ranging from the U.S. Census to data collected in commercial devices.
An increasing number of such databases consist of data from multiple sources, not all of which can be trusted.
This leaves existing private analyses vulnerable to attacks by an adversary who injects corrupted data.
arXiv Detail & Related papers (2021-02-18T05:02:49Z)
- BUDS: Balancing Utility and Differential Privacy by Shuffling [3.618133010429131]
Balancing utility and differential privacy by shuffling, or BUDS, is an approach towards crowd-sourced, statistical databases.
A new algorithm is proposed using one-hot encoding and iterative shuffling with loss estimation and risk minimization techniques.
In empirical tests of balanced utility and privacy, BUDS produces $\epsilon = 0.02$, which is a very promising result.
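The encode-and-shuffle core is simple to sketch; the loss-estimation and risk-minimization components are omitted, and the names are illustrative.

```python
import random

def one_hot_shuffle(records, categories, rounds=3):
    # One-hot encode each record, then shuffle the batch repeatedly so a
    # curator cannot link a reported vector back to its owner.
    onehot = [[1 if r == c else 0 for c in categories] for r in records]
    for _ in range(rounds):
        random.shuffle(onehot)
    return onehot

# e.g. one_hot_shuffle(["a", "b", "a"], ["a", "b", "c"])
```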
arXiv Detail & Related papers (2020-06-07T11:39:13Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)