Privacy Amplification via Shuffled Check-Ins
- URL: http://arxiv.org/abs/2206.03151v2
- Date: Tue, 4 Jul 2023 06:28:47 GMT
- Title: Privacy Amplification via Shuffled Check-Ins
- Authors: Seng Pei Liew, Satoshi Hasegawa, Tsubasa Takahashi
- Abstract summary: We study a protocol for distributed computation called shuffled check-in.
It achieves strong privacy guarantees without requiring any further trust assumptions beyond a trusted shuffler.
We show that shuffled check-in achieves tight privacy guarantees through privacy amplification.
- Score: 2.3333090554192615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study a protocol for distributed computation called shuffled check-in,
which achieves strong privacy guarantees without requiring any further trust
assumptions beyond a trusted shuffler. Unlike most existing work, shuffled
check-in allows clients to make independent and random decisions to participate
in the computation, removing the need for server-initiated subsampling.
Leveraging differential privacy, we show that shuffled check-in achieves tight
privacy guarantees through privacy amplification, with a novel analysis based
on Rényi differential privacy that improves privacy accounting over
existing work. We also introduce a numerical approach to track the privacy of
generic shuffling mechanisms, including the Gaussian mechanism, which is the first
evaluation of a generic mechanism under the distributed setting within the
local/shuffle model in the literature. Empirical studies are also given to
demonstrate the efficacy of the proposed approach.
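Below is a minimal sketch of the shuffled check-in pipeline described in the abstract, under assumed details: each client flips an independent coin with check-in probability p (no server-initiated subsampling), participants apply a local Gaussian randomizer to their clipped value, and a trusted shuffler randomly permutes the batch before it reaches the server. The function name and the parameters p, clip, and sigma are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffled_check_in(values, p=0.1, clip=1.0, sigma=2.0):
    """Shuffled check-in sketch: Bernoulli self-selection, local Gaussian
    randomization, then a uniformly random shuffle. The parameters p, clip,
    and sigma are illustrative, not the paper's choices."""
    # Each client independently decides to check in (no server subsampling).
    checked_in = rng.random(len(values)) < p
    reports = []
    for v in np.asarray(values, dtype=float)[checked_in]:
        v = np.clip(v, -clip, clip)                 # bound each client's sensitivity
        reports.append(v + rng.normal(0.0, sigma))  # local Gaussian noise
    reports = np.array(reports)
    rng.shuffle(reports)  # trusted shuffler hides which report came from whom
    return reports

# The server only sees the shuffled, noisy reports of the checked-in clients.
print(shuffled_check_in(rng.uniform(-1, 1, size=1000)).shape)
```

The amplification the paper analyzes comes from the two effects visible in the sketch: the server cannot tell who checked in, and the shuffle hides which report belongs to which participant.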
Related papers
- Auditing $f$-Differential Privacy in One Run [43.34594422920125]
Empirical auditing has emerged as a means of catching some of the flaws in the implementation of privacy-preserving algorithms.
We present a tight and efficient auditing procedure and analysis that can effectively assess the privacy of mechanisms.
arXiv Detail & Related papers (2024-10-29T17:02:22Z)
- Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314]
The shuffle model of Differential Privacy (DP) is an enhanced privacy protocol which introduces an intermediate trusted server between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the local randomized data.
This work focuses on deriving the central privacy bound for a more practical setting where personalized local privacy is required by each user.
arXiv Detail & Related papers (2024-07-25T16:11:56Z)
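As a hedged illustration of the personalized setting described in the entry above (not the paper's analysis), the sketch below runs binary randomized response where each user selects an individual local epsilon and the batch is then shuffled; deriving the amplified central bound from such heterogeneous reports is the paper's contribution and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def personalized_rr_shuffle(bits, epsilons):
    """Binary randomized response with a per-user epsilon_i, followed by
    a uniform shuffle (the trusted intermediary). Illustrative sketch."""
    bits = np.asarray(bits)
    eps = np.asarray(epsilons)
    keep_prob = np.exp(eps) / (np.exp(eps) + 1)   # P[report truthfully]
    truthful = rng.random(bits.size) < keep_prob
    reports = np.where(truthful, bits, 1 - bits)  # flip otherwise
    rng.shuffle(reports)                          # anonymize the batch
    return reports

bits = rng.integers(0, 2, size=1000)
epsilons = rng.uniform(0.5, 3.0, size=1000)       # each user picks their own eps_i
print(personalized_rr_shuffle(bits, epsilons).mean())
```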
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
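For concreteness, here is the textbook primitive the entry above refers to: a Poisson-subsampled Gaussian mechanism, in which each record enters the batch independently with probability q before noise is added. The parameters are illustrative; the mechanism-specific and group-privacy accounting is the paper's subject and is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)

def poisson_subsampled_gaussian_sum(values, q=0.01, clip=1.0, sigma=4.0):
    """Poisson subsampling: each record enters the batch independently with
    probability q, then a Gaussian mechanism releases the clipped sum.
    Illustrative parameters; the amplified guarantee depends on (q, sigma)."""
    values = np.asarray(values, dtype=float)
    included = rng.random(values.size) < q         # the subsampling step
    s = np.clip(values[included], -clip, clip).sum()
    return s + rng.normal(0.0, sigma * clip)       # noise calibrated to the clip

print(poisson_subsampled_gaussian_sum(rng.uniform(-1, 1, size=10_000)))
```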
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
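A minimal sketch of the estimate-verify-release control flow summarized above. The estimator and verifier below are hypothetical placeholders; the paper's actual verifier, and how it preserves the end-to-end guarantee, is the contribution and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_verify_release(mechanism, data, estimate_eps, verify, fallback=None):
    """Sketch of the estimate-verify-release (EVR) control flow.
    `estimate_eps` and `verify` are hypothetical placeholders: the first
    guesses the mechanism's privacy parameter, the second checks (e.g., by
    Monte Carlo accounting) that the guess really holds."""
    eps_hat = estimate_eps(mechanism)        # 1. estimate the privacy parameter
    if verify(mechanism, eps_hat):           # 2. verify the estimate holds
        return mechanism(data), eps_hat      # 3. release output with the guarantee
    return fallback, None                    # verification failed: withhold output

# Toy usage with trivially satisfied placeholders.
noisy_mean = lambda d: d.mean() + rng.normal(0.0, 0.1)
out, eps = estimate_verify_release(noisy_mean, rng.uniform(0, 1, 100),
                                   estimate_eps=lambda m: 1.0,
                                   verify=lambda m, e: True)
print(out, eps)
```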
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
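As background for the $f$-DP lens in the entry above: binary randomized response is a discrete-valued mechanism whose trade-off function is known in closed form, f_eps(a) = max(0, 1 - e^eps * a, e^{-eps} * (1 - a)). The sketch below just evaluates that known curve; the paper's tight guarantees for other discrete mechanisms are not reproduced.

```python
import numpy as np

def rr_tradeoff(alpha, eps):
    """Trade-off function f(alpha) of binary randomized response with
    parameter eps: the smallest type-II error of any test distinguishing the
    two inputs at type-I error alpha. For this mechanism it coincides with
    the pure eps-DP curve f_eps(a) = max(0, 1 - e^eps * a, e^-eps * (1 - a))."""
    a = np.asarray(alpha, dtype=float)
    return np.maximum.reduce([np.zeros_like(a),
                              1.0 - np.exp(eps) * a,
                              np.exp(-eps) * (1.0 - a)])

alphas = np.linspace(0.0, 1.0, 5)
print(rr_tradeoff(alphas, eps=1.0))  # lower-bounds any attacker's error curve
```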
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are not tight: they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
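The sketch below shows the generic attack-based auditing recipe this line of work builds on (not the paper's improved scheme): run a membership-inference attack, bound its true- and false-positive rates with Clopper-Pearson intervals, and report a high-confidence lower bound on epsilon.

```python
import numpy as np
from scipy.stats import beta

def audited_eps_lower_bound(tp, n_pos, fp, n_neg, confidence=0.95):
    """Generic attack-based auditing estimate (not the paper's exact scheme):
    from tp/n_pos true positives and fp/n_neg false positives of a
    membership-inference attack, take Clopper-Pearson bounds and report
    eps_hat = log(TPR_lower / FPR_upper), a high-confidence lower bound on
    the epsilon of any claimed (eps, 0)-DP mechanism."""
    a = 1.0 - confidence
    tpr_lo = beta.ppf(a / 2, tp, n_pos - tp + 1) if tp > 0 else 0.0
    fpr_hi = beta.ppf(1 - a / 2, fp + 1, n_neg - fp)
    return np.log(tpr_lo / fpr_hi) if tpr_lo > 0 and fpr_hi > 0 else 0.0

# Example: an attack that succeeds 900/1000 times with 100/1000 false alarms.
print(audited_eps_lower_bound(tp=900, n_pos=1000, fp=100, n_neg=1000))
```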
- Debugging Differential Privacy: A Case Study for Privacy Auditing [60.87570714269048]
We show that auditing can also be used to find flaws in (purportedly) differentially private schemes.
In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
arXiv Detail & Related papers (2022-02-24T17:31:08Z)
- Uniformity Testing in the Shuffle Model: Simpler, Better, Faster [0.0]
Uniformity testing, or testing whether independent observations are uniformly distributed, is the prototypical question in distribution testing.
In this work, we considerably simplify the analysis of the known uniformity testing algorithm in the shuffle model.
arXiv Detail & Related papers (2021-08-20T03:43:12Z)
- Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning [7.197592390105457]
We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy.
Motivated by optimization and the federated learning (FL) paradigm, we focus on the case where a small fraction of data samples are randomly sub-sampled in each round.
To obtain even stronger local privacy guarantees, we study this in the shuffle privacy model, where each client randomizes its response using a local differentially private (LDP) mechanism.
arXiv Detail & Related papers (2021-07-19T11:43:24Z)
- The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation [28.75998313625891]
We present a comprehensive end-to-end system, which appropriately discretizes the data and adds discrete Gaussian noise before performing secure aggregation.
Our theoretical guarantees highlight the complex tension between communication, privacy, and accuracy.
arXiv Detail & Related papers (2021-02-12T08:20:18Z)
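A hedged sketch of the client-side steps named in the entry above: fixed-point discretization followed by discrete Gaussian noise, with a plain modular sum standing in for the secure-aggregation protocol. The sampler enumerates a truncated support for simplicity and is not the paper's exact sampler; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def discrete_gaussian(sigma, size, tail=10):
    """Sample from the discrete Gaussian on the integers, proportional to
    exp(-x^2 / (2 sigma^2)), by enumerating a truncated support. A simple
    illustrative sampler, not the paper's method."""
    support = np.arange(-int(tail * sigma), int(tail * sigma) + 1)
    probs = np.exp(-support.astype(float) ** 2 / (2 * sigma ** 2))
    return rng.choice(support, size=size, p=probs / probs.sum())

def client_report(x, scale=100, sigma=10.0, modulus=2 ** 16):
    """Discretize a real value, add discrete Gaussian noise, and reduce
    modulo the aggregation modulus. Illustrative parameters."""
    z = int(round(x * scale))                      # fixed-point discretization
    z += int(discrete_gaussian(sigma, size=1)[0])  # discrete Gaussian noise
    return z % modulus                             # what secure aggregation sums

# A plain modular sum stands in for the secure-aggregation protocol here.
reports = [client_report(x) for x in rng.uniform(-1, 1, size=100)]
print(sum(reports) % 2 ** 16)
```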
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.