Cactus Mechanisms: Optimal Differential Privacy Mechanisms in the
Large-Composition Regime
- URL: http://arxiv.org/abs/2207.00420v1
- Date: Sat, 25 Jun 2022 20:05:50 GMT
- Title: Cactus Mechanisms: Optimal Differential Privacy Mechanisms in the
Large-Composition Regime
- Authors: Wael Alghamdi, Shahab Asoodeh, Flavio P. Calmon, Oliver Kosut, Lalitha
Sankar, Fei Wei
- Abstract summary: We study the design of optimal differential privacy mechanisms in the limit of a large number of compositions.
In this regime the best privacy mechanism is the one that minimizes the Kullback-Leibler divergence.
We show that our quantization approach can be arbitrarily close to an optimal mechanism.
- Score: 12.420941209631742
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most differential privacy mechanisms are applied (i.e., composed) numerous
times on sensitive data. We study the design of optimal differential privacy
mechanisms in the limit of a large number of compositions. As a consequence of
the law of large numbers, in this regime the best privacy mechanism is the one
that minimizes the Kullback-Leibler divergence between the conditional output
distributions of the mechanism given two different inputs. We formulate an
optimization problem to minimize this divergence subject to a cost constraint
on the noise. We first prove that additive mechanisms are optimal. Since the
optimization problem is infinite dimensional, it cannot be solved directly;
nevertheless, we quantize the problem to derive near-optimal additive
mechanisms that we call "cactus mechanisms" due to their shape. We show that
our quantization approach can be arbitrarily close to an optimal mechanism.
Surprisingly, for quadratic cost, the Gaussian mechanism is strictly
sub-optimal compared to this cactus mechanism. Finally, we provide numerical
results which indicate that the cactus mechanism outperforms the Gaussian mechanism
for a finite number of compositions.
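The design criterion above can be illustrated numerically. The sketch below is not code from the paper: it takes a plain additive Gaussian mechanism $M(x) = x + Z$ with $Z \sim N(0, \sigma^2)$, builds the conditional output densities for two inputs separated by a sensitivity $\Delta$, and checks a numerical KL-divergence estimate against the known closed form $\Delta^2 / (2\sigma^2)$; all parameter values are illustrative.

```python
import numpy as np

def gaussian_pdf(t, mu, sigma):
    return np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def kl_divergence(p, q, dx):
    # Numerical KL divergence D(p || q) for densities sampled on a common grid.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

# Additive mechanism M(x) = x + Z with Z ~ N(0, sigma^2),
# applied to two adjacent inputs x0 = 0 and x1 = Delta.
sigma, delta = 2.0, 1.0
t = np.linspace(-20.0, 20.0, 200001)
dx = t[1] - t[0]
p = gaussian_pdf(t, 0.0, sigma)    # output density given x0
q = gaussian_pdf(t, delta, sigma)  # output density given x1

kl_numeric = kl_divergence(p, q, dx)
kl_closed_form = delta ** 2 / (2 * sigma ** 2)  # known closed form for shifted Gaussians
print(kl_numeric, kl_closed_form)
```

Under the large-composition regime described above, a noise distribution achieving a smaller KL divergence at the same noise cost (as the abstract claims for the cactus mechanism under quadratic cost) would be preferred over the Gaussian.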
Related papers
- Privacy-Aware Randomized Quantization via Linear Programming [13.002534825666219]
We propose a family of quantization mechanisms that is unbiased and differentially private.
Our proposed mechanism can attain a better privacy-accuracy trade-off compared to baselines.
arXiv Detail & Related papers (2024-06-01T18:40:08Z)
- Near-Universally-Optimal Differentially Private Minimum Spanning Trees [0.0]
We prove that a simple differentially private mechanism for approximately releasing the minimum spanning tree is near-optimal in the sense of universal optimality for the $\ell_\infty$ neighbor relation.
We show that one may implement the exponential mechanism for MST in time, and that this results in universal near-optimality for both the $\ell_\infty$ and the $\ell_\infty$ neighbor relations.
arXiv Detail & Related papers (2024-04-23T13:39:25Z)
- Refined Mechanism Design for Approximately Structured Priors via Active Regression [50.71772232237571]
We consider the problem of a revenue-maximizing seller with a large number of items for sale to $n$ strategic bidders.
It is well-known that optimal and even approximately-optimal mechanisms for this setting are notoriously difficult to characterize or compute.
arXiv Detail & Related papers (2023-10-11T20:34:17Z)
- Truncated Laplace and Gaussian mechanisms of RDP [28.227024132603123]
The Laplace mechanism and the Gaussian mechanism are primary mechanisms in differential privacy.
Due to the infinite-range random variables they generate, the Laplace and Gaussian mechanisms may return values that are semantically impossible, such as negative numbers.
arXiv Detail & Related papers (2023-09-22T06:37:45Z)
- Less is More: Revisiting the Gaussian Mechanism for Differential Privacy [8.89234867625102]
Differential privacy via output perturbation has been a de facto standard for releasing query or computation results on sensitive data.
We identify that all existing Gaussian mechanisms suffer from the curse of full-rank covariance matrices.
arXiv Detail & Related papers (2023-06-04T04:14:38Z)
- On User-Level Private Convex Optimization [59.75368670035683]
We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees.
Our mechanism does not require any smoothness assumptions on the loss.
Our bounds are the first where the minimum number of users needed for user-level privacy has no dependence on the dimension.
arXiv Detail & Related papers (2023-05-08T17:47:28Z)
- Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z)
- On Scheduling Mechanisms Beyond the Worst Case [17.281501828240877]
We find that mechanism K achieves a smaller social cost than mechanism P on every input.
We also find that the average-case approximation ratio of mechanism P converges to the same constant.
arXiv Detail & Related papers (2022-04-14T20:57:50Z)
- Constrained mixers for the quantum approximate optimization algorithm [55.41644538483948]
We present a framework for constructing mixing operators that restrict the evolution to a subspace of the full Hilbert space.
We generalize the "XY"-mixer designed to preserve the subspace of "one-hot" states to the general case of subspaces given by a number of computational basis states.
Our analysis also leads to valid Trotterizations for the "XY"-mixer with fewer CX gates than previously known.
arXiv Detail & Related papers (2022-03-11T17:19:26Z)
- Understanding Interlocking Dynamics of Cooperative Rationalization [90.6863969334526]
Selective rationalization explains the prediction of complex neural networks by finding a small subset of the input that is sufficient to predict the neural model output.
We reveal a major problem with this cooperative rationalization paradigm: model interlocking.
We propose a new rationalization framework, called A2R, which introduces a third component into the architecture, a predictor driven by soft attention as opposed to selection.
arXiv Detail & Related papers (2021-10-26T17:39:18Z)
- From Checking to Inference: Actual Causality Computations as Optimization Problems [79.87179017975235]
We present a novel approach to formulate different notions of causal reasoning, over binary acyclic models, as optimization problems.
We show that both notions can be efficiently automated. Using models with more than $8000$ variables, checking completes in a matter of seconds, with MaxSAT outperforming ILP in many cases.
arXiv Detail & Related papers (2020-06-05T10:56:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.