Adversarially Robust Distributed Count Tracking via Partial Differential
Privacy
- URL: http://arxiv.org/abs/2311.00346v1
- Date: Wed, 1 Nov 2023 07:42:13 GMT
- Title: Adversarially Robust Distributed Count Tracking via Partial Differential
Privacy
- Authors: Zhongzheng Xiong, Xiaoyi Zhu, Zengfeng Huang
- Abstract summary: We study the distributed tracking model, also known as distributed functional monitoring.
This model involves $k$ sites each receiving a stream of items and communicating with the central server.
For count tracking, it is known that there is a $\sqrt{k}$ gap in communication between deterministic and randomized algorithms.
- Score: 17.43748116766233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the distributed tracking model, also known as distributed functional
monitoring. This model involves $k$ sites each receiving a stream of items and
communicating with the central server. The server's task is to track a function
of all items received thus far continuously, with minimum communication cost.
For count tracking, it is known that there is a $\sqrt{k}$ gap in communication
between deterministic and randomized algorithms. However, existing randomized
algorithms assume an "oblivious adversary" who constructs the entire input
streams before the algorithm starts. Here we consider adaptive adversaries who
can choose new items based on previous answers from the algorithm.
Deterministic algorithms are trivially robust to adaptive adversaries, while
randomized ones may not. Therefore, we investigate whether the $\sqrt{k}$
advantage of randomized algorithms is from randomness itself or the oblivious
adversary assumption. We provide an affirmative answer to this question by
giving a robust algorithm with optimal communication. Existing robustification
techniques do not yield optimal bounds due to the inherent challenges of the
distributed nature of the problem. To address this, we extend the differential
privacy framework by introducing "partial differential privacy" and proving a
new generalization theorem. This theorem may have broader applications beyond
robust count tracking, making it of independent interest.
Related papers
- Perturb-and-Project: Differentially Private Similarities and Marginals [73.98880839337873]
We revisit the input perturbations framework for differential privacy where noise is added to the input $A \in \mathcal{S}$.
We first design novel efficient algorithms to privately release pair-wise cosine similarities.
We derive a novel algorithm to compute $k$-way marginal queries over $n$ features.
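The input-perturbation idea from that paper can be sketched as follows. This is a simplified illustration, not the authors' algorithm: noise is added to the data once, and all pairwise cosine similarities are then computed from the noisy copy, so every released statistic inherits the privacy of the single perturbation. Calibrating the noise scale `sigma` to a concrete $(\epsilon, \delta)$ guarantee is omitted here.

```python
import numpy as np


def perturbed_cosine_similarities(X, sigma, rng):
    """Add Gaussian noise to each row of X once, then release all
    pairwise cosine similarities of the noisy rows. sigma must be
    calibrated to the desired privacy guarantee (not done here)."""
    noisy = X + rng.normal(scale=sigma, size=X.shape)
    # Normalize rows, then compute all pairwise dot products at once.
    norms = np.linalg.norm(noisy, axis=1, keepdims=True)
    unit = noisy / norms
    return unit @ unit.T
```

Because the noise is injected before any query is answered, the released similarity matrix can be post-processed freely without further privacy loss.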
arXiv Detail & Related papers (2024-06-07T12:07:16Z) - Replicability is Asymptotically Free in Multi-armed Bandits [45.729105054410745]
This work is motivated by the growing demand for reproducible machine learning.
In particular, we consider a replicable algorithm that ensures, with high probability, that the algorithm's sequence of actions is not affected by the randomness inherent in the dataset.
arXiv Detail & Related papers (2024-02-12T03:31:34Z) - Decentralized Multi-Task Online Convex Optimization Under Random Link
Failures [5.513958040574729]
We develop a robust decentralized saddle-point algorithm against random link failures with heterogeneous probabilities.
We extend our algorithm and analysis to the two-point bandit feedback scenario.
arXiv Detail & Related papers (2024-01-04T00:57:33Z) - Private Networked Federated Learning for Nonsmooth Objectives [7.278228169713637]
This paper develops a networked federated learning algorithm to solve nonsmooth objective functions.
We use the zero-concentrated differential privacy notion (zCDP) to guarantee the confidentiality of the participants.
We provide complete theoretical proof for the privacy guarantees and the algorithm's convergence to the exact solution.
arXiv Detail & Related papers (2023-06-24T16:13:28Z) - Fast Optimal Locally Private Mean Estimation via Random Projections [58.603579803010796]
We study the problem of locally private mean estimation of high-dimensional vectors in the Euclidean ball.
We propose a new algorithmic framework, ProjUnit, for private mean estimation.
Our framework is deceptively simple: each randomizer projects its input to a random low-dimensional subspace, normalizes the result, and then runs an optimal algorithm.
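The ProjUnit template can be sketched in a few lines. This is an illustrative reading of the framework, not the authors' implementation: the high-dimensional unit vector is projected to a random low-dimensional subspace and renormalized, after which any locally private mean-estimation routine for the low-dimensional unit ball can be applied. The `low_dim_randomizer` argument is a hypothetical placeholder for such a routine.

```python
import numpy as np


def proj_unit_randomizer(x, d_low, low_dim_randomizer, rng):
    """Project the unit vector x onto a random d_low-dimensional
    subspace, renormalize, and hand the result to a low-dimensional
    private randomizer. Returns the projection matrix (shared with
    the server for decoding) and the randomized output."""
    d = x.shape[0]
    # Gaussian projection; rows scaled so squared norms are preserved
    # in expectation (Johnson-Lindenstrauss style).
    P = rng.standard_normal((d_low, d)) / np.sqrt(d_low)
    y = P @ x
    y = y / np.linalg.norm(y)  # back onto the low-dimensional unit sphere
    return P, low_dim_randomizer(y)
```

The point of the reduction is computational: the expensive optimal randomizer only ever runs in dimension `d_low`, while the random projection approximately preserves the geometry of the original inputs.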
arXiv Detail & Related papers (2023-06-07T14:07:35Z) - Deterministic Nonsmooth Nonconvex Optimization [94.01526844386977]
We show that randomization is necessary to obtain a dimension-free algorithm.
Our algorithm yields the first deterministic dimension-free algorithm for optimizing ReLU networks.
arXiv Detail & Related papers (2023-02-16T13:57:19Z) - Private estimation algorithms for stochastic block models and mixture
models [63.07482515700984]
General tools for designing efficient private estimation algorithms.
First efficient $(\epsilon, \delta)$-differentially private algorithm for both weak recovery and exact recovery.
arXiv Detail & Related papers (2023-01-11T09:12:28Z) - Bandit Algorithms for Prophet Inequality and Pandora's Box [13.709418181148148]
We study the Prophet Inequality and Pandora's Box problems in the Multi-Armed Bandits model.
Our results give near-optimal $\tilde{O}(\mathrm{poly}(n)\sqrt{T})$ total regret algorithms for both Prophet Inequality and Pandora's Box.
arXiv Detail & Related papers (2022-11-16T00:10:35Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z) - A New Minimax Theorem for Randomized Algorithms [1.2284934135116514]
We introduce a new type of minimax theorem which can provide a hard distribution $\mu$ that works for all bias levels at once.
We show that this works for randomized query complexity, randomized communication complexity, approximate degree, and approximate logrank.
We also prove an improved version of Impagliazzo's hardcore lemma.
arXiv Detail & Related papers (2020-02-25T11:46:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.