The Sample Complexity of Distribution-Free Parity Learning in the Robust
Shuffle Model
- URL: http://arxiv.org/abs/2103.15690v1
- Date: Mon, 29 Mar 2021 15:26:02 GMT
- Title: The Sample Complexity of Distribution-Free Parity Learning in the Robust
Shuffle Model
- Authors: Kobbi Nissim and Chao Yan
- Abstract summary: We show that the sample complexity of learning $d$-bit parity functions is $\Omega(2^{d/2})$.
We also sketch a simple shuffle model protocol demonstrating that our results are tight up to $poly(d)$ factors.
- Score: 11.821892196198457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We provide a lower bound on the sample complexity of distribution-free parity
learning in the realizable case in the shuffle model of differential privacy.
Namely, we show that the sample complexity of learning $d$-bit parity functions
is $\Omega(2^{d/2})$. Our result extends a recent similar lower bound on the
sample complexity of private agnostic learning of parity functions in the
shuffle model by Cheu and Ullman. We also sketch a simple shuffle model
protocol demonstrating that our results are tight up to $poly(d)$ factors.
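For intuition (not from the paper itself): a $d$-bit parity function is indexed by a subset $S \subseteq \{1,\dots,d\}$ and labels $x \in \{0,1\}^d$ by the XOR of the bits in $S$. Without privacy constraints, the realizable problem is easy: $O(d)$ random labeled examples suffice, since the hidden parity can be recovered by Gaussian elimination over GF(2). The lower bound above says that the robust shuffle model forces exponentially more samples. Below is a minimal illustrative sketch of that non-private baseline; the function names (`parity_label`, `learn_parity`) and the toy parameters are ours, not the paper's.

```python
import random

def parity_label(x, S):
    """Label of x in {0,1}^d under the parity function indexed by the set S."""
    return sum(x[i] for i in S) % 2

def learn_parity(examples, d):
    """Recover a consistent parity via Gaussian elimination over GF(2).

    `examples` is a list of (x, y) pairs with x a 0/1 list of length d.
    Returns a 0/1 indicator vector of a consistent index set S,
    or None if the examples are inconsistent with every parity.
    """
    # Augmented system: each row is the example's bits followed by its label.
    rows = [list(x) + [y] for x, y in examples]
    pivots = []
    r = 0
    for col in range(d):
        # Find a row at or below r with a 1 in this column to use as a pivot.
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] == 1), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate this column from every other row (XOR = addition mod 2).
        for i in range(len(rows)):
            if i != r and rows[i][col] == 1:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    # Non-pivot rows now have all-zero data columns; a leftover 1 in the
    # label column means no parity is consistent with the examples.
    for row in rows[r:]:
        if row[-1] == 1:
            return None
    # Set free variables to 0; each pivot variable equals its row's label.
    s = [0] * d
    for i, col in enumerate(pivots):
        s[col] = rows[i][-1]
    return s

# Usage: draw random examples labeled by a hidden parity and recover it.
d = 8
hidden_S = {1, 3, 6}
examples = []
for _ in range(3 * d):
    x = [random.randint(0, 1) for _ in range(d)]
    examples.append((x, parity_label(x, hidden_S)))
print("recovered indicator:", learn_parity(examples, d))  # 1s at 1, 3, 6 (w.h.p.)
```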
Related papers
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Transductive Learning Is Compact [10.168670899305232]
We show a compactness result holding broadly across supervised learning with a general class of loss functions.
For realizable learning with improper metric losses, we show that exact compactness of sample complexity can fail.
We conjecture that larger gaps are possible for the agnostic case.
arXiv Detail & Related papers (2024-02-15T23:10:45Z) - The sample complexity of multi-distribution learning [17.45683822446751]
We give an algorithm of sample complexity $\widetilde{O}((d+k)\epsilon^{-2}) \cdot (k/\epsilon)^{o(1)}$, resolving the COLT 2023 open problem of Awasthi, Haghtalab and Zhao.
arXiv Detail & Related papers (2023-12-07T03:53:17Z) - Information-Computation Tradeoffs for Learning Margin Halfspaces with
Random Classification Noise [50.64137465792738]
We study the problem of PAC learning $\gamma$-margin halfspaces with Random Classification Noise.
We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms.
arXiv Detail & Related papers (2023-06-28T16:33:39Z) - A Finite Sample Complexity Bound for Distributionally Robust Q-learning [17.96094201655567]
We consider a reinforcement learning setting in which the deployment environment is different from the training environment.
Applying a robust Markov decision processes formulation, we extend the distributionally robust $Q$-learning framework studied in Liu et al.
This is the first sample complexity result for the model-free robust RL problem.
arXiv Detail & Related papers (2023-02-26T01:15:32Z) - On-Demand Sampling: Learning Optimally from Multiple Distributions [63.20009081099896]
Social and real-world considerations have given rise to multi-distribution learning paradigms.
We establish the optimal sample complexity of these learning paradigms and give algorithms that meet this sample complexity.
Our algorithm design and analysis are enabled by our extensions of online learning techniques for solving zero-sum games.
arXiv Detail & Related papers (2022-10-22T19:07:26Z) - Sample Complexity Bounds for Robustly Learning Decision Lists against
Evasion Attacks [25.832511407411637]
A fundamental problem in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks.
We work with probability distributions on the input data that satisfy a Lipschitz condition: nearby points have similar probability.
For every fixed $k$, the class of $k$-decision lists has polynomial sample complexity against a $\log(n)$-bounded adversary.
arXiv Detail & Related papers (2022-05-12T14:40:18Z) - Forster Decomposition and Learning Halfspaces with Noise [60.691817861402676]
A Forster transform is an operation that turns a distribution into one with good anti-concentration properties.
We show that any distribution can be decomposed efficiently as a disjoint mixture of few distributions for which a Forster transform exists and can be computed efficiently.
arXiv Detail & Related papers (2021-07-12T17:00:59Z) - The Complexity of Adversarially Robust Proper Learning of Halfspaces
with Agnostic Noise [67.27523616312428]
We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent PAC model.
We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem.
arXiv Detail & Related papers (2020-07-30T04:18:51Z) - FANOK: Knockoffs in Linear Time [73.5154025911318]
We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large scale feature selection problems.
We test our methods on problems with $p$ as large as $500,000$.
arXiv Detail & Related papers (2020-06-15T21:55:34Z)