Near-Optimal Degree Testing for Bayes Nets
- URL: http://arxiv.org/abs/2304.06733v1
- Date: Thu, 13 Apr 2023 03:48:31 GMT
- Title: Near-Optimal Degree Testing for Bayes Nets
- Authors: Vipul Arora, Arnab Bhattacharyya, Clément L. Canonne, Joy Qiping Yang
- Abstract summary: We show that the sample complexity of the problem is $\tilde{\Theta}(2^{n/2}/\varepsilon^2)$.
Our algorithm relies on a testing-by-learning framework, previously used to obtain sample-optimal testers.
- Score: 6.814860712547177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers the problem of testing the maximum in-degree of the
Bayes net underlying an unknown probability distribution $P$ over $\{0,1\}^n$,
given sample access to $P$. We show that the sample complexity of the problem
is $\tilde{\Theta}(2^{n/2}/\varepsilon^2)$. Our algorithm relies on a
testing-by-learning framework, previously used to obtain sample-optimal
testers; in order to apply this framework, we develop new algorithms for
``near-proper'' learning of Bayes nets, and high-probability learning under
$\chi^2$ divergence, which are of independent interest.
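To make the testing-by-learning idea concrete, here is a minimal, self-contained sketch for the simplest case, testing in-degree $0$ (a product distribution over $\{0,1\}^n$): learn a hypothesis from half the samples assuming the structure, then check the other half against it in total variation distance. This is only an illustration of the framework's two-phase shape; the paper's actual tester handles general in-degree $d$, uses near-proper learning and a $\chi^2$-based closeness test, and the function names below are hypothetical.

```python
import itertools
from collections import Counter

def learn_product_dist(samples, n):
    # Learn a degree-0 Bayes net (fully independent bits) by estimating
    # each marginal P[x_i = 1]. A learner for in-degree-d nets would
    # instead fit a conditional probability table per node and parent set.
    counts = [0] * n
    for x in samples:
        for i, b in enumerate(x):
            counts[i] += b
    return [c / len(samples) for c in counts]

def product_prob(q, x):
    # Probability of the bit-string x under the learned product distribution.
    p = 1.0
    for qi, b in zip(q, x):
        p *= qi if b else (1.0 - qi)
    return p

def degree_zero_test(samples, n, eps):
    # Testing-by-learning sketch: phase 1 learns a hypothesis Q assuming
    # the structure; phase 2 checks that the empirical distribution of the
    # held-out half is eps-close to Q in total variation distance.
    half = len(samples) // 2
    q = learn_product_dist(samples[:half], n)
    emp = Counter(samples[half:])
    m = len(samples) - half
    tv = 0.5 * sum(
        abs(emp[x] / m - product_prob(q, x))
        for x in itertools.product((0, 1), repeat=n)
    )
    return tv <= eps
```

With samples from a genuine product distribution the test accepts; if, say, the first two bits are forced equal (an in-degree-1 dependency), every product fit stays at total variation distance about $1/2$ from the truth and the test rejects.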
Related papers
- Fast Rates for Bandit PAC Multiclass Classification [73.17969992976501]
We study multiclass PAC learning with bandit feedback, where inputs are classified into one of $K$ possible labels and feedback is limited to whether or not the predicted labels are correct.
Our main contribution is in designing a novel learning algorithm for the agnostic $(\varepsilon,\delta)$-PAC version of the problem.
arXiv Detail & Related papers (2024-06-18T08:54:04Z) - Testable Learning with Distribution Shift [9.036777309376697]
We define a new model called testable learning with distribution shift.
We obtain provably efficient algorithms for certifying the performance of a classifier on a test distribution.
We give several positive results for learning concept classes such as halfspaces, intersections of halfspaces, and decision trees.
arXiv Detail & Related papers (2023-11-25T23:57:45Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - A Finite Sample Complexity Bound for Distributionally Robust Q-learning [16.88062487980405]
We consider a reinforcement learning setting in which the deployment environment is different from the training environment.
Applying a robust Markov decision processes formulation, we extend the distributionally robust $Q$-learning framework studied in Liu et al.
This is the first sample complexity result for the model-free robust RL problem.
arXiv Detail & Related papers (2023-02-26T01:15:32Z) - Stochastic Approximation Approaches to Group Distributionally Robust
Optimization [96.26317627118912]
We study group distributionally robust optimization (GDRO) through stochastic approximation.
We apply online learning techniques to reduce the number of samples required in each round from $m$ to $1$, keeping the same sample complexity.
We also give a novel formulation of weighted GDRO, which allows us to derive distribution-dependent convergence rates.
arXiv Detail & Related papers (2023-02-18T09:24:15Z) - Statistically Near-Optimal Hypothesis Selection [33.83129262033921]
We derive an optimal $2$-approximation learning strategy for the Hypothesis Selection problem.
This is the first algorithm that simultaneously achieves the best approximation factor and sample complexity.
arXiv Detail & Related papers (2021-08-17T21:11:20Z) - Optimal Testing of Discrete Distributions with High Probability [49.19942805582874]
We study the problem of testing discrete distributions with a focus on the high probability regime.
We provide the first algorithms for closeness and independence testing that are sample-optimal, within constant factors.
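The naive route to the high-probability regime is to repeat a constant-error tester $O(\log(1/\delta))$ times and take a majority vote, which pays a multiplicative logarithmic overhead; sample-optimal high-probability testers avoid exactly that overhead. As a hedged baseline sketch (the standard amplification trick, not the paper's algorithm), with a hypothetical `tester` callable:

```python
import math
from collections import Counter

def amplify(tester, delta):
    # Majority-vote amplification: run a tester whose answer is wrong with
    # probability at most 1/3, independently O(log(1/delta)) times, and
    # return the majority answer, which is then wrong with probability
    # at most delta by a Chernoff bound.
    reps = max(1, math.ceil(18.0 * math.log(1.0 / delta)))
    votes = Counter(tester() for _ in range(reps))
    return votes.most_common(1)[0][0]
```

The constant 18 is one valid choice from the Chernoff calculation; the point is the $\log(1/\delta)$ repetition count, which is the cost that the sample-optimal testers above eliminate.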
arXiv Detail & Related papers (2020-09-14T16:09:17Z) - Learning Halfspaces with Tsybakov Noise [50.659479930171585]
We study the learnability of halfspaces in the presence of Tsybakov noise.
We give an algorithm that achieves misclassification error $\epsilon$ with respect to the true halfspace.
arXiv Detail & Related papers (2020-06-11T14:25:02Z) - Learning and Testing Junta Distributions with Subcube Conditioning [11.464622619555222]
We study the problems of learning and testing junta distributions on $\{-1,1\}^n$ with respect to the uniform distribution.
The main contribution is an algorithm for finding relevant coordinates in a $k$-junta distribution with subcube conditioning.
arXiv Detail & Related papers (2020-04-26T22:52:53Z) - Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from a class $\mathcal{Q}$ whose total variation distance to $p$ is comparable to that of the best distribution in $\mathcal{Q}$.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantee of the quality of the information presented and is not responsible for any consequences of its use.