Near-tight closure bounds for Littlestone and threshold dimensions
- URL: http://arxiv.org/abs/2007.03668v1
- Date: Tue, 7 Jul 2020 17:56:06 GMT
- Title: Near-tight closure bounds for Littlestone and threshold dimensions
- Authors: Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi
- Abstract summary: We study closure properties for the Littlestone and threshold dimensions of binary hypothesis classes.
Our upper bounds give an exponential (in $k$) improvement upon analogous bounds shown by Alon et al.
- Score: 59.245101116396555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study closure properties for the Littlestone and threshold dimensions of
binary hypothesis classes. Given classes $\mathcal{H}_1, \ldots, \mathcal{H}_k$
of Boolean functions with bounded Littlestone (respectively, threshold)
dimension, we establish an upper bound on the Littlestone (respectively,
threshold) dimension of the class defined by applying an arbitrary binary
aggregation rule to $\mathcal{H}_1, \ldots, \mathcal{H}_k$. We also show that
our upper bounds are nearly tight. Our upper bounds give an exponential (in
$k$) improvement upon analogous bounds shown by Alon et al. (COLT 2020), thus
answering a question posed by their work.
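To make the objects in the abstract concrete, the following is a minimal sketch (not from the paper): it brute-forces the Littlestone dimension of a finite Boolean class over a finite domain via the standard mistake-tree recursion, and builds the class obtained by applying a $k$-ary Boolean aggregation rule to classes $\mathcal{H}_1, \ldots, \mathcal{H}_k$. The helper names `ldim` and `aggregate`, the toy threshold classes, and the XOR rule are illustrative assumptions, not the authors' construction.

```python
from itertools import product

def ldim(H, X):
    """Brute-force Littlestone dimension of a finite Boolean class H over a
    finite domain X; each hypothesis is a dict mapping x in X to {0, 1}."""
    if not H:
        return -1  # convention for the empty class
    best = 0
    for x in X:
        H0 = [h for h in H if h[x] == 0]
        H1 = [h for h in H if h[x] == 1]
        if H0 and H1:  # x can root a mistake tree whose subtrees use H0 and H1
            best = max(best, 1 + min(ldim(H0, X), ldim(H1, X)))
    return best

def aggregate(classes, rule, X):
    """The class {x -> rule(h_1(x), ..., h_k(x)) : h_i in H_i} for a k-ary
    Boolean aggregation rule, with duplicate functions removed."""
    seen, out = set(), []
    for hs in product(*classes):
        g = {x: rule(*(h[x] for h in hs)) for x in X}
        key = tuple(g[x] for x in X)
        if key not in seen:
            seen.add(key)
            out.append(g)
    return out

# Toy example (illustrative only): thresholds on {0, ..., 5}, aggregated with XOR.
X = list(range(6))
thresholds = [{x: int(x >= t) for x in X} for t in range(7)]
combined = aggregate([thresholds, thresholds], lambda a, b: a ^ b, X)
print(ldim(thresholds, X), ldim(combined, X))
```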
Related papers
- Local Borsuk-Ulam, Stability, and Replicability [16.6959756920423]
We show that both list-replicable and globally stable learning are impossible in the PAC setting.
Specifically, we establish optimal bounds for list replicability and global stability numbers in finite classes.
arXiv Detail & Related papers (2023-11-02T21:10:16Z)
- Applications of Littlestone dimension to query learning and to compression [1.9336815376402723]
We extend the model of Angluin and Dohrn (2017) for learning by equivalence queries with random counterexamples.
Second, we extend that model to infinite concept classes with an additional source of randomness.
Third, we give improved results on the relationship of Littlestone dimension to classes with extended $d$-compression schemes.
arXiv Detail & Related papers (2023-10-07T14:04:18Z)
- Prediction, Learning, Uniform Convergence, and Scale-sensitive Dimensions [39.97534972432276]
We present a new general-purpose algorithm for learning classes of $[0,1]$-valued functions.
We prove a general upper bound on the expected absolute error of this algorithm.
We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of learning.
arXiv Detail & Related papers (2023-04-21T15:51:35Z)
- Lower Bounds for Learning in Revealing POMDPs [88.23337313766355]
This paper studies the fundamental limits of reinforcement learning (RL) in the challenging partially observable setting.
For multi-step revealing POMDPs, we show that the latent state-space dependence is at least $\Omega(S^{1.5})$ in the sample complexity.
arXiv Detail & Related papers (2023-02-02T18:59:30Z)
- Quantum and classical low-degree learning via a dimension-free Remez inequality [52.12931955662553]
We show a new way to relate functions on the hypergrid to their harmonic extensions over the polytorus.
We show the supremum of a function $f$ over products of the cyclic group $\{\exp(2\pi i k/K)\}_{k=1}^{K}$.
We extend to new spaces a recent line of work [EI22, CHP, VZ22] that gave similarly efficient methods for learning low-degree functions on hypercubes and observables on qubits.
arXiv Detail & Related papers (2023-01-04T04:15:40Z)
- Improved Inapproximability of VC Dimension and Littlestone's Dimension via (Unbalanced) Biclique [28.57552551316786]
We give a simple reduction from Maximum (Unbalanced) Biclique problem to approximating VC Dimension and Littlestone's Dimension.
With this connection, we derive a range of hardness of approximation results and running time lower bounds.
arXiv Detail & Related papers (2022-11-02T19:23:42Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Sample-efficient proper PAC learning with approximate differential privacy [51.09425023771246]
We prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde{O}(d^6)$, ignoring privacy and accuracy parameters.
This result answers a question of Bun et al. (FOCS 2020) by improving upon their upper bound of $2^{O(d)}$ on the sample complexity.
arXiv Detail & Related papers (2020-12-07T18:17:46Z)
- On Suboptimality of Least Squares with Application to Estimation of Convex Bodies [74.39616164169131]
We settle an open problem regarding optimality of Least Squares in estimating a convex set from noisy support function measurements in dimension $d \geq 6$.
We establish that Least Squares is sub-optimal, and achieves a rate of $\tilde{\Theta}_d(n^{-2/(d-1)})$ whereas the minimax rate is $\Theta_d(n^{-4/(d+3)})$.
arXiv Detail & Related papers (2020-06-07T05:19:00Z)
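As a quick sanity check on the rates in the last entry (my own arithmetic, not taken from that paper), here is the comparison at the smallest covered dimension $d = 6$:

```latex
% Worked comparison at d = 6 (assumed smallest case, per the entry's d >= 6):
\[
  \underbrace{n^{-2/(d-1)}}_{\text{Least Squares}} = n^{-2/5} = n^{-0.4},
  \qquad
  \underbrace{n^{-4/(d+3)}}_{\text{minimax}} = n^{-4/9} \approx n^{-0.44} .
\]
% Since 2/5 < 4/9, Least Squares converges strictly more slowly; as d grows,
% the exponent ratio (2/(d-1)) / (4/(d+3)) tends to 1/2.
```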
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.