Applications of Littlestone dimension to query learning and to
compression
- URL: http://arxiv.org/abs/2310.04812v1
- Date: Sat, 7 Oct 2023 14:04:18 GMT
- Title: Applications of Littlestone dimension to query learning and to
compression
- Authors: Hunter Chase and James Freitag and Lev Reyzin
- Abstract summary: We extend the model of \cite{angluin2017power} for learning by equivalence queries with random counterexamples.
Second, we extend that model to infinite concept classes with an additional source of randomness.
Third, we give improved results on the relationship of Littlestone dimension to classes with extended $d$-compression schemes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we give several applications of Littlestone dimension. The
first is to the model of \cite{angluin2017power}, where we extend their results
for learning by equivalence queries with random counterexamples. Second, we
extend that model to infinite concept classes with an additional source of
randomness. Third, we give improved results on the relationship of Littlestone
dimension to classes with extended $d$-compression schemes, proving a strong
version of a conjecture of \cite{floyd1995sample} for Littlestone dimension.
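The abstract's results all revolve around Littlestone dimension, which is defined via the depth of complete mistake trees. As a point of reference (not code from the paper), here is a minimal brute-force sketch that computes the Littlestone dimension of a finite concept class by recursion over splitting points; the `ldim` helper and the threshold-class example are illustrative assumptions, not part of the source.

```python
def ldim(concepts, domain):
    """Littlestone dimension of a finite concept class, computed by
    brute-force recursion: Ldim(C) = max over points x that split C
    of 1 + min(Ldim(C with x labeled 0), Ldim(C with x labeled 1))."""
    if not concepts:
        return -1  # convention for the empty class
    best = 0  # a nonempty class that no point splits has dimension 0
    for x in domain:
        c0 = [c for c in concepts if c[x] == 0]
        c1 = [c for c in concepts if c[x] == 1]
        if c0 and c1:  # x can label the root of a deeper mistake tree
            best = max(best, 1 + min(ldim(c0, domain), ldim(c1, domain)))
    return best

# Illustration: threshold concepts on {0,1,2,3}, where c_t(x) = 1 iff x >= t.
domain = range(4)
thresholds = [{x: int(x >= t) for x in domain} for t in range(5)]
print(ldim(thresholds, domain))  # -> 2
```

Since Littlestone dimension equals the optimal mistake bound in online learning, the value 2 here matches the at-most-⌊log₂ 5⌋ mistakes the halving algorithm makes on this five-concept class.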
Related papers
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z) - Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
arXiv Detail & Related papers (2023-12-08T16:06:29Z) - Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z) - Unified Convergence Theory of Stochastic and Variance-Reduced Cubic Newton Methods [37.1630298053787]
We propose a new framework, which we call the helper framework.
It provides a unified view of the variance and second-order algorithms equipped with global complexity guarantees.
arXiv Detail & Related papers (2023-02-23T12:18:28Z) - The unstable formula theorem revisited via algorithms [17.736645608595758]
We develop a complete algorithmic analogue of Shelah's celebrated Unstable Formula Theorem.
We describe Littlestone classes via approximations, by analogy to definability of types in model theory.
arXiv Detail & Related papers (2022-12-09T18:53:34Z) - Improved Inapproximability of VC Dimension and Littlestone's Dimension
via (Unbalanced) Biclique [28.57552551316786]
We give a simple reduction from Maximum (Unbalanced) Biclique problem to approximating VC Dimension and Littlestone's Dimension.
With this connection, we derive a range of hardness of approximation results and running time lower bounds.
arXiv Detail & Related papers (2022-11-02T19:23:42Z) - Sample-efficient proper PAC learning with approximate differential
privacy [51.09425023771246]
We prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde{O}(d^6)$, ignoring privacy and accuracy parameters.
This result answers a question of Bun et al. (FOCS 2020) by improving upon their upper bound of $2^{O(d)}$ on the sample complexity.
arXiv Detail & Related papers (2020-12-07T18:17:46Z) - Near-tight closure bounds for Littlestone and threshold dimensions [59.245101116396555]
We study closure properties for the Littlestone and threshold dimensions of binary hypothesis classes.
Our upper bounds give an exponential (in $k$) improvement upon analogous bounds shown by Alon et al.
arXiv Detail & Related papers (2020-07-07T17:56:06Z) - Compressing Large Sample Data for Discriminant Analysis [78.12073412066698]
We consider the computational issues due to large sample size within the discriminant analysis framework.
We propose a new compression approach for reducing the number of training samples for linear and quadratic discriminant analysis.
arXiv Detail & Related papers (2020-05-08T05:09:08Z)