Improved Inapproximability of VC Dimension and Littlestone's Dimension via (Unbalanced) Biclique
- URL: http://arxiv.org/abs/2211.01443v1
- Date: Wed, 2 Nov 2022 19:23:42 GMT
- Title: Improved Inapproximability of VC Dimension and Littlestone's Dimension via (Unbalanced) Biclique
- Authors: Pasin Manurangsi
- Abstract summary: We give a simple reduction from the Maximum (Unbalanced) Biclique problem to approximating VC Dimension and Littlestone's Dimension.
With this connection, we derive a range of hardness of approximation results and running time lower bounds.
- Score: 28.57552551316786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the complexity of computing (and approximating) VC Dimension and
Littlestone's Dimension when we are given the concept class explicitly. We give
a simple reduction from the Maximum (Unbalanced) Biclique problem to approximating
VC Dimension and Littlestone's Dimension. With this connection, we derive a
range of hardness of approximation results and running time lower bounds. For
example, under the (randomized) Gap-Exponential Time Hypothesis or the
Strongish Planted Clique Hypothesis, we show a tight inapproximability result:
both dimensions are hard to approximate to within a factor of $o(\log n)$ in
polynomial-time. These improve upon constant-factor inapproximability results
from [Manurangsi and Rubinstein, COLT 2017].
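To make the central object concrete, the following is a minimal brute-force sketch (in Python, and not the paper's reduction) of computing the VC dimension of an explicitly given concept class, the setting studied above; the interval class is a standard illustrative example, not from the paper.

```python
# Brute-force VC dimension of an explicitly given concept class:
# the largest d such that some d-subset of the universe is shattered.
from itertools import combinations

def vc_dimension(universe, concepts):
    """concepts: a list of sets over `universe`; returns the VC dimension."""
    best = 0
    for d in range(1, len(universe) + 1):
        shattered = False
        for S in combinations(universe, d):
            # labelings of S realized by the class
            patterns = {tuple(x in C for x in S) for C in concepts}
            if len(patterns) == 2 ** d:   # every labeling of S is realized
                shattered = True
                break
        if not shattered:
            break   # shattering is monotone: no larger set can be shattered
        best = d
    return best

# Example: intervals [a, b] over {1,2,3,4} (plus the empty concept)
# have VC dimension 2.
universe = [1, 2, 3, 4]
concepts = [set(range(a, b + 1)) for a in universe for b in universe if a <= b]
concepts.append(set())
print(vc_dimension(universe, concepts))   # -> 2
```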
Related papers
- On Reductions and Representations of Learning Problems in Euclidean Spaces [15.07787640047213]
Many practical prediction algorithms represent inputs in Euclidean space and replace the discrete 0/1 classification loss with a real-valued surrogate loss.
We develop a generalization of the Borsuk-Ulam Theorem that combines the classical topological approach with convexity considerations.
We also present natural classification tasks that can be represented in much smaller dimensions by leveraging randomness.
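For readers unfamiliar with the surrogate-loss setup this entry refers to, here is a generic sketch (not this paper's construction) contrasting the 0/1 loss with a real-valued logistic surrogate that upper-bounds it.

```python
import math

def zero_one_loss(margin):                 # margin = y * score, y in {-1, +1}
    return 1.0 if margin <= 0 else 0.0

def logistic_loss(margin):                 # smooth, convex, real-valued surrogate
    # base-2 logarithm so the surrogate upper-bounds the 0/1 loss everywhere
    return math.log2(1.0 + math.exp(-margin))

for m in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"margin={m:+.1f}  0/1={zero_one_loss(m):.0f}  logistic={logistic_loss(m):.3f}")
```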
arXiv Detail & Related papers (2024-11-16T12:09:37Z)
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for a computationally expensive spectral initialization, using a simple gradient step that accounts for outliers.
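A hedged sketch of the kind of simple gradient step the summary alludes to, in the standard amplitude-flow formulation of phase retrieval; this is a generic textbook variant, not the paper's algorithm, and the random initialization below merely stands in for the spectral initialization being avoided.

```python
# Amplitude-flow style gradient descent for phase retrieval:
# minimize (1/2m) * sum_i (|<a_i, x>| - b_i)^2 from phaseless measurements.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 200
x_true = rng.normal(size=n)
A = rng.normal(size=(m, n))
b = np.abs(A @ x_true)                 # phaseless measurements |<a_i, x*>|

x = rng.normal(size=n)                 # random init; may require restarts
for _ in range(3000):
    z = A @ x
    grad = A.T @ ((np.abs(z) - b) * np.sign(z)) / m
    x -= 0.5 * grad                    # fixed step size

# success is measured up to the unrecoverable global sign
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print(f"relative error: {err / np.linalg.norm(x_true):.3e}")
```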
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Applications of Littlestone dimension to query learning and to compression [1.9336815376402723]
First, we extend the model of Angluin and Dohrn (2017) for learning by equivalence queries with random counterexamples.
Second, we extend that model to infinite concept classes with an additional source of randomness.
Third, we give improved results on the relationship of Littlestone dimension to classes with extended $d$-compression schemes.
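Since several entries in this list concern Littlestone's dimension, here is a minimal recursive sketch (generic, not from any of these papers) of computing it for an explicitly given finite class; its exponential running time echoes the hardness theme of the main paper.

```python
# Littlestone's dimension of a finite class H over a finite domain:
# Ldim(H) >= d+1 iff some point x splits H into two nonempty subclasses
# (those labeling x as 0 and as 1) each of Littlestone dimension >= d.
from functools import lru_cache

DOMAIN = (0, 1, 2, 3)

@lru_cache(maxsize=None)
def ldim(hypotheses):
    """hypotheses: frozenset of tuples, each tuple a labeling of DOMAIN."""
    if not hypotheses:
        return -1                      # convention: empty class has Ldim -1
    best = 0                           # any nonempty class has Ldim >= 0
    for i in range(len(DOMAIN)):
        h0 = frozenset(h for h in hypotheses if h[i] == 0)
        h1 = frozenset(h for h in hypotheses if h[i] == 1)
        best = max(best, 1 + min(ldim(h0), ldim(h1)))
    return best

# Example: the 5 threshold functions on 4 points have Littlestone
# dimension 2 (an adversary can force 2 mistakes, as in binary search).
thresholds = frozenset(tuple(int(j >= t) for j in DOMAIN) for t in range(5))
print(ldim(thresholds))  # -> 2
```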
arXiv Detail & Related papers (2023-10-07T14:04:18Z)
- Lower Bounds for Learning in Revealing POMDPs [88.23337313766355]
This paper studies the fundamental limits of reinforcement learning (RL) in the challenging partially observable setting.
For multi-step revealing POMDPs, we show that the latent state-space dependence is at least $\Omega(S^{1.5})$ in the sample complexity.
arXiv Detail & Related papers (2023-02-02T18:59:30Z)
- Dimensionality Reduction for Wasserstein Barycenter [6.327655795051619]
We study dimensionality reduction techniques for the Wasserstein barycenter problem.
We show that randomized dimensionality reduction can be used to map the problem to a space of dimension $O(\log n)$ independent of both $d$ and $k$.
We also provide a coreset construction for the Wasserstein barycenter problem that significantly decreases the number of input distributions.
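A generic sketch of the randomized dimensionality-reduction mechanism (a Johnson-Lindenstrauss style Gaussian projection to $O(\log n)$ dimensions); the paper's barycenter-specific reduction is more involved, so this only illustrates the underlying idea.

```python
# Project n points in R^d to k = O(log n / eps^2) dimensions while
# approximately preserving pairwise Euclidean distances.
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 100, 1000, 0.5
k = int(np.ceil(8 * np.log(n) / eps ** 2))     # target dimension O(log n)

X = rng.normal(size=(n, d))                    # n points in R^d
G = rng.normal(size=(d, k)) / np.sqrt(k)       # scaled Gaussian projection
Y = X @ G                                      # projected points in R^k

# spot-check the distortion on one pair of points
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(k, proj / orig)                          # ratio close to 1
```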
arXiv Detail & Related papers (2021-10-18T02:57:25Z)
- A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs [117.82903457289584]
We derive a novel problem-dependent lower bound for regret in finite-horizon Markov Decision Processes (MDPs).
We show that our lower bound is considerably smaller than in the general case and that it does not scale with the minimum action gap at all.
We show that this last result is attainable (up to $\mathrm{poly}(H)$ terms, where $H$ is the horizon) by providing a regret upper bound based on policy gaps for an optimistic algorithm.
arXiv Detail & Related papers (2021-06-24T13:46:09Z)
- Sample-efficient proper PAC learning with approximate differential privacy [51.09425023771246]
We prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde{O}(d^6)$, ignoring privacy and accuracy parameters.
This result answers a question of Bun et al. (FOCS 2020) by improving upon their upper bound of $2^{O(d)}$ on the sample complexity.
arXiv Detail & Related papers (2020-12-07T18:17:46Z)
- Near-tight closure bounds for Littlestone and threshold dimensions [59.245101116396555]
We study closure properties for the Littlestone and threshold dimensions of binary hypothesis classes.
Our upper bounds give an exponential (in $k$, the number of classes being combined) improvement upon analogous bounds shown by Alon et al.
arXiv Detail & Related papers (2020-07-07T17:56:06Z)
- Interpolation and Learning with Scale Dependent Kernels [91.41836461193488]
We study the learning properties of nonparametric ridgeless least squares.
We consider the common case of estimators defined by scale-dependent kernels.
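A generic sketch of ridgeless (interpolating) kernel least squares with a scale-dependent Gaussian kernel; the bandwidth `sigma` plays the role of the scale parameter here, and the whole setup is an illustrative assumption rather than the paper's estimator.

```python
# Ridgeless kernel least squares: the minimum-norm interpolant solves
# K alpha = y with no explicit lambda * I regularizer; the kernel scale
# sigma acts as an implicit form of regularization.
import numpy as np

def gaussian_kernel(A, B, sigma):
    # pairwise squared distances, then Gaussian kernel with bandwidth sigma
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=30)

sigma = 0.3                                      # the scale parameter
K = gaussian_kernel(X, X, sigma)
alpha, *_ = np.linalg.lstsq(K, y, rcond=None)    # robust to ill-conditioned K

X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
print(gaussian_kernel(X_test, X, sigma) @ alpha) # predictions at test points
```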
arXiv Detail & Related papers (2020-06-17T16:43:37Z)