A Gapped Scale-Sensitive Dimension and Lower Bounds for Offset Rademacher Complexity
- URL: http://arxiv.org/abs/2509.20618v1
- Date: Wed, 24 Sep 2025 23:49:53 GMT
- Title: A Gapped Scale-Sensitive Dimension and Lower Bounds for Offset Rademacher Complexity
- Authors: Zeyu Jia, Yury Polyanskiy, Alexander Rakhlin
- Abstract summary: We study gapped scale-sensitive dimensions of a function class in both sequential and non-sequential settings. We show that the gapped dimensions lead to lower bounds on offset Rademacher averages.
- Score: 72.82374764881489
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study gapped scale-sensitive dimensions of a function class in both sequential and non-sequential settings. We demonstrate that covering numbers for any uniformly bounded class are controlled above by these gapped dimensions, generalizing the results of Anthony and Bartlett (2000) and Alon et al. (1997). Moreover, we show that the gapped dimensions lead to lower bounds on offset Rademacher averages, thereby strengthening existing approaches for proving lower bounds on rates of convergence in statistical and online learning.
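For reference, the offset Rademacher average discussed in the abstract is commonly defined as follows; this is the standard form from the offset-complexity literature (Liang, Rakhlin, and Sridharan, 2015), with a generic offset parameter $c > 0$ that need not match the one used in this paper:

```latex
% Offset Rademacher average of a class F on points x_1, ..., x_n,
% with i.i.d. Rademacher signs eps_i and offset parameter c > 0.
\mathcal{R}^{\mathrm{off}}_n(\mathcal{F}; c)
  = \mathbb{E}_{\epsilon} \sup_{f \in \mathcal{F}}
    \frac{1}{n} \sum_{i=1}^{n}
      \left[ \epsilon_i f(x_i) - c\, f(x_i)^2 \right].
```

The quadratic offset term $-c\, f(x_i)^2$ is what distinguishes this quantity from the classical Rademacher average, and it is what makes lower bounds on it harder to obtain.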
Related papers
- Leveraging Non-linear Dimension Reduction and Random Walk Co-occurrence for Node Embedding [0.0]
COVE is an explainable high-dimensional embedding that, when reduced to a low dimension with UMAP, slightly increases performance on clustering and link prediction tasks. We find that a COVE + UMAP + HDBSCAN pipeline performs similarly to the popular Louvain algorithm.
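The summary above names a COVE + UMAP + HDBSCAN pipeline. A minimal sketch of the reduction-and-clustering stage is below; the COVE embedding itself is the paper's contribution and is stood in for by a placeholder matrix, so only the `umap-learn` and `hdbscan` library calls reflect real APIs:

```python
import numpy as np
import umap      # from the umap-learn package
import hdbscan   # from the hdbscan package

# Placeholder for a precomputed COVE node embedding: one row per node.
# (Hypothetical data; the actual COVE construction is the paper's method.)
rng = np.random.default_rng(0)
cove_embedding = rng.normal(size=(1000, 128))

# Reduce the high-dimensional embedding to 2D with UMAP.
reducer = umap.UMAP(n_components=2, random_state=42)
low_dim = reducer.fit_transform(cove_embedding)

# Cluster the reduced embedding with HDBSCAN; label -1 marks noise points.
clusterer = hdbscan.HDBSCAN(min_cluster_size=15)
labels = clusterer.fit_predict(low_dim)
print(f"found {labels.max() + 1} clusters")
```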
arXiv Detail & Related papers (2026-02-27T14:58:49Z)
- Assouad, Fano, and Le Cam with Interaction: A Unifying Lower Bound Framework and Characterization for Bandit Learnability [71.82666334363174]
We develop a unifying framework for information-theoretic lower bounds in statistical estimation and interactive decision making. We propose a new lower bound approach called the interactive Fano method.
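For context, the classical (non-interactive) Fano method that the proposed interactive variant generalizes bounds the minimax error via mutual information. A standard statement, with $M$ hypotheses and observation $X$ (the interactive version is the paper's contribution and is not reproduced here):

```latex
% Classical Fano inequality: with theta uniform over M hypotheses
% and X the observation, any estimator \hat{\theta}(X) satisfies
\inf_{\hat{\theta}} \max_{j \in [M]}
  \Pr\bigl[\hat{\theta}(X) \neq \theta_j\bigr]
  \;\geq\; 1 - \frac{I(\theta; X) + \log 2}{\log M}.
```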
arXiv Detail & Related papers (2024-10-07T15:14:58Z)
- Randomized Midpoint Method for Log-Concave Sampling under Constraints [5.548787731232499]
We study the problem of sampling from log-concave distributions supported on convex, compact sets. We propose a unified proximal framework for handling constraints via a broad class of projection operators.
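As a point of reference for the constrained-sampling setup, here is a sketch of plain projected Langevin Monte Carlo, the simpler baseline that midpoint-type schemes refine. This is not the paper's randomized midpoint method; the target (a standard Gaussian restricted to the unit ball) and step size are illustrative assumptions:

```python
import numpy as np

def grad_potential(x):
    """Gradient of U(x) = ||x||^2 / 2, i.e. a standard Gaussian target."""
    return x

def project_unit_ball(x):
    """Euclidean projection onto the unit ball {x : ||x|| <= 1}."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def projected_langevin(steps=10_000, dim=5, gamma=1e-3, seed=0):
    """Projected Langevin: gradient step + Gaussian noise + projection."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    samples = []
    for _ in range(steps):
        noise = rng.normal(size=dim)
        x = project_unit_ball(x - gamma * grad_potential(x)
                              + np.sqrt(2 * gamma) * noise)
        samples.append(x.copy())
    return np.array(samples)

samples = projected_langevin()
print("mean norm of samples:", np.linalg.norm(samples, axis=1).mean())
```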
arXiv Detail & Related papers (2024-05-24T09:24:21Z)
- An Improved Uniform Convergence Bound with Fat-Shattering Dimension [4.5349436061325425]
The state-of-the-art upper bounds feature a multiplicative squared logarithmic factor on the sample complexity, leaving an open gap with the existing lower bound.
We provide an improved uniform convergence bound that closes this gap.
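For readers unfamiliar with the quantity, the fat-shattering dimension at scale $\gamma$ has a standard definition in the scale-sensitive dimension literature (e.g. Alon et al., 1997):

```latex
% A set {x_1, ..., x_m} is gamma-shattered by F if there exist
% witnesses s_1, ..., s_m such that every sign pattern is realized
% with margin gamma:
\forall\, b \in \{-1, +1\}^m \ \exists\, f \in \mathcal{F}:
  \quad b_i \bigl( f(x_i) - s_i \bigr) \geq \gamma
  \quad \text{for all } i \in [m].
% fat_F(gamma) is the largest such m.
```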
arXiv Detail & Related papers (2023-07-13T09:20:57Z)
- Last-Iterate Convergence of Adaptive Riemannian Gradient Descent for Equilibrium Computation [52.73824786627612]
This paper establishes new convergence results for geodesically strongly monotone games. Our key result shows that RGD attains last-iterate linear convergence in a geometry-agnostic fashion. Overall, this paper presents the first geometry-agnostic last-iterate convergence analysis for games beyond the Euclidean setting.
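For concreteness, the Riemannian gradient descent (RGD) iteration analyzed in such results takes the standard form below, where $\mathrm{Exp}_{x}$ is the exponential map of the manifold; the adaptive step-size rule studied in the paper is not reproduced here:

```latex
% Riemannian gradient descent: move along the geodesic in the
% direction of the negative Riemannian gradient.
x_{k+1} = \mathrm{Exp}_{x_k}\!\bigl( -\eta_k \, \mathrm{grad}\, f(x_k) \bigr).
```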
arXiv Detail & Related papers (2023-06-29T01:20:44Z)
- General Loss Functions Lead to (Approximate) Interpolation in High Dimensions [5.653716495767272]
We provide a unified framework that applies to a general family of convex losses across binary and multiclass settings. Specifically, we show that the implicit bias is approximately (but not exactly) equal to the minimum-norm interpolator in high dimensions.
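The minimum-norm benchmark referenced in the summary is, for linearly separable data, the familiar max-margin / minimum-norm solution; a standard formulation of it is below (the paper's point is that general convex losses only approximate this in high dimensions):

```latex
% Minimum-norm separator for data (x_i, y_i) with y_i in {-1, +1}:
\hat{w} = \operatorname*{arg\,min}_{w} \ \|w\|_2
  \quad \text{subject to} \quad y_i \langle w, x_i \rangle \geq 1
  \ \text{for all } i.
```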
arXiv Detail & Related papers (2023-03-13T21:23:12Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
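For background, tensor low-rank representation methods extend the classical matrix LRR objective for subspace clustering; a standard form of the matrix version is below (the paper's tensor objective and supervision terms are not reproduced here):

```latex
% Low-rank representation (LRR): represent data X by itself with a
% low-rank coefficient matrix Z and column-sparse error E.
\min_{Z, E} \ \|Z\|_* + \lambda \|E\|_{2,1}
  \quad \text{s.t.} \quad X = XZ + E,
% with affinity matrix W = (|Z| + |Z^T|)/2 fed to spectral clustering.
```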
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- Unified lower bounds for interactive high-dimensional estimation under information constraints [40.339506154827106]
We provide a unified framework enabling us to derive a variety of (tight) minimax lower bounds for different parametric families of distributions.
Our lower bound framework is versatile and yields "plug-and-play" bounds that are widely applicable to a large range of estimation problems.
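The quantity such frameworks lower-bound is the minimax risk; in the information-constrained setting each observation passes through a constrained channel, but the basic object is the standard one, stated here in generic form:

```latex
% Minimax risk for estimating theta from observations Y under loss ell:
\mathfrak{M}_n = \inf_{\hat{\theta}} \sup_{\theta \in \Theta}
  \mathbb{E}_{\theta}\bigl[ \ell\bigl( \hat{\theta}(Y_1, \dots, Y_n), \theta \bigr) \bigr].
```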
arXiv Detail & Related papers (2020-10-13T17:25:19Z)
- Random extrapolation for primal-dual coordinate descent [61.55967255151027]
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to the sparsity of the data matrix and to favorable structure in the objective function.
We show almost sure convergence of the sequence and optimal sublinear convergence rates for the primal-dual gap and objective values, in the general convex-concave case.
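The primal-dual gap mentioned in the convergence rates refers to the standard bilinear saddle-point template that primal-dual coordinate methods target; in generic form (the paper's randomized extrapolation scheme is not shown):

```latex
% Convex-concave saddle-point problem with coupling matrix A:
\min_{x} \max_{y} \ L(x, y) = f(x) + \langle A x, y \rangle - g^*(y),
% and the primal-dual gap at a candidate pair (xbar, ybar):
\mathrm{Gap}(\bar{x}, \bar{y})
  = \max_{y} L(\bar{x}, y) - \min_{x} L(x, \bar{y}).
```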
arXiv Detail & Related papers (2020-07-13T17:39:35Z)
- Near-tight closure bounds for Littlestone and threshold dimensions [59.245101116396555]
We study closure properties for the Littlestone and threshold dimensions of binary hypothesis classes.
Our upper bounds give an exponential (in $k$) improvement upon analogous bounds shown by Alon et al.
arXiv Detail & Related papers (2020-07-07T17:56:06Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- The role of regularization in classification of high-dimensional noisy Gaussian mixture [36.279288150100875]
We consider a high-dimensional mixture of two Gaussians in the noisy regime.
We provide a rigorous analysis of the generalization error of regularized convex classifiers.
We discuss surprising effects of the regularization that in some cases allow one to reach Bayes-optimal performance.
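The regularized convex classifiers analyzed in this line of work are typically ridge-regularized empirical risk minimizers; a generic form is below (the specific loss and scaling studied in the paper may differ):

```latex
% Ridge-regularized ERM with convex loss ell on n labeled samples:
\hat{w} = \operatorname*{arg\,min}_{w \in \mathbb{R}^d}
  \ \frac{1}{n} \sum_{i=1}^{n} \ell\bigl( y_i \langle w, x_i \rangle \bigr)
  + \frac{\lambda}{2} \|w\|_2^2.
```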
arXiv Detail & Related papers (2020-02-26T14:54:28Z)