The Exact Sample Complexity Gain from Invariances for Kernel Regression
- URL: http://arxiv.org/abs/2303.14269v2
- Date: Mon, 6 Nov 2023 05:07:05 GMT
- Title: The Exact Sample Complexity Gain from Invariances for Kernel Regression
- Authors: Behrooz Tahmasebi, Stefanie Jegelka
- Abstract summary: In practice, encoding invariances into models improves sample complexity.
We provide minimax optimal rates for kernel ridge regression on compact manifolds.
Our results hold for any smooth compact Lie group action, even groups of positive dimension.
- Score: 37.74032673086741
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In practice, encoding invariances into models improves sample complexity. In
this work, we study this phenomenon from a theoretical perspective. In
particular, we provide minimax optimal rates for kernel ridge regression on
compact manifolds, with a target function that is invariant to a group action
on the manifold. Our results hold for any smooth compact Lie group action, even
groups of positive dimension. For a finite group, the gain effectively
multiplies the number of samples by the group size. For groups of positive
dimension, the gain is observed by a reduction in the manifold's dimension, in
addition to a factor proportional to the volume of the quotient space. Our
proof takes the viewpoint of differential geometry, in contrast to the more
common strategy of using invariant polynomials. This new geometric viewpoint on
learning with invariances may be of independent interest.
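To make the stated gains concrete, here is a schematic of how they would modify the classical minimax rate for kernel ridge regression; the Sobolev smoothness $s$, the squared $L^2$ loss, and the exact placement of the quotient-volume factor are illustrative assumptions, not the paper's precise statement. Writing $\hat f_n$ for the estimator from $n$ samples, $f^*$ for the invariant target, $d$ for the dimension of the manifold $\mathcal{M}$, and $G$ for the group:
$$\mathbb{E}\,\|\hat f_n - f^*\|_{L^2(\mathcal{M})}^2 \;\asymp\; n^{-\frac{2s}{2s+d}} \quad \text{(no invariance encoded)}$$
$$\mathbb{E}\,\|\hat f_n - f^*\|_{L^2(\mathcal{M})}^2 \;\asymp\; \bigl(|G|\,n\bigr)^{-\frac{2s}{2s+d}} \quad \text{(finite group: samples effectively multiplied by } |G|\text{)}$$
$$\mathbb{E}\,\|\hat f_n - f^*\|_{L^2(\mathcal{M})}^2 \;\asymp\; \Bigl(c\,n\Bigr)^{-\frac{2s}{2s+(d-\dim G)}} \quad \text{(positive-dimensional group, schematic)}$$
The last display is only a paraphrase of the abstract: the dimension in the exponent drops to that of the quotient space $\mathcal{M}/G$, and the effective sample size picks up a multiplicative factor $c$ involving the volume of the quotient space (the finite-group case, where $\mathrm{vol}(\mathcal{M}/G) = \mathrm{vol}(\mathcal{M})/|G|$, suggests this factor scales inversely with that volume).
A common way to encode such an invariance into a kernel method is to average the kernel over the group. The following minimal NumPy sketch illustrates that idea; it is not the paper's construction, and the function names, the RBF base kernel, and the Z_2-on-the-circle example are illustrative choices only.
import numpy as np

def averaged_kernel(base_kernel, group):
    # Group-averaged kernel: K_G(x, y) = mean over g in G of K(x, g(y)).
    # For an isometric group action and an isometry-invariant base kernel
    # (such as the RBF below), averaging over one argument already makes
    # the kernel, and hence the KRR estimate, invariant to the group action.
    def k(X, Y):
        return np.mean(
            [base_kernel(X, np.stack([g(y) for y in Y])) for g in group], axis=0
        )
    return k

def krr_fit_predict(kernel, X, y, X_test, lam=1e-2):
    # Standard kernel ridge regression with the (possibly averaged) kernel.
    K = kernel(X, X)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return kernel(X_test, X) @ alpha

# Toy example: a target on the circle S^1 that is invariant under the
# antipodal map theta -> theta + pi (the finite group Z_2 acting on S^1).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 40)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
y = np.cos(2 * theta)                                   # Z_2-invariant target
rbf = lambda A, B: np.exp(-np.linalg.norm(A[:, None] - B[None], axis=-1) ** 2)
group = [lambda p: p, lambda p: -p]                     # identity, antipodal map
theta_test = np.linspace(0, 2 * np.pi, 100)
X_test = np.stack([np.cos(theta_test), np.sin(theta_test)], axis=1)
pred = krr_fit_predict(averaged_kernel(rbf, group), X, y, X_test)
In line with the abstract, such an invariant kernel should behave roughly as if it had seen $|G| = 2$ times as many samples as the plain RBF estimator on the same data.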
Related papers
- Global optimality under amenable symmetry constraints [0.5656581242851759]
We show the interplay between convexity, the group, and the underlying vector space, which is typically infinite-dimensional.
We apply this toolkit to the invariant optimality problem.
It yields new results on invariant kernel mean embeddings and risk-optimal invariant couplings.
arXiv Detail & Related papers (2024-02-12T12:38:20Z) - Sample Complexity Bounds for Estimating Probability Divergences under Invariances [31.946304450935628]
Group-invariant probability distributions appear in many data-generative models in machine learning.
In this work, we study how the inherent invariances, with respect to any smooth action of a Lie group on a manifold, improve sample complexity.
Results are completely new for groups of positive dimension and extend recent bounds for finite group actions.
arXiv Detail & Related papers (2023-11-06T04:45:21Z) - Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high dimensional tasks on nontrivial manifold.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z) - Lie Group Decompositions for Equivariant Neural Networks [12.139222986297261]
We show how convolution kernels can be parametrized to build models equivariant with respect to affine transformations.
We evaluate the robustness and out-of-distribution generalisation capability of our model on the benchmark affine-invariant classification task.
arXiv Detail & Related papers (2023-10-17T16:04:33Z) - Conformal inference for regression on Riemannian Manifolds [49.7719149179179]
We investigate prediction sets for regression scenarios when the response variable, denoted by $Y$, resides in a manifold, and the covariable, denoted by X, lies in Euclidean space.
We prove the almost sure convergence of the empirical version of these regions on the manifold to their population counterparts.
arXiv Detail & Related papers (2023-10-12T10:56:25Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimension modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - On the Sample Complexity of Learning with Geometric Stability [42.813141600050166]
We study the sample complexity of learning problems where the target function presents such invariance and stability properties.
We provide non-parametric rates of convergence for kernel methods, and improvements in sample complexity by a factor equal to the size of the group.
arXiv Detail & Related papers (2021-06-14T03:51:16Z) - LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z) - Disentangling by Subspace Diffusion [72.1895236605335]
We show that fully unsupervised factorization of a data manifold is possible if the true metric of the manifold is known.
Our work reduces the question of whether unsupervised disentanglement is possible to that of whether unsupervised metric learning is possible, providing a unifying insight into the geometric nature of representation learning.
arXiv Detail & Related papers (2020-06-23T13:33:19Z) - Sample complexity and effective dimension for regression on manifolds [13.774258153124205]
We consider the theory of regression on a manifold using reproducing kernel Hilbert space methods.
We show that certain spaces of smooth functions on a manifold are effectively finite-dimensional, with a complexity that scales according to the manifold dimension.
arXiv Detail & Related papers (2020-06-13T14:09:55Z)