Sample complexity and effective dimension for regression on manifolds
- URL: http://arxiv.org/abs/2006.07642v3
- Date: Fri, 16 Oct 2020 14:58:46 GMT
- Title: Sample complexity and effective dimension for regression on manifolds
- Authors: Andrew McRae and Justin Romberg and Mark Davenport
- Abstract summary: We consider the theory of regression on a manifold using reproducing kernel Hilbert space methods.
We show that certain spaces of smooth functions on a manifold are effectively finite-dimensional, with a complexity that scales according to the manifold dimension.
- Score: 13.774258153124205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the theory of regression on a manifold using reproducing kernel
Hilbert space methods. Manifold models arise in a wide variety of modern
machine learning problems, and our goal is to help understand the effectiveness
of various implicit and explicit dimensionality-reduction methods that exploit
manifold structure. Our first key contribution is to establish a novel
nonasymptotic version of the Weyl law from differential geometry. From this we
are able to show that certain spaces of smooth functions on a manifold are
effectively finite-dimensional, with a complexity that scales according to the
manifold dimension rather than any ambient data dimension. Finally, we show
that given (potentially noisy) function values taken uniformly at random over a
manifold, a kernel regression estimator (derived from the spectral
decomposition of the manifold) yields minimax-optimal error bounds that are
controlled by the effective dimension.
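As a concrete illustration (a minimal sketch, not the paper's exact estimator), take the manifold to be the circle $S^1$, where the Laplace-Beltrami eigenfunctions are the Fourier modes. A least-squares fit over the eigenfunction expansion truncated at frequency K mimics a spectrally derived regression estimator; the number of coefficients, 2K + 1, is the effective dimension, and it scales with the manifold dimension (one) rather than any ambient dimension.

```python
import numpy as np

# Minimal sketch (not the paper's construction): spectral regression on the
# circle S^1, whose Laplace-Beltrami eigenfunctions are the Fourier basis.
# Noisy function values are fit with an expansion truncated at frequency K;
# the "effective dimension" here is 2K + 1, set by the manifold dimension
# (one), not by any ambient embedding dimension.

rng = np.random.default_rng(0)

n, K, noise = 500, 10, 0.1
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)      # uniform samples on S^1
f_true = lambda t: np.sin(3 * t) + 0.5 * np.cos(7 * t)
y = f_true(theta) + noise * rng.standard_normal(n)

def fourier_features(t, K):
    """Eigenfunctions of the Laplacian on S^1 up to frequency K."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.stack(cols, axis=1)                  # shape (n, 2K + 1)

Phi = fourier_features(theta, K)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # least-squares fit

t_grid = np.linspace(0.0, 2.0 * np.pi, 1000)
f_hat = fourier_features(t_grid, K) @ coef
print("max error on grid:", np.abs(f_hat - f_true(t_grid)).max())
```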
Related papers
- Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high-dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z)
- Manifold Learning by Mixture Models of VAEs for Inverse Problems [1.5749416770494704]
We learn a mixture model of variational autoencoders to represent a manifold of arbitrary topology.
We use it for solving inverse problems by minimizing a data fidelity term restricted to the learned manifold.
We demonstrate the performance of our method for low-dimensional toy examples as well as for deblurring and electrical impedance tomography.
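A minimal sketch of the fidelity-on-the-manifold idea, with a toy `decode` standing in for the trained VAE mixture and a known linear forward operator `A` (both hypothetical): the reconstruction is confined to the learned manifold by optimizing only the latent variable.

```python
import numpy as np

# Minimal sketch (assumptions: `decode` stands in for a trained decoder, A is
# a known linear forward operator). The inverse problem is solved by
# minimizing the data-fidelity term ||A decode(z) - y||^2 over the latent z,
# which restricts the reconstruction to the learned manifold.

rng = np.random.default_rng(1)

d_latent, d_ambient, d_obs = 2, 10, 5
W = rng.standard_normal((d_ambient, d_latent))     # toy "decoder" weights

def decode(z):
    return np.tanh(W @ z)                          # stand-in generator

A = rng.standard_normal((d_obs, d_ambient))        # forward operator
z_true = rng.standard_normal(d_latent)
y = A @ decode(z_true) + 0.01 * rng.standard_normal(d_obs)

z = np.zeros(d_latent)
lr = 0.05
for _ in range(2000):
    x = decode(z)
    r = A @ x - y                                  # residual
    # chain rule for d/dz ||A tanh(Wz) - y||^2 (constant factor absorbed in lr)
    grad = W.T @ ((1.0 - x**2) * (A.T @ r))
    z -= lr * grad

print("data fidelity:", np.linalg.norm(A @ decode(z) - y))
```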
arXiv Detail & Related papers (2023-03-27T14:29:04Z)
- The Exact Sample Complexity Gain from Invariances for Kernel Regression [37.74032673086741]
In practice, encoding invariances into models reduces sample complexity.
We provide minimax-optimal rates for kernel ridge regression on compact manifolds.
Our results hold for any smooth compact Lie group action, even groups of positive dimension.
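A minimal sketch of encoding an invariance into a kernel, here for a finite cyclic group rather than the positive-dimensional Lie groups the paper covers: averaging a base kernel over the group orbit yields a kernel invariant to the group action.

```python
import numpy as np

# Minimal sketch (finite-group case only; the paper handles general compact
# Lie group actions): make a Gaussian kernel invariant under cyclic shifts of
# the input by averaging over the group orbit, K_G(x, x') = mean_g K(g.x, x').

def rbf(x, xp, gamma=1.0):
    return np.exp(-gamma * np.sum((x - xp) ** 2))

def invariant_kernel(x, xp, gamma=1.0):
    d = len(x)
    # average over all cyclic shifts of x (the group C_d acting by rotation)
    return np.mean([rbf(np.roll(x, g), xp, gamma) for g in range(d)])

x = np.array([1.0, 2.0, 3.0, 4.0])
xp = np.array([0.5, 1.5, 2.5, 3.5])
# invariance check: shifting x leaves the kernel value unchanged
print(invariant_kernel(x, xp), invariant_kernel(np.roll(x, 1), xp))
```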
arXiv Detail & Related papers (2023-03-24T20:47:31Z)
- Convolutional Filtering on Sampled Manifolds [122.06927400759021]
We show that convolutional filtering on a sampled manifold converges to continuous manifold filtering.
Our findings are further demonstrated empirically on a problem of navigation control.
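A minimal sketch of the discrete side of this result (an illustrative stand-in, not the paper's construction): a polynomial graph filter $h(L)x = \sum_k h_k L^k x$ applied to a signal on points sampled from the unit circle, with $L$ a graph Laplacian built from the samples; the paper shows such discrete filters converge to continuous manifold filters as the sampling densifies.

```python
import numpy as np

# Minimal sketch: polynomial graph filter h(L) x = sum_k h_k L^k x on a
# signal over points sampled from a manifold (here the unit circle S^1).

rng = np.random.default_rng(2)

n, eps = 300, 0.2
theta = np.sort(rng.uniform(0, 2 * np.pi, n))
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # samples on S^1

# Gaussian-weighted adjacency and (unnormalized) graph Laplacian
D2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
W = np.exp(-D2 / eps**2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

x = np.sin(2 * theta) + 0.3 * rng.standard_normal(n)    # noisy signal
h = [1.0, -0.01, 0.0001]                                # filter taps h_k

y, Lk_x = np.zeros(n), x.copy()
for hk in h:
    y += hk * Lk_x          # accumulate h_k * L^k x
    Lk_x = L @ Lk_x

print("input vs filtered energy:", np.linalg.norm(x), np.linalg.norm(y))
```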
arXiv Detail & Related papers (2022-11-20T19:09:50Z)
- Measuring dissimilarity with diffeomorphism invariance [94.02751799024684]
We introduce DID, a pairwise dissimilarity measure applicable to a wide range of data spaces.
We prove that DID enjoys properties which make it relevant for theoretical study and practical use.
arXiv Detail & Related papers (2022-02-11T13:51:30Z)
- Inferring Manifolds From Noisy Data Using Gaussian Processes [17.166283428199634]
Most existing manifold learning algorithms replace the original data with lower dimensional coordinates.
This article proposes a new methodology for addressing these problems, allowing interpolation of the estimated manifold between fitted data points.
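A minimal sketch of manifold interpolation with a Gaussian process (a stand-in, not the paper's full methodology): GP regression from a one-dimensional coordinate to noisy 2-D points on a curve lets the estimated manifold be evaluated between the fitted data points.

```python
import numpy as np

# Minimal sketch: Gaussian process regression from a 1-D coordinate t to
# noisy 2-D points on a curve, so the estimated manifold can be queried
# *between* the fitted data points.

rng = np.random.default_rng(3)

def k(a, b, ell=0.5):
    """Squared-exponential kernel matrix between coordinate vectors a, b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

n, sigma = 40, 0.05
t = np.sort(rng.uniform(0, 2 * np.pi, n))
X = np.stack([np.cos(t), np.sin(t)], axis=1) + sigma * rng.standard_normal((n, 2))

Knn = k(t, t) + sigma**2 * np.eye(n)
t_new = np.linspace(0, 2 * np.pi, 200)             # query between samples
mean = k(t_new, t) @ np.linalg.solve(Knn, X)       # posterior mean, (200, 2)

# distance of the interpolated curve from the true unit circle
print("max radial error:", np.abs(np.linalg.norm(mean, axis=1) - 1.0).max())
```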
arXiv Detail & Related papers (2021-10-14T15:50:38Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality-reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Disentangling by Subspace Diffusion [72.1895236605335]
We show that fully unsupervised factorization of a data manifold is possible if the true metric of the manifold is known.
Our work reduces the question of whether unsupervised factorization is possible to that of whether unsupervised metric learning is possible, providing a unifying insight into the geometric nature of representation learning.
arXiv Detail & Related papers (2020-06-23T13:33:19Z)
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
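A minimal sketch of the guided-search idea (an illustrative variant, not the paper's algorithm): plain random search on a 100-dimensional objective that depends only on a hidden 3-dimensional subspace, with the perturbation basis refit online by SVD on recently accepted steps so the search gradually learns the latent manifold.

```python
import numpy as np

# Minimal sketch (illustrative variant): derivative-free random search whose
# perturbation directions are refit online by SVD on recently accepted steps,
# so the search concentrates on the latent low-dimensional subspace.

rng = np.random.default_rng(4)

D, d = 100, 3
U = np.linalg.qr(rng.standard_normal((D, d)))[0]    # hidden latent subspace
f = lambda x: np.sum((U.T @ x - 1.0) ** 2)          # depends only on U^T x

x, step = np.zeros(D), 0.5
accepted = []
basis = np.eye(D)                                   # start: search all of R^D

for it in range(3000):
    direction = basis @ rng.standard_normal(basis.shape[1])
    cand = x + step * direction / np.linalg.norm(direction)
    if f(cand) < f(x):
        accepted.append(cand - x)                   # record successful step
        x = cand
    # periodically refit a d-dimensional search basis from accepted steps
    if len(accepted) >= 20:
        _, _, Vt = np.linalg.svd(np.array(accepted), full_matrices=False)
        basis = Vt[:d].T                            # top-d directions, (D, d)
        accepted = []

print("final objective:", f(x))
```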
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.