Max-sliced Wasserstein concentration and uniform ratio bounds of empirical measures on RKHS
- URL: http://arxiv.org/abs/2405.13153v1
- Date: Tue, 21 May 2024 18:47:43 GMT
- Title: Max-sliced Wasserstein concentration and uniform ratio bounds of empirical measures on RKHS
- Authors: Ruiyu Han, Cynthia Rush, Johannes Wiesel
- Abstract summary: Optimal transport and the Wasserstein distance $\mathcal{W}_p$ have recently seen a number of applications in the fields of statistics, machine learning, data science, and the physical sciences.
These applications are however severely restricted by the curse of dimensionality, meaning that the number of data points needed to estimate these problems accurately increases exponentially in the dimension.
To alleviate this problem, a number of variants of $\mathcal{W}_p$ have been introduced; we focus here on one of these variants, namely the max-sliced Wasserstein metric $\overline{\mathcal{W}}_p$.
- Score: 9.783697404304025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimal transport and the Wasserstein distance $\mathcal{W}_p$ have recently seen a number of applications in the fields of statistics, machine learning, data science, and the physical sciences. These applications are however severely restricted by the curse of dimensionality, meaning that the number of data points needed to estimate these problems accurately increases exponentially in the dimension. To alleviate this problem, a number of variants of $\mathcal{W}_p$ have been introduced. We focus here on one of these variants, namely the max-sliced Wasserstein metric $\overline{\mathcal{W}}_p$. This metric reduces the high-dimensional minimization problem given by $\mathcal{W}_p$ to a maximum of one-dimensional measurements in an effort to overcome the curse of dimensionality. In this note we derive concentration results and upper bounds on the expectation of $\overline{\mathcal{W}}_p$ between the true and empirical measure on unbounded reproducing kernel Hilbert spaces. We show that, under quite generic assumptions, probability measures concentrate uniformly fast in one-dimensional subspaces, at (nearly) parametric rates. Our results rely on an improvement of currently known bounds for $\overline{\mathcal{W}}_p$ in the finite-dimensional case.
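The dimension reduction behind $\overline{\mathcal{W}}_p$ is easy to make concrete: project both measures onto a unit direction $\theta$, compute the closed-form one-dimensional $\mathcal{W}_p$ between the projections, and take the supremum over $\theta$. Below is a minimal Python sketch under stated assumptions (equal sample sizes, and a crude random search over directions in place of an exact maximisation; all function names are illustrative), not the authors' estimator:

```python
# Minimal sketch: approximate the max-sliced Wasserstein distance between
# two empirical measures by maximising the 1-D W_p over random directions.
import numpy as np

def sliced_wp(x_proj, y_proj, p=2):
    """1-D W_p between empirical measures with equal sample sizes:
    sort both projected samples and average |differences|^p."""
    xs, ys = np.sort(x_proj), np.sort(y_proj)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

def max_sliced_wp(x, y, p=2, n_dirs=1000, seed=None):
    """Crude approximation of sup_{|theta|=1} W_p(theta#mu_n, theta#nu_n)
    via a maximum over n_dirs random unit vectors theta."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    thetas = rng.standard_normal((n_dirs, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return max(sliced_wp(x @ th, y @ th, p) for th in thetas)

# Example: two empirical measures with 500 samples each in R^10.
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 10))
y = rng.standard_normal((500, 10)) + 0.5  # shifted copy
print(max_sliced_wp(x, y, p=2))
```

Random search only lower-bounds the supremum, but it illustrates why each evaluation is cheap: the one-dimensional transport problem is solved by sorting.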
Related papers
- Kernel Density Estimators in Large Dimensions [9.299356601085586]
We study the kernel-based estimate of the density $\hat\rho_h^{\mathcal{D}}(x)=\frac{1}{n h^d}\sum_{i=1}^n K\left(\frac{x-y_i}{h}\right)$, depending on the bandwidth $h$.
We show that the optimal bandwidth threshold based on Kullback-Leibler divergence lies in the new statistical regime identified in this paper.
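The displayed estimator is straightforward to implement; here is a minimal sketch with a Gaussian kernel $K$ (the kernel choice and the function names are assumptions for illustration, not taken from the paper):

```python
# Minimal sketch of the kernel density estimate
#   rho_hat_h(x) = (1 / (n h^d)) * sum_i K((x - y_i) / h)
# with a Gaussian kernel K; the bandwidth h is the key tuning parameter.
import numpy as np

def kde(x, samples, h):
    """Evaluate the KDE at a single point x in R^d."""
    n, d = samples.shape
    u = (x - samples) / h                                        # (n, d) scaled differences
    K = np.exp(-0.5 * np.sum(u**2, axis=1)) / (2 * np.pi)**(d / 2)  # Gaussian kernel values
    return K.sum() / (n * h**d)

rng = np.random.default_rng(0)
y = rng.standard_normal((1000, 2))        # samples y_1, ..., y_n in R^2
print(kde(np.zeros(2), y, h=0.3))         # density estimate at the origin
```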
arXiv Detail & Related papers (2024-08-11T15:56:44Z)
- Statistical and Computational Guarantees of Kernel Max-Sliced Wasserstein Distances [9.608373793625107]
The kernel max-sliced (KMS) Wasserstein distance is developed for this purpose.
We show that computing the KMS $2$-Wasserstein distance is NP-hard.
arXiv Detail & Related papers (2024-05-24T11:14:56Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Properties of Discrete Sliced Wasserstein Losses [11.280151521887076]
The Sliced Wasserstein (SW) distance has become a popular alternative to the Wasserstein distance for comparing probability measures.
Widespread applications include image processing, domain adaptation and generative modelling, where it is common to optimise some parameters in order to minimise SW.
We investigate the regularity and optimisation properties of this energy (the SW distance between a parametrised discrete measure and a target measure), as well as its Monte-Carlo approximation $\mathcal{E}_p$.
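For intuition, the Monte-Carlo approximation replaces the average of the one-dimensional $\mathcal{W}_p^p$ over all directions on the sphere by an average over finitely many random directions. A minimal sketch (illustrative names, not the paper's code):

```python
# Minimal sketch: Monte-Carlo estimate of the sliced Wasserstein distance.
# SW_p^p is the average of the 1-D W_p^p over uniform directions on the
# sphere; we approximate the integral with m random directions.
import numpy as np

def monte_carlo_sw(x, y, p=2, m=500, seed=None):
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    thetas = rng.standard_normal((m, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    vals = [np.mean(np.abs(np.sort(x @ th) - np.sort(y @ th)) ** p)
            for th in thetas]                 # 1-D W_p^p for each direction
    return np.mean(vals) ** (1.0 / p)

rng = np.random.default_rng(1)
x = rng.standard_normal((400, 5))
y = 0.8 * rng.standard_normal((400, 5))
print(monte_carlo_sw(x, y))
```

Note the contrast with the max-sliced sketch above: here the directions are averaged rather than maximised over.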
arXiv Detail & Related papers (2023-07-19T21:21:18Z)
- Effective Minkowski Dimension of Deep Nonparametric Regression: Function Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion -- effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z)
- Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z)
- Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs [111.75736569611159]
We propose the first computationally efficient horizon-free algorithm for linear mixture MDPs.
Our algorithm adapts a weighted least squares estimator for the unknown transition dynamics.
This also improves upon the best-known algorithms in this setting when the $\sigma_k^2$'s are known.
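As a point of reference, a generic weighted least squares fit solves the normal equations $X^\top W X\,\theta = X^\top W y$, down-weighting high-variance observations. The sketch below is this textbook estimator (with hypothetical names), not the paper's horizon-free construction:

```python
# Minimal sketch: generic weighted least squares, minimising
#   sum_k w_k * (y_k - x_k . theta)^2,
# with larger weights on lower-variance observations.
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve the normal equations (X^T W X) theta = X^T W y."""
    Xw = X * w[:, None]                       # scale each row by its weight
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 3))
theta_true = np.array([1.0, -2.0, 0.5])
sigma = rng.uniform(0.1, 1.0, size=200)       # heteroscedastic noise levels
y = X @ theta_true + sigma * rng.standard_normal(200)
print(weighted_least_squares(X, y, w=1.0 / sigma**2))  # close to theta_true
```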
arXiv Detail & Related papers (2022-05-23T17:59:18Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper bounds can be extended to two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Dimensionality Reduction for Wasserstein Barycenter [6.327655795051619]
We study dimensionality reduction techniques for the Wasserstein barycenter problem.
We show that randomized dimensionality reduction can be used to map the problem to a space of dimension $O(\log n)$ independent of both $d$ and $k$.
We also provide a coreset construction for the Wasserstein barycenter problem that significantly decreases the number of input distributions.
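The $O(\log n)$ target dimension is the hallmark of Johnson-Lindenstrauss-style random projection; here is a minimal sketch of a plain Gaussian JL map (an assumption for illustration, not the paper's specific construction):

```python
# Minimal sketch: Gaussian Johnson-Lindenstrauss projection mapping n points
# in R^d down to k = O(log n / eps^2) dimensions, approximately preserving
# pairwise distances up to a factor (1 +/- eps).
import numpy as np

def jl_project(points, eps=0.5, seed=None):
    rng = np.random.default_rng(seed)
    n, d = points.shape
    k = int(np.ceil(8 * np.log(n) / eps**2))   # target dimension O(log n)
    G = rng.standard_normal((d, k)) / np.sqrt(k)
    return points @ G

rng = np.random.default_rng(2)
pts = rng.standard_normal((1000, 500))
low = jl_project(pts, eps=0.5)
print(pts.shape, "->", low.shape)              # (1000, 500) -> (1000, 221)
```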
arXiv Detail & Related papers (2021-10-18T02:57:25Z)
- Non-Parametric Estimation of Manifolds from Noisy Data [1.0152838128195467]
We consider the problem of estimating a $d$-dimensional sub-manifold of $\mathbb{R}^D$ from a finite set of noisy samples.
We show that the estimation yields rates of convergence of $n^{-\frac{k}{2k+d}}$ for the point estimation and $n^{-\frac{k-1}{2k+d}}$ for the estimation of the tangent space.
arXiv Detail & Related papers (2021-05-11T02:29:33Z)
- Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
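For one-dimensional measures the Wasserstein-2 barycenter is available in closed form: its quantile function is the weighted average of the input quantile functions. The sketch below uses this 1-D fact with equal-size samples (an illustration of the geometric notion, not the paper's scalable algorithm):

```python
# Minimal sketch: 1-D Wasserstein-2 barycenter of empirical measures with
# equal sample sizes. Sorting each sample gives its empirical quantile
# function; the barycenter's quantiles are their weighted average.
import numpy as np

def w2_barycenter_1d(samples_list, weights):
    sorted_samples = np.stack([np.sort(s) for s in samples_list])
    return np.average(sorted_samples, axis=0, weights=weights)

rng = np.random.default_rng(3)
mu1 = rng.normal(-2.0, 1.0, size=1000)
mu2 = rng.normal(+3.0, 0.5, size=1000)
bary = w2_barycenter_1d([mu1, mu2], weights=[0.5, 0.5])
print(bary.mean())   # roughly 0.5, the weighted average of the two means
```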
arXiv Detail & Related papers (2021-02-02T21:01:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.