On the potential benefits of entropic regularization for smoothing Wasserstein estimators
- URL: http://arxiv.org/abs/2210.06934v3
- Date: Mon, 28 Oct 2024 18:04:29 GMT
- Title: On the potential benefits of entropic regularization for smoothing Wasserstein estimators
- Authors: Jérémie Bigot, Paul Freulon, Boris P. Hejblum, Arthur Leclaire
- Abstract summary: This paper focuses on the study of entropic regularization in optimal transport as a smoothing method for Wasserstein estimators.
We discuss how entropic regularization may reach, at a lower computational cost, statistical performances comparable to those of un-regularized Wasserstein estimators.
- Score: 3.194167428464958
- Abstract: This paper is focused on the study of entropic regularization in optimal transport as a smoothing method for Wasserstein estimators, through the prism of the classical tradeoff between approximation and estimation errors in statistics. Wasserstein estimators are defined as solutions of variational problems whose objective function involves the use of an optimal transport cost between probability measures. Such estimators can be regularized by replacing the optimal transport cost by its regularized version using an entropy penalty on the transport plan. The use of such a regularization has a potentially significant smoothing effect on the resulting estimators. In this work, we investigate its potential benefits on the approximation and estimation properties of regularized Wasserstein estimators. Our main contribution is to discuss how entropic regularization may reach, at a lower computational cost, statistical performances that are comparable to those of un-regularized Wasserstein estimators in statistical learning problems involving distributional data analysis. To this end, we present new theoretical results on the convergence of regularized Wasserstein estimators. We also study their numerical performances using simulated and real data in the supervised learning problem of proportions estimation in mixture models using optimal transport.
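The entropy-penalized transport cost discussed in the abstract is typically computed with Sinkhorn's matrix-scaling iterations, which is where the lower computational cost comes from. Below is a minimal sketch of entropic OT between two discrete measures; it is illustrative only (the function name, toy data, and regularization strength are not taken from the paper).

```python
import numpy as np

def sinkhorn_plan(a, b, C, eps, n_iter=1000):
    """Entropic OT between discrete measures with weights a, b and cost
    matrix C: the entropy penalty of strength eps turns the linear program
    into a matrix-scaling problem solved by Sinkhorn iterations."""
    K = np.exp(-C / eps)              # Gibbs kernel (small eps may need log-domain stabilization)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)             # scale columns so the plan's column sums match b
        u = a / (K @ v)               # scale rows so the plan's row sums match a
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    cost = np.sum(P * C)              # transport cost of the regularized plan
    return P, cost

# Toy usage: two empirical measures on the real line
rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 1.0, 50), rng.normal(2.0, 1.0, 60)
a, b = np.full(50, 1 / 50), np.full(60, 1 / 60)
C = (x[:, None] - y[None, :]) ** 2    # squared-distance cost
P, cost = sinkhorn_plan(a, b, C, eps=0.5)
```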
Related papers
- Distributionally Robust Instrumental Variables Estimation [10.765695227417865]
We propose a distributionally robust framework for instrumental variables (IV) estimation.
We show that Wasserstein DRIVE could be preferable in practice, particularly when the practitioner is uncertain about model assumptions or distributional shifts in data.
arXiv Detail & Related papers (2024-10-21T04:33:38Z)
- Asymptotically Optimal Change Detection for Unnormalized Pre- and Post-Change Distributions [65.38208224389027]
This paper addresses the problem of detecting changes when only unnormalized pre- and post-change distributions are accessible.
Our approach is based on the estimation of the Cumulative Sum (CUSUM) statistic, which is known to produce optimal performance.
arXiv Detail & Related papers (2024-10-18T17:13:29Z)
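For context, the CUSUM statistic referenced in the entry above is classically computed by the following recursion over the log-likelihood ratio of the post-change density $p_1$ to the pre-change density $p_0$; the paper's contribution concerns estimating it when $p_0$ and $p_1$ are only available in unnormalized form.

```latex
% Classical CUSUM recursion and stopping rule (textbook form, not quoted from the paper)
\[
  W_0 = 0, \qquad
  W_n = \max\!\left(0,\; W_{n-1} + \log\frac{p_1(X_n)}{p_0(X_n)}\right), \qquad
  \tau = \inf\{\, n \ge 1 : W_n \ge b \,\}.
\]
```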
- Statistical Efficiency of Score Matching: The View from Isoperimetry [96.65637602827942]
We show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated.
We formalize these results both in the finite-sample regime and in the asymptotic regime.
arXiv Detail & Related papers (2022-10-03T06:09:01Z)
- Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent algorithm and provide an improved analysis under a more nuanced condition on the gradient noise.
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
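A minimal sketch of the generic clipped-gradient update that the entry above builds on; the loss, step size, and clipping threshold here are illustrative placeholders rather than the paper's actual algorithm or tuning.

```python
import numpy as np

def clipped_sgd(grad_fn, theta0, samples, step=0.01, clip=1.0):
    """Streaming estimation with gradient clipping: each per-sample gradient
    is rescaled to have norm at most `clip` before the update, which tames
    heavy-tailed gradient noise."""
    theta = np.asarray(theta0, dtype=float)
    for x in samples:
        g = grad_fn(theta, x)
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)     # project the gradient onto the clip ball
        theta = theta - step * g
    return theta

# Toy usage: streaming mean estimation with heavy-tailed data
rng = np.random.default_rng(0)
stream = rng.standard_t(df=2, size=(10_000, 3))   # heavy-tailed samples
mean_grad = lambda theta, x: theta - x            # gradient of 0.5 * ||theta - x||^2
theta_hat = clipped_sgd(mean_grad, np.zeros(3), stream, step=0.01, clip=2.0)
```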
- Rates of Estimation of Optimal Transport Maps using Plug-in Estimators via Barycentric Projections [0.0]
We provide a comprehensive analysis of the rates of convergence for general plug-in estimators defined via barycentric projections.
Our main contribution is a new stability estimate for barycentric projections which proceeds under minimal smoothness assumptions.
We illustrate the usefulness of this stability estimate by first providing rates of convergence for the natural discrete-discrete and semi-discrete estimators of optimal transport maps.
arXiv Detail & Related papers (2021-07-04T19:50:20Z)
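For reference, the barycentric projection of a transport plan $\pi$ is the map obtained by averaging $\pi$'s conditional distributions; in the discrete case it is a weighted average of target points (standard definition, not quoted from the paper):

```latex
% Barycentric projection of a plan \pi between \mu and \nu, and its discrete counterpart
\[
  T_{\pi}(x) \;=\; \mathbb{E}_{(X,Y)\sim\pi}\!\left[\, Y \mid X = x \,\right],
  \qquad\text{discrete case:}\quad
  \widehat{T}(x_i) \;=\; \frac{\sum_{j} \widehat{\pi}_{ij}\, y_j}{\sum_{j} \widehat{\pi}_{ij}} .
\]
```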
- Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
arXiv Detail & Related papers (2021-02-02T21:01:13Z)
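The weighted average referred to above is the Wasserstein-2 barycenter, whose standard definition, for weights $w_i \ge 0$ summing to one, is:

```latex
% Wasserstein-2 barycenter of measures \mu_1, ..., \mu_N with weights w_1, ..., w_N
\[
  \bar{\mu} \;\in\; \operatorname*{arg\,min}_{\nu \in \mathcal{P}_2(\mathbb{R}^d)}
  \;\sum_{i=1}^{N} w_i \, W_2^2(\mu_i, \nu).
\]
```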
- Robust W-GAN-Based Estimation Under Wasserstein Contamination [8.87135311567798]
We study several estimation problems under a Wasserstein contamination model and present computationally tractable estimators motivated by generative adversarial networks (GANs).
Specifically, we analyze properties of Wasserstein GAN-based estimators for adversarial location estimation, covariance matrix estimation, and linear regression.
Our proposed estimators are minimax optimal in many scenarios.
arXiv Detail & Related papers (2021-01-20T05:15:16Z)
- On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification [101.0377583883137]
Projection robust (PR) OT seeks to maximize the OT cost between two measures by choosing a $k$-dimensional subspace onto which they can be projected.
Our first contribution is to establish several fundamental statistical properties of PR Wasserstein distances.
Next, we propose the integral PR Wasserstein (IPRW) distance as an alternative to the PRW distance, by averaging rather than optimizing on subspaces.
arXiv Detail & Related papers (2020-06-22T14:35:33Z)
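In the notation commonly used for these distances (reconstructed here rather than quoted from the paper), with $\mathcal{G}_k$ the set of $k$-dimensional subspaces of $\mathbb{R}^d$, $\pi_E$ the orthogonal projection onto $E$, and $\sigma$ the uniform measure on $\mathcal{G}_k$:

```latex
% Projection robust Wasserstein (PRW): maximize over subspaces.
% Integral PRW (IPRW): average over subspaces instead of optimizing.
\[
  \mathcal{P}_k(\mu,\nu) \;=\; \sup_{E \in \mathcal{G}_k}
    W_p\!\big((\pi_E)_{\#}\mu,\,(\pi_E)_{\#}\nu\big),
  \qquad
  \mathcal{I}_k(\mu,\nu) \;=\;
    \Big( \int_{\mathcal{G}_k} W_p^p\!\big((\pi_E)_{\#}\mu,\,(\pi_E)_{\#}\nu\big)\, d\sigma(E) \Big)^{1/p}.
\]
```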
- Nonparametric inverse probability weighted estimators based on the highly adaptive lasso [0.966840768820136]
Nonparametric inverse probability weighted estimators are known to be inefficient and suffer from the curse of dimensionality.
We propose a class of nonparametric inverse probability weighted estimators in which the weighting mechanism is estimated via undersmoothing of the highly adaptive lasso.
Our developments have broad implications for the construction of efficient inverse probability weighted estimators in large statistical models and a variety of problem settings.
arXiv Detail & Related papers (2020-05-22T17:49:46Z)
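For readers outside causal inference, a generic inverse probability weighted estimator of a mean counterfactual outcome has the textbook form below, where $\widehat{g}_n$ is the estimated propensity score; in the entry above, $\widehat{g}_n$ is fit nonparametrically with an undersmoothed highly adaptive lasso.

```latex
% Standard IPW estimator of the mean outcome under treatment A = 1,
% with covariates W and estimated propensity score \hat{g}_n (textbook form)
\[
  \widehat{\psi}_n \;=\; \frac{1}{n} \sum_{i=1}^{n}
  \frac{\mathbb{1}\{A_i = 1\}\, Y_i}{\widehat{g}_n(W_i)} .
\]
```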
- SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
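The randomized truncation referred to above is the standard Russian-roulette construction: if the target is the limit of a sequence of computable terms $\ell_k$ (here, increasingly tight lower bounds on the log marginal likelihood), truncating the telescoping series at a random index and reweighting by survival probabilities yields an unbiased estimator. Schematically (indexing conventions are illustrative, not copied from the paper):

```latex
% Russian-roulette estimator: E[\hat{L}] = \ell_1 + \sum_{k >= 1} (\ell_{k+1} - \ell_k) = \lim_k \ell_k,
% provided the reweighted series converges suitably
\[
  \widehat{L} \;=\; \ell_1 + \sum_{k=1}^{K} \frac{\ell_{k+1} - \ell_k}{\mathbb{P}(K \ge k)},
  \qquad K \text{ drawn from a fixed distribution on } \{1, 2, \dots\}.
\]
```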
- Statistical Optimal Transport posed as Learning Kernel Embedding [0.0]
This work takes the novel approach of posing statistical Optimal Transport (OT) as that of learning the transport plan's kernel mean embedding from sample-based estimates of marginal embeddings.
A key result is that, under very mild conditions, $\epsilon$-optimal recovery of the transport plan as well as the barycentric-projection-based transport map is possible with a sample complexity that is completely dimension-free.
arXiv Detail & Related papers (2020-02-08T14:58:53Z)