A Relaxed Wasserstein Distance Formulation for Mixtures of Radially Contoured Distributions
- URL: http://arxiv.org/abs/2503.13893v1
- Date: Tue, 18 Mar 2025 04:39:31 GMT
- Title: A Relaxed Wasserstein Distance Formulation for Mixtures of Radially Contoured Distributions
- Authors: Keyu Chen, Zetian Wang, Yunxin Zhang
- Abstract summary: We propose a simple relaxed Wasserstein distance for identifiable mixtures of radially contoured distributions. We establish several properties of this distance and show that its definition does not require marginal consistency.
- Score: 5.876704494595038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, a Wasserstein-type distance for Gaussian mixture models has been proposed. However, that framework can only be generalized to identifiable mixtures of general elliptically contoured distributions whose components come from the same family and satisfy marginal consistency. In this paper, we propose a simple relaxed Wasserstein distance for identifiable mixtures of radially contoured distributions whose components can come from different families. We establish several properties of this distance and show that its definition does not require marginal consistency. We apply this distance to color transfer tasks and experimentally compare its performance with the Wasserstein-type distance for Gaussian mixture models. The error of our method is more stable, and the color distribution of our output image is more desirable.
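The abstract positions the proposed distance against the Wasserstein-type distance for Gaussian mixture models. As a point of reference for that baseline, here is a minimal sketch, under simplifying assumptions, of a mixture-level Wasserstein distance: a small discrete optimal-transport problem between components, with pairwise costs given by the closed-form W2 distance between Gaussians. Function names are illustrative; the paper's relaxed distance for radially contoured mixtures is analogous in spirit but not reproduced here.

```python
# Minimal sketch (not the paper's exact relaxed distance): a mixture-level
# Wasserstein distance computed as a discrete optimal-transport problem over
# component pairs, with closed-form W2 costs between Gaussian components.
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linprog

def w2_gaussian_sq(m1, S1, m2, S2):
    """Squared 2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
    rS2 = sqrtm(S2)
    cross = np.real(sqrtm(rS2 @ S1 @ rS2))
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))

def mixture_w2_sq(pi1, comps1, pi2, comps2):
    """min_w sum_{k,l} w_{kl} C_{kl} over couplings w of pi1 and pi2."""
    K, L = len(pi1), len(pi2)
    C = np.array([[w2_gaussian_sq(m1, S1, m2, S2) for (m2, S2) in comps2]
                  for (m1, S1) in comps1])
    # Marginal constraints: row sums equal pi1, column sums equal pi2.
    A_eq = np.zeros((K + L, K * L))
    for k in range(K):
        A_eq[k, k * L:(k + 1) * L] = 1.0
    for l in range(L):
        A_eq[K + l, l::L] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([pi1, pi2]),
                  bounds=(0, None))
    return res.fun

# Toy example: two 2-component mixtures in R^2.
I2 = np.eye(2)
print(mixture_w2_sq(
    np.array([0.5, 0.5]), [(np.zeros(2), I2), (np.ones(2), I2)],
    np.array([0.3, 0.7]), [(np.zeros(2), 2 * I2), (2 * np.ones(2), I2)]))
```

The point of this construction is size: the linear program has only K x L variables, one per component pair, so the mixture-level distance sidesteps continuous optimal transport between the full mixture densities.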
Related papers
- Summarizing Bayesian Nonparametric Mixture Posterior -- Sliced Optimal Transport Metrics for Gaussian Mixtures [10.694077392690447]
Existing methods to summarize posterior inference for mixture models focus on identifying a point estimate of the implied random partition for clustering. We propose a novel approach for summarizing posterior inference in nonparametric Bayesian mixture models, prioritizing density estimation of the mixing measure (or mixture) as an inference target.
arXiv Detail & Related papers (2024-11-22T02:15:38Z)
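For reference on the sliced optimal-transport metrics above, a minimal sketch assuming equal sample sizes and generic Gaussian-mixture inputs (names are illustrative): project both samples onto random directions and average the resulting one-dimensional Wasserstein distances.

```python
# Minimal sketch of a sliced Wasserstein distance between two Gaussian
# mixtures, estimated from samples of equal size.
import numpy as np

def sample_gmm(rng, n, weights, means, covs):
    ks = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.multivariate_normal(means[k], covs[k]) for k in ks])

def sliced_w2(X, Y, n_proj=200, seed=0):
    """Monte Carlo estimate of sliced W2 between equal-size samples X, Y."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)
        # 1-D W2^2 between equal-size empirical measures: compare sorted
        # projections (order statistics) directly.
        total += np.mean((np.sort(X @ theta) - np.sort(Y @ theta)) ** 2)
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(1)
I2 = np.eye(2)
X = sample_gmm(rng, 400, [0.5, 0.5], [np.zeros(2), 3 * np.ones(2)], [I2, I2])
Y = sample_gmm(rng, 400, [0.5, 0.5], [np.zeros(2), 4 * np.ones(2)], [I2, I2])
print("sliced W2 estimate:", sliced_w2(X, Y))
```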
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Unraveling the Smoothness Properties of Diffusion Models: A Gaussian Mixture Perspective [18.331374727331077]
We provide a theoretical understanding of the Lipschitz continuity and second-moment properties of the diffusion process.
Our results provide deeper theoretical insights into the dynamics of the diffusion process under common data distributions.
arXiv Detail & Related papers (2024-05-26T03:32:27Z)
- Unsupervised Learning of Sampling Distributions for Particle Filters [80.6716888175925]
We put forward four methods for learning sampling distributions from observed measurements.
Experiments demonstrate that learned sampling distributions exhibit better performance than designed, minimum-degeneracy sampling distributions.
arXiv Detail & Related papers (2023-02-02T15:50:21Z)
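To make the entry above concrete, here is a minimal sketch of one particle-filter step with a pluggable sampling (proposal) distribution. The paper learns such distributions from observed measurements; this sketch simply proposes from the transition prior (a bootstrap filter), and all names are illustrative.

```python
# Minimal sketch of one predict-update-resample step of a particle filter
# with a pluggable proposal; here the proposal is the transition prior.
import numpy as np

def particle_filter_step(rng, particles, weights, y, transition, likelihood):
    """`transition(rng, x)` samples the proposal; `likelihood(y, x)` evaluates
    p(y | x) (an unnormalized value is fine, normalization cancels)."""
    particles = transition(rng, particles)          # propose new states
    weights = weights * likelihood(y, particles)    # reweight by observation
    weights /= weights.sum()
    # Multinomial resampling to fight weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy 1-D example: random-walk state, Gaussian observation noise.
rng = np.random.default_rng(0)
xs = rng.normal(size=500)
ws = np.full(500, 1.0 / 500)
xs, ws = particle_filter_step(
    rng, xs, ws, y=1.0,
    transition=lambda rng, x: x + 0.1 * rng.normal(size=x.shape),
    likelihood=lambda y, x: np.exp(-0.5 * (y - x) ** 2))
print("posterior mean estimate:", xs.mean())
```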
- Estimation and inference for the Wasserstein distance between mixing measures in topic models [18.66039789963639]
The Wasserstein distance between mixing measures has come to occupy a central place in the statistical analysis of mixture models.
This work proposes a new canonical interpretation of this distance and provides tools to perform inference on the Wasserstein distance in topic models.
arXiv Detail & Related papers (2022-06-26T02:33:40Z)
- A Robust and Flexible EM Algorithm for Mixtures of Elliptical Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm for mixtures of elliptical distributions is investigated, with the ability to handle potentially missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
arXiv Detail & Related papers (2022-01-28T10:01:37Z)
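As a rough skeleton for the entry above, the sketch below shows one EM iteration for the Gaussian special case of an elliptical mixture; the paper's robust elliptical updates and missing-data handling are omitted, so this is only the scaffolding the algorithm builds on.

```python
# Minimal sketch of one EM iteration for a Gaussian mixture (the Gaussian
# special case of the elliptical mixtures discussed above).
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, weights, means, covs):
    n, K = X.shape[0], len(weights)
    # E-step: posterior responsibilities r[i, k] = P(z_i = k | x_i).
    r = np.stack([weights[k] * multivariate_normal.pdf(X, means[k], covs[k])
                  for k in range(K)], axis=1)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, covariances.
    Nk = r.sum(axis=0)
    weights = Nk / n
    means = [(r[:, k] @ X) / Nk[k] for k in range(K)]
    covs = [((X - means[k]).T * r[:, k]) @ (X - means[k]) / Nk[k]
            for k in range(K)]
    return weights, means, covs

# Toy example: two well-separated Gaussian clusters in R^2.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
params = ([0.5, 0.5], [np.zeros(2), 3 * np.ones(2)], [np.eye(2), np.eye(2)])
for _ in range(10):
    params = em_step(X, *params)
print("estimated means:", params[1])
```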
- Schema matching using Gaussian mixture models with Wasserstein distance [0.2676349883103403]
In this paper we derive one possible approximation for the Wasserstein distance between Gaussian mixture models and reduce its computation to a linear problem.
arXiv Detail & Related papers (2021-11-28T21:44:58Z)
- Density Ratio Estimation via Infinitesimal Classification [85.08255198145304]
We propose DRE-infty, a divide-and-conquer approach that reduces density ratio estimation (DRE) to a series of easier subproblems.
Inspired by Monte Carlo methods, we smoothly interpolate between the two distributions via an infinite continuum of intermediate bridge distributions.
We show that our approach performs well on downstream tasks such as mutual information estimation and energy-based modeling on complex, high-dimensional datasets.
arXiv Detail & Related papers (2021-11-22T06:26:29Z)
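A minimal sketch of the telescoping idea behind the entry above: write the hard ratio p/q as a product of easier ratios between adjacent bridge distributions. Here the bridges are known Gaussians, so the sum collapses exactly; DRE-infty instead learns each infinitesimal ratio with a time-indexed classifier.

```python
# Minimal sketch of telescoping density-ratio estimation: chain log p/q
# through intermediate bridge distributions N(0, sigma_t^2).
import numpy as np
from scipy.stats import norm

def log_ratio_telescoped(x, sigmas):
    """Sum of log ratios between adjacent bridges, interpolating from
    q = N(0, sigmas[0]^2) to p = N(0, sigmas[-1]^2)."""
    total = np.zeros_like(x, dtype=float)
    for s0, s1 in zip(sigmas[:-1], sigmas[1:]):
        total += norm.logpdf(x, scale=s1) - norm.logpdf(x, scale=s0)
    return total

x = np.linspace(-3, 3, 5)
sigmas = np.linspace(1.0, 2.0, 11)  # 10 bridge steps from q to p
approx = log_ratio_telescoped(x, sigmas)
exact = norm.logpdf(x, scale=2.0) - norm.logpdf(x, scale=1.0)
print(np.allclose(approx, exact))  # the telescoping sum collapses exactly
```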
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient flow and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
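For the entry above, the core building block is the input convex neural network. A minimal sketch of the convex architecture only (not the paper's JKO training loop), with illustrative names: convexity in the input x follows from non-negative hidden-to-hidden weights and convex, non-decreasing activations.

```python
# Minimal sketch of an input convex neural network (ICNN) forward pass.
import numpy as np

def icnn_forward(x, Ws, Us, bs):
    """z_{k+1} = relu(Ws[k] @ z_k + Us[k] @ x + bs[k]), with Ws[k] >= 0,
    so the scalar output is convex in x."""
    z = np.zeros(Ws[0].shape[1])
    for W, U, b in zip(Ws, Us, bs):
        z = np.maximum(W @ z + U @ x + b, 0.0)  # convex, non-decreasing in z
    return z.sum()  # scalar convex potential

rng = np.random.default_rng(0)
d, h = 2, 8
Ws = [np.abs(rng.normal(size=(h, h))) for _ in range(3)]  # non-negative
Us = [rng.normal(size=(h, d)) for _ in range(3)]          # unconstrained
bs = [rng.normal(size=h) for _ in range(3)]
print(icnn_forward(np.ones(d), Ws, Us, bs))
```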
- Uniform Convergence Rates for Maximum Likelihood Estimation under Two-Component Gaussian Mixture Models [13.769786711365104]
We derive uniform convergence rates for the maximum likelihood estimator and minimax lower bounds for parameter estimation.
We assume the mixing proportions of the mixture are known and fixed, but make no separation assumption on the underlying mixture components.
arXiv Detail & Related papers (2020-06-01T04:13:48Z)
- Distributed, partially collapsed MCMC for Bayesian Nonparametrics [68.5279360794418]
Commonly used models such as the Dirichlet process and the beta-Bernoulli process can be expressed as completely random measures; we exploit the fact that such measures are decomposable into independent sub-measures.
We use this decomposition to partition the latent measure into a finite measure containing only instantiated components, and an infinite measure containing all other components.
The resulting hybrid algorithm enables scalable inference without sacrificing convergence guarantees.
arXiv Detail & Related papers (2020-01-15T23:10:13Z)
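To illustrate the instantiated/uninstantiated split exploited by the entry above, here is a minimal sketch using the Dirichlet process predictive distribution, which mixes a finite measure over already-instantiated components with a remainder that draws fresh components from the base measure. The base sampler and names are illustrative.

```python
# Minimal sketch of the finite/infinite split for a Dirichlet process:
# assign each item to an instantiated component or to a new one drawn
# from the base measure (Chinese restaurant process predictive rule).
import numpy as np

def crp_assign(rng, counts, alpha, base_sampler):
    """Existing cluster k w.p. counts[k]/(n+alpha); new w.p. alpha/(n+alpha)."""
    n = sum(counts.values())
    labels = list(counts)
    probs = np.array([counts[k] for k in labels] + [alpha]) / (n + alpha)
    choice = rng.choice(len(probs), p=probs)
    if choice < len(labels):            # instantiated (finite) part
        k = labels[choice]
    else:                               # remainder (infinite) part
        k = base_sampler(rng)
    counts[k] = counts.get(k, 0) + 1
    return k

rng = np.random.default_rng(0)
counts = {}
for _ in range(20):
    crp_assign(rng, counts, alpha=1.0,
               base_sampler=lambda r: round(r.normal(), 3))
print(counts)
```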