Through the Looking Glass: Mirror Schrödinger Bridges
- URL: http://arxiv.org/abs/2410.07003v1
- Date: Wed, 9 Oct 2024 15:48:56 GMT
- Title: Through the Looking Glass: Mirror Schrödinger Bridges
- Authors: Leticia Mattos Da Silva, Silvia Sellán, Justin Solomon
- Abstract summary: Resampling from a target measure whose density is unknown is a fundamental problem in mathematical statistics and machine learning.
We propose a new model for conditional resampling called mirror Schrödinger bridges.
- Score: 16.05211717546091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Resampling from a target measure whose density is unknown is a fundamental problem in mathematical statistics and machine learning. A setting that dominates the machine learning literature consists of learning a map from an easy-to-sample prior, such as the Gaussian distribution, to a target measure. Under this model, samples from the prior are pushed forward to generate a new sample from the target measure, which is often difficult to sample from directly. In this paper, we propose a new model for conditional resampling called mirror Schrödinger bridges. Our key observation is that solving the Schrödinger bridge problem between a distribution and itself provides a natural way to produce new samples from conditional distributions, giving in-distribution variations of an input data point. We show how to efficiently solve this largely overlooked version of the Schrödinger bridge problem. We prove that our proposed method leads to significant algorithmic simplifications over existing alternatives, in addition to providing control over in-distribution variation. Empirically, we demonstrate how these benefits can be leveraged to produce proximal samples in a number of application domains.
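The abstract gives no implementation details, so the following is a minimal sketch, under assumed notation, of how simulating a bridge whose two endpoints share the same distribution could yield proximal samples. The drift function drift(x, t), the noise level sigma, and the horizon T are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): produce an in-distribution
# variation of an input point by simulating a learned bridge whose two
# endpoints share the same marginal distribution. `drift` is an assumed
# callable approximating the controlled drift of the bridge; `sigma` is the
# diffusion coefficient of the reference process.

def proximal_sample(x0, drift, sigma=1.0, T=1.0, n_steps=200, rng=None):
    """Euler-Maruyama simulation of dX_t = drift(X_t, t) dt + sigma dW_t on [0, T].

    Because both endpoints of the bridge have the same marginal, the output is
    an in-distribution variation of x0; shrinking T or sigma keeps the sample
    closer to the input.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    dt = T / n_steps
    for k in range(n_steps):
        t = k * dt
        x = x + drift(x, t) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x
```

In this reading, the bridge's time horizon and noise level are the knobs behind the "control over in-distribution variation" that the abstract claims.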
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Localized Schrödinger Bridge Sampler [0.276240219662896]
We consider the generative problem of sampling from an unknown distribution for which only a sufficiently large number of training samples are available.
A key bottleneck of such approaches is the exponential dependence of the required number of training samples on the dimension, $d$, of the ambient state space.
We propose a localization strategy which exploits conditional independence of conditional expectation values.
arXiv Detail & Related papers (2024-09-12T12:02:51Z) - An Improved Algorithm for Learning Drifting Discrete Distributions [2.2191203337341525]
We present a new adaptive algorithm for learning discrete distributions under distribution drift.
We observe a sequence of independent samples from a discrete distribution that is changing over time, and the goal is to estimate the current distribution.
To use more samples, we must resort to samples further in the past, and we incur a drift error due to the bias introduced by the change in distribution.
We present a novel adaptive algorithm that can solve this trade-off without any prior knowledge of the drift.
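The summary alludes to a trade-off between statistical error and drift error without stating it. As a hedged illustration (standard in this literature, not quoted from the paper), estimating a distribution over $k$ symbols from only the most recent $r$ samples roughly satisfies

$$\mathbb{E}\,\|\hat{p}_r - p_t\|_{\mathrm{TV}} \;\lesssim\; \sqrt{k/r} \;+\; \Delta_r,$$

where $\hat{p}_r$ is the empirical distribution of those $r$ samples and $\Delta_r$ is the total variation drift they accumulate; an adaptive algorithm chooses $r$ to balance the two terms without knowing $\Delta_r$ in advance.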
arXiv Detail & Related papers (2024-03-08T16:54:27Z) - Soft-constrained Schrödinger Bridge: a Stochastic Control Approach [4.922305511803267]
The Schrödinger bridge can be viewed as a continuous-time stochastic control problem where the goal is to find an optimally controlled diffusion process.
We propose to generalize this problem by allowing the terminal distribution to differ from the target but penalizing the Kullback-Leibler divergence between the two distributions.
One application is the development of robust generative diffusion models.
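Read literally, this replaces the hard terminal constraint of the standard Schrödinger bridge with a KL penalty; a plausible form of the objective (the weight $\beta$ and the direction of the penalty are assumptions for illustration, not notation from the paper) is

$$\min_{P:\,P_0=\mu}\ \mathrm{KL}(P\,\|\,Q) \;+\; \beta\,\mathrm{KL}(P_T\,\|\,\nu),$$

where $Q$ is the reference diffusion, $\mu$ the fixed initial distribution, $\nu$ the target, and $P_T$ the terminal marginal of the controlled process; taking $\beta \to \infty$ recovers the standard bridge.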
arXiv Detail & Related papers (2024-03-04T04:10:24Z) - Generalized Schrödinger Bridge Matching [54.171931505066]
Generalized Schr"odinger Bridge (GSB) problem setup is prevalent in many scientific areas both within and without machine learning.
We propose Generalized Schr"odinger Bridge Matching (GSBM), a new matching algorithm inspired by recent advances.
We show that such a generalization can be cast as solving conditional optimal control, for which variational approximations can be used.
arXiv Detail & Related papers (2023-10-03T17:42:11Z) - Efficient Multimodal Sampling via Tempered Distribution Flow [11.36635610546803]
We develop a new type of transport-based sampling method called TemperFlow.
Various experiments demonstrate the superior performance of this novel sampler compared to traditional methods.
We show its applications in modern deep learning tasks such as image generation.
arXiv Detail & Related papers (2023-04-08T06:40:06Z) - Bias Mimicking: A Simple Sampling Approach for Bias Mitigation [57.17709477668213]
We introduce a new class-conditioned sampling method: Bias Mimicking.
Bias Mimicking improves underrepresented groups' accuracy by 3% over existing sampling methods across four benchmarks.
arXiv Detail & Related papers (2022-09-30T17:33:00Z) - Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing [104.630875328668]
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample.
We present a novel, yet simple Mixup-variant that captures the best of both worlds.
arXiv Detail & Related papers (2021-12-16T11:27:48Z) - Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z) - Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z) - Deep Generative Learning via Schrödinger Bridge [14.138796631423954]
We learn a generative model via entropy interpolation with a Schrödinger Bridge.
We show that the generative model via Schrödinger Bridge is comparable with state-of-the-art GANs.
arXiv Detail & Related papers (2021-06-19T03:35:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.