Deep Momentum Multi-Marginal Schrödinger Bridge
- URL: http://arxiv.org/abs/2303.01751v3
- Date: Thu, 5 Oct 2023 16:03:49 GMT
- Title: Deep Momentum Multi-Marginal Schrödinger Bridge
- Authors: Tianrong Chen, Guan-Horng Liu, Molei Tao, Evangelos A. Theodorou
- Abstract summary: We present a novel framework that learns the smooth measure-valued spline for stochastic systems that satisfy position marginal constraints across time.
Our algorithm outperforms baselines significantly, as evidenced by experiments on synthetic datasets and a real-world single-cell RNA sequence dataset.
- Score: 41.27274841596343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is a crucial challenge to reconstruct population dynamics using unlabeled
samples from distributions at coarse time intervals. Recent approaches such as
flow-based models or Schrödinger Bridge (SB) models have demonstrated
appealing performance, yet the inferred sample trajectories either fail to
account for the underlying stochasticity or are unrealistically
over-dampened. To address these issues, we propose Deep Momentum
Multi-Marginal Schrödinger Bridge (DMSB), a novel computational framework
that learns the smooth measure-valued spline for stochastic systems that
satisfy position marginal constraints across time. By tailoring the
celebrated Bregman Iteration and extending the Iterative Proportional
Fitting to phase space, we manage to
handle high-dimensional multi-marginal trajectory inference tasks efficiently.
Our algorithm outperforms baselines significantly, as evidenced by experiments
for synthetic datasets and a real-world single-cell RNA sequence dataset.
Additionally, the proposed approach can reasonably reconstruct the evolution of
velocity distribution, from position snapshots only, when there is a ground
truth velocity that is nevertheless inaccessible.
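For intuition, here is a minimal sketch of the classical Iterative Proportional Fitting (Sinkhorn) step that DMSB tailors and lifts to phase space: alternately rescale an entropic coupling until its marginals match two observed snapshots. The cost, marginals, and regularization strength are illustrative toy choices, not the paper's setup, which additionally tracks momentum and enforces many marginals at once.

```python
import numpy as np

def sinkhorn_ipf(a, b, C, eps=0.1, n_iter=200):
    """Classical IPF/Sinkhorn: alternately rescale the coupling so its
    row/column marginals match a and b (entropic OT between two snapshots)."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)                  # fit the first marginal
        v = b / (K.T @ u)                # fit the second marginal
    return u[:, None] * K * v[None, :]   # entropic coupling

# Toy example: two 1-D position snapshots at consecutive times.
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 50)            # positions at t=0
x1 = rng.normal(2.0, 0.5, 50)            # positions at t=1
C = (x0[:, None] - x1[None, :]) ** 2     # quadratic transport cost
a = np.full(50, 1 / 50)
b = np.full(50, 1 / 50)
P = sinkhorn_ipf(a, b, C)
print(P.sum(axis=1)[:3], P.sum(axis=0)[:3])  # both ≈ uniform marginals
```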
Related papers
- A Specialized Semismooth Newton Method for Kernel-Based Optimal
Transport [92.96250725599958]
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
We show that our SSN method achieves a global convergence rate of $O(1/\sqrt{k})$ and a local quadratic convergence rate under standard regularity conditions.
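As a hedged illustration of Newton-type OT solvers, the sketch below minimizes the smooth semi-dual of entropy-regularized OT with SciPy's Newton-CG (Hessian-vector products approximated from the gradient). The paper's specialized semismooth Newton method targets the kernel-based, nonsmooth formulation, which is not reproduced here; all problem data are toy assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Smooth entropic analogue: minimize the negative semi-dual of
# entropy-regularized OT in the potential v (smooth and convex as written)
# with a Newton-type method.
rng = np.random.default_rng(1)
n, m, eps = 40, 40, 0.05
x, y = rng.normal(size=n), rng.normal(1.0, 1.0, size=m)
C = (x[:, None] - y[None, :]) ** 2
a, b = np.full(n, 1 / n), np.full(m, 1 / m)

def neg_semidual(v):
    # u(v) solves the first-marginal condition in closed form.
    u = -eps * logsumexp((v[None, :] - C) / eps, axis=1) + eps * np.log(a)
    return -(u @ a + v @ b)

def neg_grad(v):
    logP = (v[None, :] - C) / eps
    logP -= logsumexp(logP, axis=1, keepdims=True)   # row-normalize
    P = a[:, None] * np.exp(logP)                    # current coupling
    return -(b - P.sum(axis=0))                      # marginal residual

res = minimize(neg_semidual, np.zeros(m), jac=neg_grad, method="Newton-CG")
print("column-marginal error:", np.abs(neg_grad(res.x)).max())
```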
arXiv Detail & Related papers (2023-10-21T18:48:45Z) - Generative modeling for time series via Schrödinger bridge [0.0]
We propose a novel generative model for time series based on the Schrödinger bridge (SB) approach.
This consists in the entropic interpolation via optimal transport between a reference probability measure on path space and a target measure consistent with the joint data distribution of the time series.
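For intuition about the reference measure on path space, here is a minimal Brownian-bridge sampler: the conditional of the reference process given its endpoints, which a Schrödinger bridge reweights toward the data distribution. This is an illustrative sketch, not the paper's generative sampler.

```python
import numpy as np

def brownian_bridge(x0, x1, n_steps, sigma=1.0, seed=0):
    """Sample a Brownian bridge from x0 at t=0 to x1 at t=1: the
    reference-process conditional that an SB reweights toward the data."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = k * dt
        # Exact bridge step: drift pins the endpoint, noise shrinks near t=1.
        mean = x[k] + (x1 - x[k]) * dt / (1.0 - t)
        var = sigma ** 2 * dt * (1.0 - t - dt) / (1.0 - t)
        x[k + 1] = mean + np.sqrt(var) * rng.normal()
    return x

path = brownian_bridge(0.0, 1.5, n_steps=100)
print(path[0], path[-1])   # endpoints: 0.0 and exactly 1.5
```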
arXiv Detail & Related papers (2023-04-11T09:45:06Z) - Bayesian Pseudo-Coresets via Contrastive Divergence [5.479797073162603]
We introduce a novel approach for constructing pseudo-coresets by utilizing contrastive divergence.
It eliminates the need for approximations in the pseudo-coreset construction process.
We conduct extensive experiments on multiple datasets, demonstrating its superiority over existing BPC techniques.
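The sketch below illustrates the contrastive-divergence principle the construction builds on, using CD-1 on a toy one-parameter Gaussian energy model with a single Langevin step for the negative phase. It is not the paper's pseudo-coreset algorithm; the model, step sizes, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.0, size=500)   # unknown mean we want to recover
theta, lr, step = 0.0, 0.5, 0.1

for it in range(200):
    # CD-1 negative phase: one Langevin step initialized at the data
    # approximates samples from the model p_theta(x) ∝ exp(-(x-theta)^2/2).
    xk = data - (data - theta) * step + np.sqrt(2 * step) * rng.normal(size=data.size)
    # CD gradient: data-term minus model-term of grad_theta log p.
    theta += lr * (data.mean() - xk.mean())

print(theta)  # ≈ 3.0, the data mean
```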
arXiv Detail & Related papers (2023-03-20T17:13:50Z) - Jump-Diffusion Langevin Dynamics for Multimodal Posterior Sampling [3.4483987421251516]
We investigate the performance of a hybrid Metropolis and Langevin sampling method akin to Jump Diffusion on a range of synthetic and real data.
We find that carefully calibrated mixing of jump proposals with gradient-based chains significantly outperforms both pure gradient-based and pure sampling-based schemes.
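A minimal sketch of such a hybrid chain on a bimodal one-dimensional target: unadjusted Langevin steps handle local exploration, while occasional Metropolis-corrected random-walk jumps hop between modes. Step size, jump rate, and jump scale are illustrative, and the Langevin move is unadjusted, so the chain is only approximately exact.

```python
import numpy as np

def log_p(x):
    # Equal-weight mixture of unit Gaussians at -4 and +4 (up to a constant).
    return np.logaddexp(-0.5 * (x - 4) ** 2, -0.5 * (x + 4) ** 2)

def grad_log_p(x):
    a, b = -0.5 * (x - 4) ** 2, -0.5 * (x + 4) ** 2
    w = 1.0 / (1.0 + np.exp(b - a))          # responsibility of the +4 mode
    return w * (4 - x) + (1 - w) * (-(x + 4))

rng = np.random.default_rng(3)
x, step, jump_prob, jump_scale = -4.0, 0.05, 0.1, 8.0
samples = []
for _ in range(20000):
    if rng.random() < jump_prob:
        # Jump move: symmetric random-walk proposal with Metropolis accept.
        prop = x + jump_scale * rng.normal()
        if np.log(rng.random()) < log_p(prop) - log_p(x):
            x = prop
    else:
        # Diffusion move: one unadjusted Langevin step (local exploration).
        x = x + step * grad_log_p(x) + np.sqrt(2 * step) * rng.normal()
    samples.append(x)
print(np.mean(np.array(samples) > 0))        # ≈ 0.5 once both modes mix
```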
arXiv Detail & Related papers (2022-11-02T17:35:04Z) - Settling the Sample Complexity of Model-Based Offline Reinforcement
Learning [50.5790774201146]
Offline reinforcement learning (RL) learns using pre-collected data without further exploration.
Prior algorithms or analyses either suffer from suboptimal sample complexities or incur high burn-in cost to reach sample optimality.
We demonstrate that the model-based (or "plug-in") approach achieves minimax-optimal sample complexity without burn-in cost.
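A minimal tabular sketch of the plug-in idea: build the maximum-likelihood transition model from logged transitions, then run value iteration in the estimated MDP. Sizes, the behavior policy, and the known reward are toy assumptions; the paper's analysis (e.g., burn-in-free minimax rates) is not reflected here.

```python
import numpy as np

# "Plug-in" model-based offline RL in the tabular case: estimate the
# transition model from the logged dataset, then plan in the estimated MDP.
S, A, gamma = 5, 2, 0.9
rng = np.random.default_rng(4)
P_true = rng.dirichlet(np.ones(S), size=(S, A))     # unknown dynamics
R = rng.random((S, A))                              # reward, known for brevity

# Offline dataset of (s, a, s') tuples from a uniform behavior policy.
counts = np.zeros((S, A, S))
for _ in range(20000):
    s, a = rng.integers(S), rng.integers(A)
    s2 = rng.choice(S, p=P_true[s, a])
    counts[s, a, s2] += 1

P_hat = (counts + 1e-9) / (counts.sum(axis=2, keepdims=True) + 1e-9 * S)

# Value iteration in the estimated MDP.
Q = np.zeros((S, A))
for _ in range(500):
    Q = R + gamma * P_hat @ Q.max(axis=1)
print(Q.argmax(axis=1))  # greedy policy of the plug-in MDP
```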
arXiv Detail & Related papers (2022-04-11T17:26:19Z) - A fast asynchronous MCMC sampler for sparse Bayesian inference [10.535140830570256]
We propose a very fast approximate Markov Chain Monte Carlo (MCMC) sampling framework that is applicable to a large class of sparse Bayesian inference problems.
We show that in high-dimensional linear regression problems, the Markov chain generated by the proposed algorithm admits an invariant distribution that correctly recovers the main signal.
arXiv Detail & Related papers (2021-08-14T02:20:49Z) - Sample-Efficient Reinforcement Learning Is Feasible for Linearly
Realizable MDPs with Limited Revisiting [60.98700344526674]
Low-complexity models such as linear function representation play a pivotal role in enabling sample-efficient reinforcement learning.
In this paper, we investigate a new sampling protocol, which draws samples in an online/exploratory fashion but allows one to backtrack and revisit previous states in a controlled and infrequent manner.
We develop an algorithm tailored to this setting, achieving a sample complexity that scales polynomially with the feature dimension, the horizon, and the inverse sub-optimality gap, but not the size of the state/action space.
arXiv Detail & Related papers (2021-05-17T17:22:07Z) - High-Dimensional Sparse Linear Bandits [67.9378546011416]
We derive a novel $\Omega(n^{2/3})$ dimension-free minimax regret lower bound for sparse linear bandits in the data-poor regime.
We also prove a dimension-free $O(\sqrt{n})$ regret upper bound under an additional assumption on the magnitude of the signal for relevant features.
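To make the sparse, data-poor regime concrete, here is a hedged explore-then-commit sketch: a short random exploration phase, a Lasso fit via ISTA (proximal gradient), then commitment to the greedy action. This is illustrative only; the paper establishes regret bounds rather than prescribing this algorithm, and all sizes and constants are assumptions.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Lasso via ISTA (proximal gradient): soft-threshold after each
    gradient step on the least-squares loss."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of grad
    theta = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - y) / n
        z = theta - grad / L
        theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return theta

# Explore-then-commit sketch for a sparse linear bandit: play random
# actions, fit a Lasso estimate, then commit to its greedy action.
rng = np.random.default_rng(5)
d, s, T_explore = 50, 3, 300
theta_star = np.zeros(d); theta_star[:s] = 1.0   # sparse unknown parameter
X = rng.normal(size=(T_explore, d))              # exploratory actions
y = X @ theta_star + 0.1 * rng.normal(size=T_explore)
theta_hat = ista_lasso(X, y, lam=0.05)

actions = rng.normal(size=(100, d))              # candidate action set
best = actions[np.argmax(actions @ theta_hat)]   # commit to greedy action
print(np.count_nonzero(np.abs(theta_hat) > 1e-3))  # ≈ s relevant features
```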
arXiv Detail & Related papers (2020-11-08T16:48:11Z) - Nearly Dimension-Independent Sparse Linear Bandit over Small Action
Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs and estimating the parameter using the best subset selection method.
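A minimal sketch of the best-subset-selection estimator by exhaustive search, which is feasible over small supports; the doubly growing epochs and the bandit interaction loop are omitted, and all problem sizes are illustrative.

```python
import numpy as np
from itertools import combinations

def best_subset_ols(X, y, s):
    """Exhaustive best subset selection: fit OLS on every support of
    size s and keep the one with the smallest residual sum of squares."""
    n, d = X.shape
    best_rss, best_theta = np.inf, np.zeros(d)
    for support in combinations(range(d), s):
        Xs = X[:, support]
        coef, rss, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = rss[0] if rss.size else np.sum((y - Xs @ coef) ** 2)
        if rss < best_rss:
            best_rss = rss
            best_theta = np.zeros(d)
            best_theta[list(support)] = coef
    return best_theta

rng = np.random.default_rng(7)
n, d, s = 60, 10, 2
theta_star = np.zeros(d); theta_star[[1, 4]] = [1.0, -2.0]
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)
print(np.nonzero(best_subset_ols(X, y, s))[0])  # recovers support [1 4]
```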
arXiv Detail & Related papers (2020-09-04T04:10:39Z) - Gravitational-wave parameter estimation with autoregressive neural
network flows [0.0]
We introduce the use of autoregressive normalizing flows for rapid likelihood-free inference of binary black hole system parameters from gravitational-wave data with deep neural networks.
A normalizing flow is an invertible mapping on a sample space that can be used to induce a transformation from a simple probability distribution to a more complex one.
We build a more powerful latent variable model by incorporating autoregressive flows within the variational autoencoder framework.
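Below is a minimal affine autoregressive bijection in the spirit of masked autoregressive flows: sampling generates dimensions sequentially, while density evaluation inverts the map in one pass and accumulates the log-determinant. The conditioner is a fixed toy function standing in for a learned masked network.

```python
import numpy as np

def scale_shift(x_prev):
    """Toy autoregressive conditioner: (log_scale, shift) from x_{<i}."""
    s = 0.5 * np.tanh(x_prev.sum())      # bounded log-scale for stability
    m = 0.1 * x_prev.sum()
    return s, m

def forward(z):
    """Sampling direction: z -> x, sequential in the dimension index."""
    x = np.zeros_like(z)
    for i in range(len(z)):
        s, m = scale_shift(x[:i])
        x[i] = z[i] * np.exp(s) + m
    return x

def inverse(x):
    """Density direction: x -> z plus log|det dz/dx|, in one pass."""
    z = np.zeros_like(x)
    log_det = 0.0
    for i in range(len(x)):
        s, m = scale_shift(x[:i])
        z[i] = (x[i] - m) * np.exp(-s)
        log_det -= s
    return z, log_det

z = np.random.default_rng(6).normal(size=4)
x = forward(z)
z_back, _ = inverse(x)
print(np.allclose(z, z_back))  # True: the map is invertible
```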
arXiv Detail & Related papers (2020-02-18T15:44:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.