Trajectory Inference via Mean-field Langevin in Path Space
- URL: http://arxiv.org/abs/2205.07146v1
- Date: Sat, 14 May 2022 23:13:00 GMT
- Title: Trajectory Inference via Mean-field Langevin in Path Space
- Authors: Stephen Zhang, Lénaïc Chizat, Matthieu Heitz, Geoffrey Schiebinger
- Abstract summary: Trajectory inference aims at recovering the dynamics of a population from snapshots of its temporal marginals.
A min-entropy estimator relative to the Wiener measure in path space was introduced by Lavenant et al.
- Score: 0.17205106391379024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trajectory inference aims at recovering the dynamics of a population from
snapshots of its temporal marginals. To solve this task, a min-entropy
estimator relative to the Wiener measure in path space was introduced by
Lavenant et al. (arXiv:2102.09204), and shown to consistently recover the
dynamics of a large class of drift-diffusion processes from the solution of an
infinite-dimensional convex optimization problem. In this paper, we introduce a
grid-free algorithm to compute this estimator. Our method consists of a family
of point clouds (one per snapshot) coupled via Schrödinger bridges which
evolve with noisy gradient descent. We study the mean-field limit of the
dynamics and prove its global convergence at an exponential rate to the desired
estimator. Overall, this leads to an inference method with end-to-end
theoretical guarantees that solves an interpretable model for trajectory
inference. We also show how to adapt the method to deal with mass
variations, a useful extension when dealing with single-cell RNA-sequencing
data, where cells can branch and die.
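The algorithm admits a compact sketch. Below is a heavily simplified, hypothetical rendering of the idea in NumPy: one particle cloud per snapshot, coupled to its neighbours by entropic OT (Schrödinger-bridge-like) plans and evolved by noisy gradient steps. The functions `sinkhorn_plan` and `mfl_step`, the data-fitting term, and all step sizes are illustrative choices, not the exact functional analysed in the paper.

```python
# Hypothetical sketch of mean-field Langevin trajectory inference:
# one point cloud per snapshot, neighbours coupled by entropic OT plans,
# updated by noisy (Langevin) gradient steps. Illustrative only.
import numpy as np

def sinkhorn_plan(x, y, eps=0.5, n_iter=200):
    """Entropic OT plan between two uniform point clouds (a crude
    stand-in for the Schrodinger bridge between adjacent snapshots)."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.exp(-C / eps)
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    v = np.ones(len(y))
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def mfl_step(clouds, data, rng, step=0.05, noise=0.05):
    """One noisy-gradient update of every cloud (one cloud per snapshot)."""
    new = []
    for k, x in enumerate(clouds):
        drift = np.zeros_like(x)
        neighbours = ([k - 1] if k > 0 else []) + ([k + 1] if k + 1 < len(clouds) else [])
        for j in neighbours:
            # pull towards the barycentric projection under the entropic plan
            P = sinkhorn_plan(x, clouds[j])
            drift += P @ clouds[j] / P.sum(1, keepdims=True) - x
        # illustrative data-fitting term: pull towards the observed snapshot
        P = sinkhorn_plan(x, data[k])
        drift += P @ data[k] / P.sum(1, keepdims=True) - x
        # Gaussian noise plays the role of the entropic regularisation
        new.append(x + step * drift + noise * np.sqrt(2 * step) * rng.standard_normal(x.shape))
    return new

# toy usage: four snapshots of a cloud drifting to the right
rng = np.random.default_rng(0)
data = [rng.standard_normal((50, 2)) + [t, 0.0] for t in range(4)]
clouds = [d.copy() for d in data]
for _ in range(100):
    clouds = mfl_step(clouds, data, rng)
```

The Gaussian noise is what makes this a mean-field Langevin scheme: in the many-particle limit it contributes the entropy term of the objective, which is the regime the paper's exponential convergence guarantee addresses.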
Related papers
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
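As a minimal illustration of the von Mises building block only (not the paper's quasi-process or its Stratonovich-like augmentation), here is a brute-force grid posterior over the mean direction of circular data, with the concentration assumed known:

```python
# Minimal von Mises example, not the paper's method: grid posterior over
# the mean direction, concentration kappa known, flat prior on the circle.
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind

def vonmises_logpdf(theta, mu, kappa):
    return kappa * np.cos(theta - mu) - np.log(2 * np.pi * i0(kappa))

rng = np.random.default_rng(0)
obs = rng.vonmises(1.0, 4.0, size=30)              # synthetic angles
grid = np.linspace(-np.pi, np.pi, 1000)            # candidate mean directions
log_post = np.array([vonmises_logpdf(obs, m, 4.0).sum() for m in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum() * (grid[1] - grid[0])           # normalise on the grid
print("posterior mode of the mean direction:", grid[np.argmax(post)])
```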
arXiv Detail & Related papers (2024-06-19T01:57:21Z) - Partially Observed Trajectory Inference using Optimal Transport and a Dynamics Prior [2.7255073299359154]
Trajectory inference seeks to recover the temporal dynamics of a population from snapshots of its temporal marginals.
We introduce PO-MFL to solve this latent trajectory inference problem.
We leverage the MFL framework of arXiv:2205.07146, yielding an algorithm based on entropic OT between dynamics-adjusted adjacent time marginals.
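For reference, the entropic OT problem underlying such couplings, in its standard form (PO-MFL applies it to dynamics-adjusted marginals, per the summary above), is

$$\mathrm{OT}_\varepsilon(\mu,\nu) \;=\; \min_{\pi \in \Pi(\mu,\nu)} \int \|x-y\|^2 \,\mathrm{d}\pi(x,y) \;+\; \varepsilon\, \mathrm{KL}\left(\pi \,\middle\|\, \mu \otimes \nu\right),$$

where $\Pi(\mu,\nu)$ denotes the set of couplings with marginals $\mu$ and $\nu$.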
arXiv Detail & Related papers (2024-06-11T17:21:15Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}\big(\ln(T) / T^{1 - \frac{1}{\alpha}}\big)$.
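A toy, hypothetical rendering of the scheme: clients' gradients are superposed "over the air" (summed with additive channel noise) and the server applies an AdaGrad step. The least-squares problem, noise model, and constants are illustrative, not the paper's setting.

```python
# Toy over-the-air federated AdaGrad: analog superposition + channel noise,
# then a server-side AdaGrad update. Illustrative, not the paper's setting.
import numpy as np

rng = np.random.default_rng(0)
d, K = 5, 10
w_true = rng.standard_normal(d)
A = [rng.standard_normal((20, d)) for _ in range(K)]       # local problems
b = [Ak @ w_true + 0.1 * rng.standard_normal(20) for Ak in A]

w, G, eta, sigma = np.zeros(d), np.zeros(d), 0.5, 0.05
for t in range(200):
    grads = [Ak.T @ (Ak @ w - bk) / len(bk) for Ak, bk in zip(A, b)]
    g = sum(grads) / K + sigma * rng.standard_normal(d)    # noisy aggregation
    G += g ** 2                                            # AdaGrad accumulator
    w -= eta * g / (np.sqrt(G) + 1e-8)
print("parameter error:", np.linalg.norm(w - w_true))
```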
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems [78.96969465641024]
We extend mean-field Langevin dynamics to minimax optimization over probability distributions for the first time with symmetric and provably convergent updates.
We also study time and particle discretization regimes and prove a new uniform-in-time propagation of chaos result.
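A minimal sketch of the update pattern, with a toy strongly-convex-concave objective of my own choosing rather than the paper's setting: both particle populations take simultaneous (symmetric) noisy gradient steps, descent for the min player and ascent for the max player.

```python
# Toy symmetric mean-field Langevin for min_mu max_nu E f(x, y),
# with f(x, y) = <x, y> + |x|^2/2 - |y|^2/2 (equilibrium at the origin).
import numpy as np

def grad_x(X, Y):          # gradient of E_{y~nu} f(x, y) in x
    return Y.mean(0) + X

def grad_y(Y, X):          # gradient of E_{x~mu} f(x, y) in y
    return X.mean(0) - Y

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 2)) + 2.0     # min player's particles
Y = rng.standard_normal((256, 2)) - 2.0     # max player's particles
step, lam = 0.05, 0.1                       # lam: entropic regularisation
for _ in range(500):
    nx = rng.standard_normal(X.shape)
    ny = rng.standard_normal(Y.shape)
    X_new = X - step * grad_x(X, Y) + np.sqrt(2 * step * lam) * nx  # descent
    Y = Y + step * grad_y(Y, X) + np.sqrt(2 * step * lam) * ny      # ascent
    X = X_new
print("population means (both ~ 0):", X.mean(0), Y.mean(0))
```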
arXiv Detail & Related papers (2023-12-02T13:01:29Z) - Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
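The game setting is hard to compress, but the Riemannian primitive such methods build on (project the Euclidean gradient onto the tangent space, step with a fixed curvature-agnostic step size, retract) fits in a few lines. A toy single-player sketch on the unit sphere; the paper's two-player dynamics apply the same primitive per player.

```python
# Riemannian gradient descent on the unit sphere with a fixed step size,
# minimising f(x) = -<v, x>. Toy illustration only, not the paper's game.
import numpy as np

def rgrad(x, egrad):
    return egrad - (x @ egrad) * x          # project onto tangent space at x

def retract(x):
    return x / np.linalg.norm(x)            # metric projection onto the sphere

v = np.array([1.0, 2.0, 2.0]) / 3.0         # target direction, |v| = 1
x = retract(np.array([1.0, 0.0, 0.0]))
for _ in range(100):
    x = retract(x - 0.2 * rgrad(x, -v))     # Euclidean gradient of f is -v
print(np.allclose(x, v, atol=1e-6))         # converges to the minimiser v
```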
arXiv Detail & Related papers (2023-06-29T01:20:44Z) - Kernelized Diffusion maps [2.817412580574242]
In this article, we build a different estimator of the Laplacian, via a reproducing kernel Hilbert space method.
We provide non-asymptotic statistical rates proving that the kernel estimator we build can circumvent the curse of dimensionality.
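For contrast with the paper's RKHS construction (which is different), the classical graph-based diffusion-maps estimator of the Laplacian can be sketched directly; the alpha = 1 normalisation removes the sampling-density bias, and (I - P)/eps converges to (minus) the Laplace-Beltrami operator.

```python
# Classical diffusion-maps Laplacian estimator (standard construction,
# not the paper's kernel estimator), checked on the unit circle.
import numpy as np

def diffusion_laplacian(X, eps):
    W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / eps)
    q = W.sum(1)
    W = W / np.outer(q, q)              # alpha = 1 density normalisation
    P = W / W.sum(1, keepdims=True)     # Markov transition matrix
    return (np.eye(len(X)) - P) / eps

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 400)  # samples on the unit circle
X = np.c_[np.cos(theta), np.sin(theta)]
L = diffusion_laplacian(X, eps=0.05)
# -d^2/dtheta^2 cos(theta) = cos(theta), so L @ cos should be ~ cos
print("mean error:", np.abs(L @ np.cos(theta) - np.cos(theta)).mean())
```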
arXiv Detail & Related papers (2023-02-13T23:54:36Z) - Manifold Interpolating Optimal-Transport Flows for Trajectory Inference [64.94020639760026]
We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow).
MIOFlow learns continuous population dynamics from static snapshot samples taken at sporadic timepoints.
We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment.
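MIOFlow itself trains a neural ODE on a learned manifold; as a much simpler baseline illustrating the optimal-transport interpolation idea it builds on, one can match two snapshots by exact OT and linearly interpolate the matched pairs (McCann's displacement interpolation).

```python
# Toy OT displacement interpolation between two snapshots; a baseline in
# the spirit of MIOFlow's OT component, not the method itself.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
x0 = rng.standard_normal((100, 2))                          # snapshot at t = 0
x1 = rng.standard_normal((100, 2)) + np.array([3.0, 0.0])   # snapshot at t = 1

row, col = linear_sum_assignment(cdist(x0, x1) ** 2)        # exact OT matching

def interpolate(t):
    return (1 - t) * x0[row] + t * x1[col]                  # positions at time t

mid = interpolate(0.5)  # inferred population halfway between the snapshots
```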
arXiv Detail & Related papers (2022-06-29T22:19:03Z) - On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefits of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
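This claim can be checked concretely on a finite-dimensional quadratic: gradient descent started from zero equals a spectral filter $(1 - (1 - \eta\lambda_i)^t)/\lambda_i$ on each eigendirection of the Hessian, so the learning rate $\eta$ and the stopping time $t$ jointly decide which eigendirections are fit. A small numerical check on a toy instance of my own (not from the paper):

```python
# Gradient descent on 0.5 w'Hw - b'w from w = 0 equals spectral filtering:
# each eigendirection of H is scaled by (1 - (1 - eta*lam)^t) / lam.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
lam = np.array([1.0, 0.5, 0.1, 0.05, 0.01])
H = Q @ np.diag(lam) @ Q.T                 # SPD Hessian
b = rng.standard_normal(5)

eta, t = 0.9, 20                           # large step, early stopping
w = np.zeros(5)
for _ in range(t):
    w -= eta * (H @ w - b)                 # plain gradient descent

filt = (1 - (1 - eta * lam) ** t) / lam    # closed-form spectral filter
w_closed = Q @ (filt * (Q.T @ b))
print(np.allclose(w, w_closed))            # True: GD == spectral filtering
```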
arXiv Detail & Related papers (2022-02-28T13:01:04Z) - A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling [0.6562256987706128]
We develop a deterministic particle method for the weighted porous medium equation (WPME) and prove its convergence on bounded time intervals.
Our method has natural applications to multi-agent coverage algorithms and sampling probability measures.
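A much-simplified 1D sketch in the spirit of blob methods, for the plain porous medium equation $\rho_t = (\rho^2)_{xx}$ rather than the weighted equation the paper treats: mollifying the driving energy $\int \rho^2\,dx$ with a Gaussian blob $\varphi_\varepsilon$ yields a deterministic particle velocity.

```python
# Simplified 1D blob-method sketch for the (unweighted) porous medium
# equation, m = 2; the paper's weighted setting is more involved.
import numpy as np

def blob_step(x, eps=0.1, dt=0.01):
    d = x[:, None] - x[None, :]
    phi = np.exp(-d ** 2 / (2 * eps ** 2)) / np.sqrt(2 * np.pi * eps ** 2)
    dphi = -d / eps ** 2 * phi               # derivative of the mollifier
    v = -(2 / len(x)) * dphi.sum(1)          # velocity from mollified energy
    return x + dt * v

x = np.linspace(-0.5, 0.5, 200)              # initial particle positions
for _ in range(100):
    x = blob_step(x)                         # particles spread diffusively
```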
arXiv Detail & Related papers (2022-02-25T19:49:05Z) - Spatio-Temporal Variational Gaussian Processes [26.60276485130467]
We introduce a scalable approach to Gaussian process inference that combines spatio-temporal filtering with natural gradient variational inference.
We derive a sparse approximation that constructs a state-space model over a reduced set of inducing points.
We show that for separable Markov kernels the full and sparse cases recover exactly the standard variational GP.
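The state-space idea such methods exploit can be shown in miniature: a GP with the Matérn-1/2 (exponential) kernel is an Ornstein-Uhlenbeck process, so its filtering mean is computable in linear time by a Kalman filter instead of an $O(n^3)$ solve. A minimal filter, with no smoother and no inducing points (both of which the paper adds):

```python
# Kalman filtering view of a GP with exponential (Matern-1/2) kernel:
# the OU transition gives a linear-time filtering mean. Minimal sketch.
import numpy as np

def kalman_filter_ou(t, y, ell=1.0, sig2=1.0, noise=0.1):
    m, P = 0.0, sig2                     # stationary prior at the first point
    means = []
    for k in range(len(t)):
        if k > 0:                        # predict through the OU transition
            a = np.exp(-(t[k] - t[k - 1]) / ell)
            m, P = a * m, a * a * P + sig2 * (1 - a * a)
        K = P / (P + noise)              # update with observation y[k]
        m, P = m + K * (y[k] - m), (1 - K) * P
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 200))
y = np.sin(t) + 0.3 * rng.standard_normal(200)
post_mean = kalman_filter_ou(t, y)       # filtering mean at each time point
```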
arXiv Detail & Related papers (2021-11-02T16:53:31Z) - Consistent Online Gaussian Process Regression Without the Sample Complexity Bottleneck [14.309243378538012]
We propose an online compression scheme that fixes an error neighborhood with respect to the Hellinger metric centered at the current posterior.
For a constant error radius, POG converges to a neighborhood of the population posterior (Theorem 1(ii)), but with finite memory, at worst determined by the metric entropy of the feature space.
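A crude proxy for this idea: the paper's criterion is a Hellinger ball around the current posterior, whereas the sketch below substitutes the cheaper predictive-variance test common in online GP compression, so it is an assumption-laden stand-in. A streaming point is retained only if the current model is still uncertain there.

```python
# Online GP compression with a predictive-variance admission test
# (a stand-in for the paper's Hellinger-metric criterion).
import numpy as np

def rbf(a, b, ell=0.5):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell ** 2))

rng = np.random.default_rng(0)
noise, thresh = 0.05, 0.1
X, y = np.empty(0), np.empty(0)                    # retained (compressed) set
for _ in range(500):
    x_new = rng.uniform(0, 10, 1)
    y_new = np.sin(x_new) + 0.1 * rng.standard_normal(1)
    if len(X) == 0:
        var = 1.0
    else:
        Kinv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
        kx = rbf(X, x_new)
        var = 1.0 - (kx.T @ Kinv @ kx).item()      # posterior variance at x_new
    if var > thresh:                               # keep only informative points
        X, y = np.append(X, x_new), np.append(y, y_new)
print("retained", len(X), "of 500 points")
```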
arXiv Detail & Related papers (2020-04-23T11:52:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.