Learning to Approximate Particle Smoothing Trajectories via Diffusion Generative Models
- URL: http://arxiv.org/abs/2406.00561v1
- Date: Sat, 1 Jun 2024 21:54:01 GMT
- Title: Learning to Approximate Particle Smoothing Trajectories via Diffusion Generative Models
- Authors: Ella Tamir, Arno Solin
- Abstract summary: Learning dynamical systems from sparse observations is critical in numerous fields, including biology, finance, and physics.
We introduce a method that integrates conditional particle filtering with ancestral sampling and diffusion models.
We demonstrate the approach in time-series generation and interpolation tasks, including vehicle tracking and single-cell RNA sequencing data.
- Score: 16.196738720721417
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning dynamical systems from sparse observations is critical in numerous fields, including biology, finance, and physics. Even if tackling such problems is standard in general information fusion, it remains challenging for contemporary machine learning models, such as diffusion models. We introduce a method that integrates conditional particle filtering with ancestral sampling and diffusion models, enabling the generation of realistic trajectories that align with observed data. Our approach uses a smoother based on iterating a conditional particle filter with ancestral sampling to first generate plausible trajectories matching observed marginals, and learns the corresponding diffusion model. This approach provides both a generative method for high-quality, smoothed trajectories under complex constraints, and an efficient approximation of the particle smoothing distribution for classical tracking problems. We demonstrate the approach in time-series generation and interpolation tasks, including vehicle tracking and single-cell RNA sequencing data.
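The pipeline described in the abstract has two stages: iterate a conditional particle filter with ancestor sampling (CPF-AS) to draw smoothed trajectories that match the observed data, then train a diffusion model on those trajectories. The snippet below is a minimal sketch of the first stage only, assuming a 1-D linear-Gaussian state-space model with illustrative noise levels and iteration counts; it is not the authors' implementation, and the diffusion-model training stage is omitted.

```python
# Minimal sketch of conditional particle filtering with ancestor sampling
# (CPF-AS).  The 1-D linear-Gaussian model, noise levels, and iteration
# counts are illustrative assumptions; the paper additionally trains a
# diffusion model on the trajectories this smoother produces (omitted here).
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.9, 0.5, 0.3          # transition coeff., process noise std, obs. noise std
T, N = 50, 100                   # time steps, number of particles

# Simulate a toy latent trajectory and noisy observations of it.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + q * rng.normal()
y = x_true + r * rng.normal(size=T)

def cpf_as(y, x_ref):
    """One sweep of the conditional particle filter with ancestor sampling."""
    x = np.zeros((T, N))
    anc = np.zeros((T, N), dtype=int)
    x[0] = rng.normal(scale=1.0, size=N)
    x[0, -1] = x_ref[0]                               # particle N tracks the reference
    logw = -0.5 * (y[0] - x[0]) ** 2 / r ** 2
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        anc[t, :-1] = rng.choice(N, size=N - 1, p=w)  # multinomial resampling
        # Ancestor sampling: resample the reference particle's parent.
        la = np.log(w + 1e-300) - 0.5 * (x_ref[t] - a * x[t - 1]) ** 2 / q ** 2
        wa = np.exp(la - la.max()); wa /= wa.sum()
        anc[t, -1] = rng.choice(N, p=wa)
        x[t] = a * x[t - 1, anc[t]] + q * rng.normal(size=N)
        x[t, -1] = x_ref[t]                           # keep the reference state
        logw = -0.5 * (y[t] - x[t]) ** 2 / r ** 2
    # Draw one smoothed trajectory by tracing the ancestry of a final particle.
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(N, p=w)
    traj = np.empty(T)
    for t in range(T - 1, -1, -1):
        traj[t] = x[t, k]
        k = anc[t, k]
    return traj

# Iterating the CPF-AS kernel leaves the smoothing distribution invariant; the
# resulting trajectories would serve as training data for the diffusion model.
x_ref = np.zeros(T)
trajectories = []
for _ in range(20):
    x_ref = cpf_as(y, x_ref)
    trajectories.append(x_ref.copy())
print(np.mean(np.abs(np.mean(trajectories[10:], axis=0) - x_true)))
```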
Related papers
- Learning state and proposal dynamics in state-space models using differentiable particle filters and neural networks [25.103069515802538]
We introduce a new method, StateMixNN, that uses a pair of neural networks to learn the proposal distribution and transition distribution of a particle filter.
Our method is trained targeting the log-likelihood, thereby requiring only the observation series.
The proposed method significantly improves recovery of the hidden state in comparison with the state-of-the-art, showing greater improvement in highly non-linear scenarios.
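The summary above describes fitting a particle filter's proposal (and transition) networks by targeting the log-likelihood. Below is a minimal sketch of that training signal under simplifying assumptions: a toy 1-D model with a known transition, a single Gaussian proposal network, and gradients stopped at the resampling indices. It is not the StateMixNN architecture itself.

```python
# Minimal sketch of learning a neural proposal for a particle filter by
# maximizing the filter's log-likelihood estimate.  The toy model, the Gaussian
# proposal family, the known transition, and the stop-gradient through the
# resampling indices are simplifying assumptions, not the StateMixNN method.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
T, N = 25, 128
q_std, r_std = 1.0, 0.5

def f(x):                                    # known transition mean of the toy model
    return 0.5 * x + 2.0 * torch.tanh(x)

# Simulate a toy latent trajectory and noisy observations.
x = torch.zeros(T)
x[0] = q_std * torch.randn(())
for t in range(1, T):
    x[t] = f(x[t - 1]) + q_std * torch.randn(())
y = x + r_std * torch.randn(T)

# Proposal network: (x_{t-1}, y_t) -> mean and log-std of q(x_t | x_{t-1}, y_t).
proposal = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(proposal.parameters(), lr=1e-2)

def pf_loglik():
    xp = torch.zeros(N)                      # particles for x_{t-1}
    w = torch.full((N,), 1.0 / N)
    logZ = torch.zeros(())
    for t in range(T):
        idx = torch.multinomial(w.detach(), N, replacement=True)   # resample (indices detached)
        xp = xp[idx]
        out = proposal(torch.stack([xp, y[t].repeat(N)], dim=1))
        mu, log_std = out[:, 0], out[:, 1].clamp(-3.0, 3.0)
        eps = torch.randn(N)
        xnew = mu + log_std.exp() * eps                            # reparameterized proposal draw
        log_lik = -0.5 * ((y[t] - xnew) / r_std) ** 2 - math.log(r_std)
        log_trans = -0.5 * ((xnew - f(xp)) / q_std) ** 2 - math.log(q_std)
        log_prop = -0.5 * eps ** 2 - log_std
        logw = log_lik + log_trans - log_prop                      # importance weights
        logZ = logZ + torch.logsumexp(logw, 0) - math.log(N)       # log-likelihood increment
        w = torch.softmax(logw, dim=0)
        xp = xnew
    return logZ

for step in range(200):
    opt.zero_grad()
    loss = -pf_loglik()                      # train by maximizing the likelihood estimate
    loss.backward()
    opt.step()
print("final log-likelihood estimate:", float(pf_loglik()))
```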
arXiv Detail & Related papers (2024-11-23T19:30:56Z)
- Stochastic Reconstruction of Gappy Lagrangian Turbulent Signals by Conditional Diffusion Models [1.7810134788247751]
We present a method for reconstructing missing spatial and velocity data along the trajectories of small objects passively advected by turbulent flows.
Our approach makes use of conditional generative diffusion models, a recently proposed data-driven machine learning technique.
arXiv Detail & Related papers (2024-10-31T14:26:10Z)
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- Flow Map Matching [15.520853806024943]
Flow map matching is an algorithm that learns the two-time flow map of an underlying ordinary differential equation.
We show that flow map matching leads to high-quality samples with significantly reduced sampling cost compared to diffusion or interpolant methods.
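As a rough illustration of the two-time flow map idea, the sketch below regresses a small network F(x, s, t) onto numerically integrated states of an assumed toy ODE. The velocity field, architecture, and purely supervised loss are illustrative assumptions; the paper derives dedicated matching losses rather than this brute-force regression.

```python
# Minimal sketch of learning a two-time flow map F(x, s, t) ~ X_{s,t}(x) for a
# known toy ODE by regressing on numerically integrated pairs.  The velocity
# field, architecture, and supervised loss are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

def velocity(x, t):
    # Toy 2-D velocity field: a contracting rotation (time-independent).
    rot = torch.stack([-x[:, 1], x[:, 0]], dim=1)
    return rot - 0.5 * x

def integrate(x, s, t, steps=64):
    # Midpoint (RK2) integration of the ODE from time s to time t.
    h = (t - s) / steps
    time = s.clone()
    for _ in range(steps):
        k1 = velocity(x, time)
        x = x + h * velocity(x + 0.5 * h * k1, time + 0.5 * h)
        time = time + h
    return x

flow_map = nn.Sequential(nn.Linear(4, 128), nn.SiLU(),
                         nn.Linear(128, 128), nn.SiLU(),
                         nn.Linear(128, 2))
opt = torch.optim.Adam(flow_map.parameters(), lr=1e-3)

for step in range(500):
    x0 = torch.randn(256, 2)                       # samples from the source distribution
    s = torch.rand(256, 1)                         # two random times with s < t
    t = s + torch.rand(256, 1) * (1.0 - s)
    with torch.no_grad():
        xs = integrate(x0, torch.zeros(256, 1), s) # state at time s
        xt = integrate(xs, s, t)                   # state at time t (regression target)
    pred = flow_map(torch.cat([xs, s, t], dim=1))
    loss = ((pred - xt) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```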
arXiv Detail & Related papers (2024-06-11T17:41:26Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
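As a toy illustration of a self-consuming loop, the sketch below repeatedly fits a kernel density estimate to samples drawn from the previous generation's estimate and tracks how the learned distribution drifts. The 1-D Gaussian ground truth, sample sizes, and default bandwidth rule are illustrative assumptions, not the paper's setting.

```python
# Minimal sketch of a self-consuming training loop with kernel density
# estimation: each generation is fitted to samples drawn from the previous
# generation's model, so estimation error compounds over generations.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
real_data = rng.normal(loc=0.0, scale=1.0, size=2000)   # "real" data, generation 0

samples = real_data
for gen in range(1, 6):
    kde = gaussian_kde(samples)                # fit the current generative model
    samples = kde.resample(2000, seed=gen)[0]  # ...and train the next one on its output
    # Track how far the learned distribution has drifted from the real data;
    # mixing some real data back in at each generation would damp this drift.
    drift = abs(samples.std() - real_data.std())
    print(f"generation {gen}: std={samples.std():.3f}, |std drift|={drift:.3f}")
```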
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Gramian Angular Fields for leveraging pretrained computer vision models with anomalous diffusion trajectories [0.9012198585960443]
We present a new data-driven method for working with diffusive trajectories.
This method utilizes Gramian Angular Fields (GAF) to encode one-dimensional trajectories as images.
We leverage two well-established pre-trained computer-vision models, ResNet and MobileNet, to characterize the underlying diffusive regime.
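The GAF encoding itself is short enough to sketch: rescale the series to [-1, 1], map values to angles, and form the Gramian matrix of pairwise angle sums. The synthetic trajectory and the summation variant (GASF) below are illustrative assumptions; feeding the resulting image to a pretrained ResNet or MobileNet is omitted.

```python
# Minimal sketch of encoding a 1-D trajectory as a Gramian Angular Summation
# Field (GASF) image.  The synthetic trajectory is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=128))            # toy 1-D diffusive trajectory

# 1. Rescale the series to [-1, 1].
x = 2 * (traj - traj.min()) / (traj.max() - traj.min()) - 1
x = np.clip(x, -1.0, 1.0)

# 2. Map values to angles and build the Gramian Angular Summation Field:
#    GASF[i, j] = cos(phi_i + phi_j).
phi = np.arccos(x)
gasf = np.cos(phi[:, None] + phi[None, :])

print(gasf.shape)   # (128, 128) image, ready for an ImageNet-style CNN after resizing
```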
arXiv Detail & Related papers (2023-09-02T17:22:45Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
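For intuition, the sketch below simulates the kind of forward process such models reverse: Euler-Maruyama steps followed by reflection back into the support (here, an assumed interval [0, 1]). The learned score and the generalized score-matching loss from the paper are not implemented.

```python
# Minimal sketch of a reflected SDE on the unit interval: an Euler-Maruyama
# step followed by reflection into the domain.  The drift/diffusion choice and
# the boundary [0, 1] are illustrative assumptions; in the generative direction
# the step would also involve the learned score, which is omitted here.
import numpy as np

rng = np.random.default_rng(0)

def reflect_into_unit_interval(x):
    # Fold values back into [0, 1] (reflection at both boundaries).
    x = np.mod(x, 2.0)
    return np.where(x > 1.0, 2.0 - x, x)

n_paths, n_steps, dt = 1000, 500, 1e-3
sigma = 2.0
x = rng.uniform(0.0, 1.0, size=n_paths)           # start inside the support

for _ in range(n_steps):
    x = x + sigma * np.sqrt(dt) * rng.normal(size=n_paths)   # Euler-Maruyama step
    x = reflect_into_unit_interval(x)                         # project back by reflection

# All samples stay on the support throughout the simulation.
print(x.min(), x.max())
```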
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Particle-Based Score Estimation for State Space Model Learning in Autonomous Driving [62.053071723903834]
Multi-object state estimation is a fundamental problem for robotic applications.
We consider learning maximum-likelihood parameters using particle methods.
We apply our method to real data collected from autonomous vehicles.
arXiv Detail & Related papers (2022-12-14T01:21:05Z)
- Particle clustering in turbulence: Prediction of spatial and statistical properties with deep learning [6.91821181311687]
We simulate the dynamics of particles in the Epstein drag regime within a periodic domain of isotropic forced hydrodynamic turbulence.
We train a U-Net deep learning model to predict gridded representations of the particle density and velocity fields, given as input the corresponding fluid fields.
Our results suggest that, given appropriately expanded training data, deep learning could complement direct numerical simulations in predicting particle clustering within turbulent flows.
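As a rough sketch of this setup, the snippet below defines a small U-Net that maps gridded fluid fields to gridded particle density and velocity fields and runs one supervised step on stand-in data. The channel counts, grid size, and depth are assumptions for illustration, not the architecture used in the paper.

```python
# Minimal sketch of a small U-Net mapping gridded fluid fields to gridded
# particle density and velocity fields.  Channel counts (3 in, 3 out), grid
# size, and depth are assumptions for illustration.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, c_in=3, c_out=3):
        super().__init__()
        self.enc1 = conv_block(c_in, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)           # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)            # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, c_out, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                         # full resolution
        e2 = self.enc2(self.pool(e1))             # 1/2 resolution
        b = self.bottleneck(self.pool(e2))        # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# One supervised step on random stand-in data (fluid fields -> particle fields).
model = TinyUNet()
fluid = torch.randn(4, 3, 64, 64)                 # e.g. fluid density + 2 velocity components
target = torch.randn(4, 3, 64, 64)                # gridded particle density + velocity
loss = nn.functional.mse_loss(model(fluid), target)
loss.backward()
print(float(loss))
```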
arXiv Detail & Related papers (2022-10-05T15:46:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.