Sampling for Model Predictive Trajectory Planning in Autonomous Driving using Normalizing Flows
- URL: http://arxiv.org/abs/2404.09657v3
- Date: Wed, 7 Aug 2024 13:44:01 GMT
- Title: Sampling for Model Predictive Trajectory Planning in Autonomous Driving using Normalizing Flows
- Authors: Georg Rabenstein, Lars Ullrich, Knut Graichen
- Abstract summary: This paper investigates several sampling approaches for trajectory generation.
Normalizing flows, originating from the field of variational inference, are considered.
Learning-based normalizing flow models are trained for a more efficient exploration of the input domain.
- Score: 1.2972104025246092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Alongside optimization-based planners, sampling-based approaches are often used in trajectory planning for autonomous driving due to their simplicity. Model predictive path integral control is a framework that builds upon optimization principles while incorporating stochastic sampling of input trajectories. This paper investigates several sampling approaches for trajectory generation. In this context, normalizing flows originating from the field of variational inference are considered for the generation of sampling distributions, as they model transformations of simple to more complex distributions. Accordingly, learning-based normalizing flow models are trained for a more efficient exploration of the input domain for the task at hand. The developed algorithm and the proposed sampling distributions are evaluated in two simulation scenarios.
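The model predictive path integral (MPPI) loop described in the abstract can be sketched as follows: sample input perturbations from some sampling distribution, roll them out through the dynamics, and reweight the nominal input sequence by exponentiated costs. This is a minimal illustrative sketch, not the paper's implementation; the dynamics, cost, and the plain Gaussian sampler are toy placeholders, and a trained normalizing flow would take the sampler's place to explore the input domain more efficiently.

```python
import numpy as np

def mppi_step(x0, u_nom, sample_fn, dynamics, cost, n_samples=256, lam=1.0):
    """One MPPI update: sample input perturbations, roll out, reweight.

    sample_fn(n, horizon, dim) returns input perturbations; a normalizing
    flow model would replace the plain Gaussian used in the demo below.
    """
    horizon, dim = u_nom.shape
    eps = sample_fn(n_samples, horizon, dim)      # (N, T, m) perturbations
    costs = np.empty(n_samples)
    for k in range(n_samples):
        x, c = x0.copy(), 0.0
        for t in range(horizon):
            u = u_nom[t] + eps[k, t]
            x = dynamics(x, u)
            c += cost(x, u)
        costs[k] = c
    w = np.exp(-(costs - costs.min()) / lam)      # softmin importance weights
    w /= w.sum()
    return u_nom + np.einsum("k,ktm->tm", w, eps) # weighted input update

# Demo with a toy single-integrator and a Gaussian sampling distribution.
dynamics = lambda x, u: x + 0.1 * u
cost = lambda x, u: np.sum(x**2) + 0.01 * np.sum(u**2)
gaussian = lambda n, T, m: 0.5 * np.random.randn(n, T, m)
u = mppi_step(np.array([1.0, -1.0]), np.zeros((5, 2)), gaussian, dynamics, cost)
print(u.shape)  # (5, 2)
```

The update is the standard path-integral reweighting; only the sampling distribution changes between the approaches compared in the paper.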
Related papers
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Sliced Wasserstein with Random-Path Projecting Directions [49.802024788196434]
We propose an optimization-free slicing distribution that provides a fast sampling for the Monte Carlo estimation of expectation.
We derive the random-path slicing distribution (RPSD) and two variants of sliced Wasserstein, i.e., the Random-Path Projection Sliced Wasserstein (RPSW) and the Importance Weighted Random-Path Projection Sliced Wasserstein (IWRPSW).
arXiv Detail & Related papers (2024-01-29T04:59:30Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- Trajectory-oriented optimization of stochastic epidemiological models [0.873811641236639]
Epidemiological models must be calibrated to ground truth for downstream tasks.
We propose a class of Gaussian process (GP) surrogates along with an optimization strategy based on Thompson sampling.
This Trajectory Oriented Optimization (TOO) approach produces actual trajectories close to the empirical observations.
arXiv Detail & Related papers (2023-05-06T04:45:49Z)
- Unsupervised Sampling Promoting for Stochastic Human Trajectory Prediction [10.717921532244613]
We propose a novel method, called BOsampler, to adaptively mine potential paths with Bayesian optimization in an unsupervised manner.
Specifically, we model the trajectory sampling as a Gaussian process and construct an acquisition function to measure the potential sampling value.
This acquisition function applies the original distribution as prior and encourages exploring paths in the long-tail region.
arXiv Detail & Related papers (2023-04-09T19:15:14Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Parallelised Diffeomorphic Sampling-based Motion Planning [30.310891362316863]
We propose Parallelised Diffeomorphic Sampling-based Motion Planning (PDMP).
PDMP transforms sampling distributions of sampling-based motion planners, in a manner akin to normalising flows.
PDMP is able to leverage gradient information of costs to inject specifications, in a manner similar to optimisation-based motion planning methods.
arXiv Detail & Related papers (2021-08-26T13:15:11Z)
- Diverse Sampling for Normalizing Flow Based Trajectory Forecasting [34.01303881881315]
We propose Diversity Sampling for Flow (DSF) to improve the quality and diversity of trajectory samples from a pre-trained flow model.
DSF is easy to implement, and we show that it offers a simple plug-in improvement for several existing flow-based forecasting models.
arXiv Detail & Related papers (2020-11-30T18:23:29Z)
- Learning to Plan Optimally with Flow-based Motion Planner [29.124322674133]
We introduce a conditional normalising flow based distribution learned through previous experiences to improve sampling of these methods.
Our distribution can be conditioned on the current problem instance to provide an informative prior for sampling configurations within promising regions.
By using our normalising flow based distribution, a solution can be found faster, with fewer samples and better overall runtime performance.
arXiv Detail & Related papers (2020-10-21T21:46:08Z)
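Several of the flow-based entries above (PDMP, DSF, the flow-based planner, and the main paper itself) rest on the change-of-variables idea: warp samples from a simple base distribution through an invertible map whose log-Jacobian determinant is tractable. The following is a minimal sketch of that mechanism, assuming a single fixed elementwise affine layer; real normalizing flows stack many learned layers, and the parameters `mu` and `s` here are toy values.

```python
import numpy as np

# Change-of-variables sketch: warp base samples z ~ N(0, I) through an
# invertible elementwise map x = mu + exp(s) * z (one affine "flow" layer).
mu, s = np.array([2.0, -1.0]), np.array([0.3, -0.2])

def forward(z):
    """Map base samples to the transformed distribution."""
    return mu + np.exp(s) * z

def log_prob(x):
    """Density of x via the inverse map and the log-det-Jacobian."""
    z = (x - mu) * np.exp(-s)                        # inverse map
    base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
    return base - np.sum(s)                          # minus log|det J|

z = np.random.randn(1000, 2)
x = forward(z)
print(x.mean(axis=0))  # close to mu
```

Because `log_prob` is exact and cheap, the warped distribution can serve directly as an importance-sampling or MPPI input distribution, which is what makes flows attractive for the sampling problems surveyed here.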
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.