Fast Risk Assessment for Autonomous Vehicles Using Learned Models of
Agent Futures
- URL: http://arxiv.org/abs/2005.13458v2
- Date: Wed, 3 Jun 2020 23:56:09 GMT
- Title: Fast Risk Assessment for Autonomous Vehicles Using Learned Models of
Agent Futures
- Authors: Allen Wang, Xin Huang, Ashkan Jasour, and Brian Williams
- Abstract summary: This paper presents fast non-sampling based methods to assess the risk of trajectories for autonomous vehicles.
The presented methods address a wide range of representations for uncertain predictions including both Gaussian and non-Gaussian mixture models.
The presented methods are demonstrated on realistic predictions from DNNs trained on the Argoverse and CARLA datasets.
- Score: 10.358493658420173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents fast non-sampling based methods to assess the risk of
trajectories for autonomous vehicles when probabilistic predictions of other
agents' futures are generated by deep neural networks (DNNs). The presented
methods address a wide range of representations for uncertain predictions
including both Gaussian and non-Gaussian mixture models for predictions of both
agent positions and controls. We show that the problem of risk assessment when
Gaussian mixture models (GMMs) of agent positions are learned can be solved
rapidly to arbitrary levels of accuracy with existing numerical methods. To
address the problem of risk assessment for non-Gaussian mixture models of agent
position, we propose finding upper bounds on risk using Chebyshev's Inequality
and sums-of-squares (SOS) programming; they are both of interest as the former
is much faster while the latter can be arbitrarily tight. These approaches only
require statistical moments of agent positions to determine upper bounds on
risk. To perform risk assessment when models are learned for agent controls as
opposed to positions, we develop TreeRing, an algorithm analogous to tree
search over the ring of polynomials that can be used to exactly propagate
moments of control distributions into position distributions through nonlinear
dynamics. The presented methods are demonstrated on realistic predictions from
DNNs trained on the Argoverse and CARLA datasets and are shown to be effective
for rapidly assessing the probability of low probability events.
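To make the position-level risk assessment concrete, below is a minimal non-sampling sketch, not the authors' implementation: it evaluates the collision probability under a hypothetical 2-D GMM prediction of another agent's position by numerical integration over a circular collision region, and contrasts it with a Chebyshev-style upper bound that uses only the mixture's first two moments. The mixture parameters, the collision geometry, and the particular multivariate Chebyshev bound are illustrative assumptions.
```python
# Minimal sketch (assumptions: illustrative GMM parameters, a circular
# collision region, and a multivariate Chebyshev bound; this is not the
# paper's code, and it omits the SOS-programming bound).
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import multivariate_normal

# Hypothetical GMM prediction of another agent's 2-D position at one time step.
weights = np.array([0.7, 0.3])
means = [np.array([10.0, 2.0]), np.array([12.0, -1.0])]
covs = [np.diag([1.0, 0.5]), np.diag([2.0, 1.0])]

# Circular collision region around the ego vehicle's planned position.
ego = np.array([14.0, 3.0])
radius = 1.0

def gmm_density(y, x):
    """GMM probability density at (x, y); dblquad passes arguments as (y, x)."""
    p = np.array([x, y])
    return sum(w * multivariate_normal.pdf(p, m, c)
               for w, m, c in zip(weights, means, covs))

def gmm_collision_risk():
    """Collision probability for the GMM case, accurate to quadrature
    tolerance, obtained by integrating the density over the collision disk."""
    half = lambda x: np.sqrt(max(radius**2 - (x - ego[0])**2, 0.0))
    risk, _ = dblquad(gmm_density,
                      ego[0] - radius, ego[0] + radius,
                      lambda x: ego[1] - half(x),
                      lambda x: ego[1] + half(x))
    return risk

def chebyshev_risk_bound():
    """Distribution-free upper bound that uses only the mixture mean and
    covariance, so it also applies to non-Gaussian mixtures with the same
    first two moments.  By Markov's inequality on the squared Mahalanobis
    distance M = (X - mu)^T Sigma^{-1} (X - mu), whose mean equals the
    dimension d, we have P(M >= t^2) <= d / t^2; and {X in disk} implies
    M >= t_min^2, where t_min is the smallest Mahalanobis distance from the
    mean to any point of the disk."""
    mu = sum(w * m for w, m in zip(weights, means))
    cov = sum(w * (c + np.outer(m - mu, m - mu))
              for w, m, c in zip(weights, means, covs))
    cov_inv = np.linalg.inv(cov)
    # The mean lies outside the disk here, so the minimum over the disk is
    # attained on its boundary; a fine angular grid suffices for this sketch.
    theta = np.linspace(0.0, 2.0 * np.pi, 2000)
    boundary = ego + radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    diffs = boundary - mu
    t2_min = np.min(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))
    return min(1.0, 2.0 / t2_min)  # d = 2

print("integrated GMM collision risk:", gmm_collision_risk())
print("moment-based upper bound     :", chebyshev_risk_bound())
```
The SOS-programming bound mentioned in the abstract would replace the Chebyshev step with a polynomial optimization over higher-order moments; it is omitted here because it requires a semidefinite-programming solver.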
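The abstract's TreeRing idea of pushing moments of predicted controls through the dynamics can also be illustrated on a deliberately simple example. The sketch below assumes a scalar polynomial update x_{t+1} = x_t + u_t * dt with the control independent of the current state, and only shows the basic substitution step (expand the update into monomials and replace each monomial by a known moment); it is not the paper's TreeRing algorithm, which organizes this bookkeeping for multivariate nonlinear dynamics over a whole horizon.
```python
# Toy moment-propagation sketch (assumptions: scalar dynamics, x_t and u_t
# independent, deterministic dt; not the paper's TreeRing algorithm).
import sympy as sp

x, u = sp.symbols("x u")
DT = 0.1
update = x + DT * u  # next position as a polynomial in current state and control

# Known moments of the current position and of the (possibly non-Gaussian)
# predicted control.  The numbers are placeholders.
moments = {
    x: 1.0, x**2: 1.5,   # E[x_t], E[x_t^2]
    u: 0.2, u**2: 0.3,   # E[u_t], E[u_t^2]
}

def expected(expr):
    """E[expr] for a polynomial in x and u: expand into monomials and replace
    each factor by its moment, using independence so E[x^a u^b] = E[x^a]E[u^b]."""
    total = sp.Integer(0)
    for term in sp.expand(expr).as_ordered_terms():
        coeff, factors = term.as_coeff_mul()
        value = coeff
        for f in factors:
            value *= moments[f]
        total += value
    return float(total)

mean_next = expected(update)        # E[x_{t+1}]
second_next = expected(update**2)   # E[x_{t+1}^2]
print("next-step mean    :", mean_next)
print("next-step variance:", second_next - mean_next**2)
```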
Related papers
- MAP-Former: Multi-Agent-Pair Gaussian Joint Prediction [6.110153599741102]
For trajectory risk assessment, there is a gap between the information provided by traffic motion prediction modules and what is actually needed.
Existing prediction models yield joint predictions of agents' future trajectories with uncertainty weights or marginal Gaussian probability density functions (PDFs) for single agents.
This paper introduces a novel approach to motion prediction, focusing on predicting agent-pair covariance matrices in a "scene-centric" manner.
arXiv Detail & Related papers (2024-04-30T06:21:42Z)
- Distribution-free risk assessment of regression-based machine learning algorithms [6.507711025292814]
We focus on regression algorithms and the risk-assessment task of computing the probability of the true label lying inside an interval defined around the model's prediction.
We solve the risk-assessment problem using the conformal prediction approach, which provides prediction intervals that are guaranteed to contain the true label with a given probability (a minimal split-conformal sketch appears after this list).
arXiv Detail & Related papers (2023-10-05T13:57:24Z)
- Model Predictive Control with Gaussian-Process-Supported Dynamical Constraints for Autonomous Vehicles [82.65261980827594]
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
A multi-mode predictive control approach considers the possible intentions of the human drivers.
arXiv Detail & Related papers (2023-03-08T17:14:57Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with distributionally robust optimization (DRO) using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Fast nonlinear risk assessment for autonomous vehicles using learned conditional probabilistic models of agent futures [19.247932561037487]
This paper presents fast non-sampling based methods to assess the risk for trajectories of autonomous vehicles.
The presented methods address a wide range of representations for uncertain predictions including both Gaussian and non-Gaussian mixture models.
We construct deterministic linear dynamical systems that govern the exact time evolution of the moments of uncertain position.
arXiv Detail & Related papers (2021-09-21T05:55:39Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical-computational trade-offs of different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- Heterogeneous-Agent Trajectory Forecasting Incorporating Class Uncertainty [54.88405167739227]
We present HAICU, a method for heterogeneous-agent trajectory forecasting that explicitly incorporates agents' class probabilities.
We additionally present PUP, a new challenging real-world autonomous driving dataset.
We demonstrate that incorporating class probabilities in trajectory forecasting significantly improves performance in the face of uncertainty.
arXiv Detail & Related papers (2021-04-26T10:28:34Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- DeepHazard: neural network for time-varying risks [0.6091702876917281]
We propose a new flexible method for survival prediction: DeepHazard, a neural network for time-varying risks.
Our approach is tailored for a wide range of continuous hazards forms, with the only restriction of being additive in time.
Numerical examples illustrate that our approach outperforms existing state-of-the-art methodology in terms of predictive capability evaluated through the C-index metric.
arXiv Detail & Related papers (2020-07-26T21:01:49Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment.
We propose an uncertainty-aware planning method called Robust Imitative Planning (RIP).
Our method can detect and recover from some distribution shifts, reducing overconfident and catastrophic extrapolations in OOD scenes.
We introduce CARNOVEL, a novel-scene benchmark for autonomous cars, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
arXiv Detail & Related papers (2020-06-26T11:07:32Z)
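As referenced from the "Distribution-free risk assessment of regression-based machine learning algorithms" entry above, below is a minimal split-conformal sketch of the kind of interval-based risk assessment that paper describes. The synthetic data, the least-squares point predictor, and the chosen interval half-width are illustrative assumptions, not details taken from that paper.
```python
# Minimal split-conformal sketch (assumptions: synthetic data, an ordinary
# least-squares point predictor, and an illustrative half-width; not the
# cited paper's code or exact procedure).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = 2x + 1 + Gaussian noise.
x = rng.uniform(-3.0, 3.0, size=600)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Split into a proper training set and a held-out calibration set.
x_train, y_train = x[:400], y[:400]
x_cal, y_cal = x[400:], y[400:]

# Fit any point predictor on the training split (here: least squares).
slope, intercept = np.polyfit(x_train, y_train, deg=1)
predict = lambda t: slope * t + intercept

# Absolute residuals on the calibration split serve as conformity scores.
scores = np.abs(y_cal - predict(x_cal))
n = scores.size

# (a) Prediction interval with marginal coverage >= 1 - alpha for a new point.
alpha = 0.1
rank = min(int(np.ceil((n + 1) * (1.0 - alpha))), n)   # finite-sample rank
q = np.sort(scores)[rank - 1]                          # conformal quantile
x_new = 1.5
print(f"90% interval at x={x_new}: "
      f"[{predict(x_new) - q:.2f}, {predict(x_new) + q:.2f}]")

# (b) Risk-assessment view: for a *given* half-width w, an empirical estimate
#     of P(true label falls inside prediction +/- w) from calibration data.
w = 1.0
coverage = np.mean(scores <= w)
print(f"estimated coverage of the +/-{w} interval: {coverage:.3f}")
```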