Personalized Trajectory Prediction via Distribution Discrimination
- URL: http://arxiv.org/abs/2107.14204v1
- Date: Thu, 29 Jul 2021 17:42:12 GMT
- Title: Personalized Trajectory Prediction via Distribution Discrimination
- Authors: Guangyi Chen, Junlong Li, Nuoxing Zhou, Liangliang Ren, Jiwen Lu
- Abstract summary: Trajectory prediction is confronted with the dilemma of capturing the multi-modal nature of future dynamics.
We present a distribution discrimination (DisDis) method to predict personalized motion patterns.
Our method can be integrated with existing multi-modal predictive models as a plug-and-play module.
- Score: 78.69458579657189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory prediction is confronted with the dilemma of capturing the
multi-modal nature of future dynamics with both diversity and accuracy. In this
paper, we present a distribution discrimination (DisDis) method to predict
personalized motion patterns by distinguishing the potential distributions.
Motivated by the observation that each person's motion pattern is personalized due to
his/her habits, our DisDis learns latent distributions to represent different
motion patterns and optimizes them via contrastive discrimination. This
distribution discrimination encourages latent distributions to be more
discriminative. Our method can be integrated with existing multi-modal
stochastic predictive models as a plug-and-play module to learn a more
discriminative latent distribution. To evaluate the latent distribution, we
further propose a new metric, probability cumulative minimum distance (PCMD)
curve, which cumulatively computes the minimum distance over predictions sorted by
their probabilities. Experimental results on the ETH and UCY datasets show the
effectiveness of our method.
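The abstract describes the PCMD curve only at a high level. Below is a minimal sketch of one plausible reading, in which the K sampled trajectories are ranked by their probability under the learned latent distribution and PCMD(k) reports the minimum average displacement error (ADE) among the k most probable samples. The function name, array shapes, and the use of ADE as the distance are assumptions, not taken from the paper.

```python
import numpy as np

def pcmd_curve(pred_trajs, pred_probs, gt_traj):
    """Hypothetical PCMD curve for a single pedestrian.

    pred_trajs: (K, T, 2) array of K sampled future trajectories.
    pred_probs: (K,) probability of each sample under the learned latent distribution.
    gt_traj:    (T, 2) ground-truth future trajectory.
    Returns PCMD(k) for k = 1..K: the minimum ADE among the k most probable samples.
    """
    trajs = np.asarray(pred_trajs, dtype=float)
    probs = np.asarray(pred_probs, dtype=float)
    gt = np.asarray(gt_traj, dtype=float)

    order = np.argsort(-probs)            # rank samples, most probable first
    ranked = trajs[order]
    # Average displacement error (ADE) of every sample w.r.t. the ground truth.
    ade = np.linalg.norm(ranked - gt[None], axis=-1).mean(axis=-1)  # shape (K,)
    # Cumulative minimum over the probability-sorted samples yields the curve.
    return np.minimum.accumulate(ade)

# Assumed usage: average pcmd_curve(...) over all test trajectories and plot against k.
```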
Related papers
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- Training Implicit Generative Models via an Invariant Statistical Loss [3.139474253994318]
Implicit generative models have the capability to learn arbitrary complex data distributions.
On the downside, training requires distinguishing real data from artificially generated samples using adversarial discriminators.
We develop a discriminator-free method for training one-dimensional (1D) generative implicit models.
arXiv Detail & Related papers (2024-02-26T09:32:28Z)
- Statistical Inference Under Constrained Selection Bias [20.862583584531322]
We propose a framework that enables statistical inference in the presence of selection bias.
The output is high-probability bounds on the value of an estimand for the target distribution.
We analyze the computational and statistical properties of methods to estimate these bounds and show that our method can produce informative bounds on a variety of simulated and semisynthetic tasks.
arXiv Detail & Related papers (2023-06-05T23:05:26Z)
- Learning and Predicting Multimodal Vehicle Action Distributions in a Unified Probabilistic Model Without Labels [26.303522885475406]
We present a unified probabilistic model that learns a representative set of discrete vehicle actions and predicts the probability of each action given a particular scenario.
Our model also enables us to estimate the distribution over continuous trajectories conditioned on a scenario, representing what each discrete action would look like if executed in that scenario.
arXiv Detail & Related papers (2022-12-14T04:01:19Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts (a minimal sketch of this idea appears after this list).
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning [76.17436599516074]
We introduce distributed NLI, a new NLU task with a goal to predict the distribution of human judgements for natural language inference.
We show that models can capture the human judgement distribution by applying additional distribution estimation methods, namely Monte Carlo (MC) Dropout, Deep Ensemble, Re-Calibration, and Distribution Distillation.
arXiv Detail & Related papers (2021-04-18T01:25:19Z)
- A Brief Introduction to Generative Models [8.031257560764336]
We introduce and motivate generative modeling as a central task for machine learning.
We outline the maximum likelihood approach and how it can be interpreted as minimizing KL-divergence.
We explore the alternative adversarial approach which involves studying the differences between an estimating distribution and a real data distribution.
arXiv Detail & Related papers (2021-02-27T16:49:41Z)
- Out-of-distribution detection for regression tasks: parameter versus predictor entropy [2.026281591452464]
It is crucial to detect when an instance lies too far from the training samples for the machine learning model to be trusted.
For neural networks, one approach to this task consists of learning a diversity of predictors that all can explain the training data.
We propose a new way of estimating the entropy of a distribution on predictors based on nearest neighbors in function space.
arXiv Detail & Related papers (2020-10-24T21:41:21Z)
- Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
arXiv Detail & Related papers (2020-07-24T05:18:17Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
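For the "Predicting with Confidence on Unseen Distributions" entry above, the following is a minimal sketch of the difference-of-confidences idea as that summary describes it: the gap between a classifier's average confidence on held-out in-distribution data and its average confidence on shifted data serves as an estimate of the accuracy change. The function names and the choice of max-softmax probability as the confidence score are assumptions, not details from the paper.

```python
import numpy as np

def difference_of_confidences(conf_source, conf_target):
    """Sketch of the difference-of-confidences (DoC) signal.

    conf_source: per-example confidences (e.g. max softmax probability) on
                 held-out in-distribution data.
    conf_target: per-example confidences on the shifted (unseen) distribution.
    """
    return float(np.mean(conf_source) - np.mean(conf_target))

def predict_shifted_accuracy(acc_source, conf_source, conf_target):
    # Assumed usage: the drop in accuracy under shift is estimated by DoC,
    # so the predicted target accuracy is the source accuracy minus DoC.
    return acc_source - difference_of_confidences(conf_source, conf_target)
```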
This list is automatically generated from the titles and abstracts of the papers in this site.