Generalizability Analysis of Graph-based Trajectory Predictor with
Vectorized Representation
- URL: http://arxiv.org/abs/2208.03578v1
- Date: Sat, 6 Aug 2022 20:19:52 GMT
- Title: Generalizability Analysis of Graph-based Trajectory Predictor with
Vectorized Representation
- Authors: Juanwu Lu, Wei Zhan, Masayoshi Tomizuka, Yeping Hu
- Abstract summary: Trajectory prediction is one of the essential tasks for autonomous vehicles.
Recent progress in machine learning has given rise to a series of advanced trajectory prediction algorithms.
- Score: 29.623692599892365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory prediction is one of the essential tasks for autonomous vehicles.
Recent progress in machine learning has given rise to a series of advanced
trajectory prediction algorithms. Lately, the effectiveness of using graph
neural networks (GNNs) with vectorized representations for trajectory
prediction has been demonstrated by many researchers. Nonetheless, these
algorithms either pay little attention to models' generalizability across
various scenarios or simply assume that training and test data follow similar
statistics. In fact, when test scenarios are unseen or Out-of-Distribution
(OOD), the resulting train-test domain shift usually leads to significant
degradation in prediction performance, which impacts downstream modules and
can eventually lead to severe accidents. It is therefore of great importance
to thoroughly investigate prediction models in terms of their
generalizability, which can not only help identify their weaknesses but also
provide insights on how to improve them. This paper proposes a
generalizability analysis framework that uses feature attribution methods to
help interpret black-box models. As a case study, we provide an in-depth
generalizability analysis of one of the state-of-the-art graph-based trajectory
predictors that utilize vectorized representation. Results show significant
performance degradation due to domain shift, and feature attribution provides
insights for identifying potential causes of these problems. Finally, we
summarize the common prediction challenges and explain how weighting biases
induced by the training process can deteriorate accuracy.
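The abstract's analysis framework relies on feature attribution to interpret a black-box predictor. As a minimal sketch of that idea, the snippet below implements a gradient-based attribution method (Integrated Gradients, approximated with a midpoint Riemann sum and finite-difference gradients). The toy model, inputs, and baseline are illustrative assumptions, not the paper's actual predictor or setup; they exist only to show how per-feature attribution scores are computed for a scalar output.

```python
# Sketch of gradient-based feature attribution (Integrated Gradients).
# NOTE: toy_model, the input x, and the all-zero baseline are
# hypothetical stand-ins, not the paper's trajectory predictor.
import math

def toy_model(x):
    # Stand-in for a black-box model's scalar output for a 2-feature input.
    return math.tanh(2.0 * x[0] + 0.5 * x[1]) + 0.1 * x[1]

def numeric_grad(f, x, eps=1e-5):
    # Central finite differences approximate df/dx_i at the point x.
    grads = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += eps
        lo[i] -= eps
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads

def integrated_gradients(f, x, baseline, steps=50):
    # IG attributes (x_i - x'_i) * average gradient of f along the
    # straight-line path from the baseline x' to the input x,
    # approximated here with a midpoint Riemann sum.
    avg = [0.0] * len(x)
    for k in range(1, steps + 1):
        t = (k - 0.5) / steps
        point = [b + t * (xi - b) for xi, b in zip(x, baseline)]
        g = numeric_grad(f, point)
        avg = [a + gi / steps for a, gi in zip(avg, g)]
    return [(xi - b) * a for xi, b, a in zip(x, baseline, avg)]

x = [0.8, -0.3]
baseline = [0.0, 0.0]
attr = integrated_gradients(toy_model, x, baseline)
# Completeness property of IG: attributions sum (approximately) to
# f(x) - f(baseline), so each score is a share of the output change.
total = sum(attr)
```

In a generalizability study, attributions like these can be computed for in-distribution and OOD inputs and compared: a feature whose attribution collapses or spikes under domain shift is a candidate cause of the degraded predictions.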
Related papers
- Critical Example Mining for Vehicle Trajectory Prediction using Flow-based Generative Models [10.40439055916036]
This paper proposes a data-driven approach to estimate the rareness of the trajectories.
By combining the rareness estimation of observations with whole trajectories, the proposed method effectively identifies a subset of data that is relatively hard to predict.
arXiv Detail & Related papers (2024-10-21T15:02:30Z) - Towards Generalizable and Interpretable Motion Prediction: A Deep
Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
arXiv Detail & Related papers (2024-03-10T04:16:04Z) - Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z) - Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves the network generalization ability on multiple vision tasks.
Our methods are simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z) - Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models with few examples show strong prediction bias across labels.
Although few-shot fine-tuning can mitigate the prediction bias, our analysis shows models gain performance improvement by capturing non-task-related features.
These observations alert that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z) - Interpretable and Generalizable Graph Learning via Stochastic Attention
Mechanism [6.289180873978089]
Interpretable graph learning is needed, as many scientific applications depend on learning models to collect insights from graph-structured data.
Previous works mostly focused on using post-hoc approaches to interpret a pre-trained model.
We propose Graph Stochastic Attention (GSAT), an attention mechanism derived from the information bottleneck principle.
arXiv Detail & Related papers (2022-01-31T03:59:48Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that drives the prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z) - Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z) - A comprehensive study on the prediction reliability of graph neural
networks for virtual screening [0.0]
We investigate the effects of model architectures, regularization methods, and loss functions on the prediction performance and reliability of classification results.
Our result highlights that the correct choice of regularization and inference methods is important for achieving a high success rate.
arXiv Detail & Related papers (2020-03-17T10:13:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.