Variational Autoencoder-Based Vehicle Trajectory Prediction with an
Interpretable Latent Space
- URL: http://arxiv.org/abs/2103.13726v1
- Date: Thu, 25 Mar 2021 10:15:53 GMT
- Title: Variational Autoencoder-Based Vehicle Trajectory Prediction with an
Interpretable Latent Space
- Authors: Marion Neumeier, Andreas Tollkühn, Thomas Berberich and Michael
Botsch
- Abstract summary: This paper introduces the Descriptive Variational Autoencoder (DVAE), an unsupervised and end-to-end trainable neural network for predicting vehicle trajectories.
The proposed model provides a similar prediction accuracy but with the great advantage of having an interpretable latent space.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces the Descriptive Variational Autoencoder (DVAE), an
unsupervised and end-to-end trainable neural network for predicting vehicle
trajectories that provides partial interpretability. The novel approach is
based on the architecture and objective of common variational autoencoders. By
introducing expert knowledge within the decoder part of the autoencoder, the
encoder learns to extract latent parameters that provide a graspable meaning in
human terms. Such an interpretable latent space enables validation by
expert-defined rule sets. The evaluation of the DVAE is performed using the
publicly available highD dataset for highway traffic scenarios. In comparison
to a conventional variational autoencoder with equivalent complexity, the
proposed model provides a similar prediction accuracy but with the great
advantage of having an interpretable latent space. For crucial decision making
and for assessing the trustworthiness of a prediction, this property is highly
desirable.
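A minimal sketch of the core idea follows, assuming (for illustration only) that the expert knowledge in the decoder is a constant-acceleration, constant-yaw-rate kinematic rollout and that the latent variables are read as initial speed, acceleration and yaw rate; the paper's actual expert decoder, layer sizes and latent semantics may differ.

```python
# Illustrative sketch of a VAE whose decoder embeds expert (kinematic) knowledge,
# so the encoder is pushed to produce physically meaningful latent parameters.
# The kinematic model and latent semantics below are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, hist_len=30, hidden=64, latent_dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hist_len * 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, hist):                      # hist: (B, hist_len, 2) past x/y positions
        h = self.net(hist.flatten(1))
        return self.mu(h), self.logvar(h)

class KinematicDecoder(nn.Module):
    """Expert-knowledge decoder: rolls out a constant-acceleration, constant-yaw-rate model."""
    def __init__(self, horizon=50, dt=0.1):
        super().__init__()
        self.horizon, self.dt = horizon, dt

    def forward(self, z):                         # assumed semantics: z[:,0]=speed, z[:,1]=accel, z[:,2]=yaw rate
        v0, a, w = z[:, 0:1], z[:, 1:2], z[:, 2:3]
        t = torch.arange(1, self.horizon + 1, device=z.device) * self.dt
        v = v0 + a * t                            # speed over time
        psi = w * t                               # heading over time
        dx = v * torch.cos(psi) * self.dt
        dy = v * torch.sin(psi) * self.dt
        return torch.stack([dx.cumsum(-1), dy.cumsum(-1)], dim=-1)   # (B, horizon, 2)

def dvae_loss(hist, future, enc, dec, beta=1.0):
    mu, logvar = enc(hist)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)          # reparameterization trick
    recon = F.mse_loss(dec(z), future)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

Because the decoder in this sketch can only express trajectories that the kinematic model generates, the encoder is forced to produce latents with a physical reading, which is what makes validation against expert-defined rule sets possible.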
Related papers
- Interpret the Internal States of Recommendation Model with Sparse Autoencoder [26.021277330699963]
RecSAE is an automatic, generalizable probing method for interpreting the internal states of Recommendation models.
We train an autoencoder with sparsity constraints to reconstruct internal activations of recommendation models.
We automate the construction of concept dictionaries based on the relationship between latent activations and input item sequences.
arXiv Detail & Related papers (2024-11-09T08:22:31Z)
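A minimal sketch of the sparse-autoencoder probing idea summarized above, assuming a plain L1 penalty on an overcomplete ReLU latent layer; RecSAE's actual architecture, dimensions and sparsity constraint may differ.

```python
# Sketch: sparse autoencoder trained to reconstruct a model's internal activations.
# Latent units that activate for specific input item sequences can then be inspected.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, act_dim=256, latent_dim=2048):
        super().__init__()
        self.encoder = nn.Linear(act_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, act_dim)

    def forward(self, activations):
        latent = torch.relu(self.encoder(activations))
        return self.decoder(latent), latent

def sae_loss(model, activations, l1_weight=1e-3):
    recon, latent = model(activations)
    recon_loss = (recon - activations).pow(2).mean()
    sparsity = latent.abs().mean()            # L1 penalty keeps most latent units inactive
    return recon_loss + l1_weight * sparsity
```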
- Traj-Explainer: An Explainable and Robust Multi-modal Trajectory Prediction Approach [12.60529039445456]
Navigating complex traffic environments has been significantly enhanced by advancements in intelligent technologies, enabling accurate environment perception and trajectory prediction for automated vehicles.
Existing research often neglects joint reasoning over the agents in a scenario and lacks interpretability in trajectory prediction models.
An explainability-oriented trajectory prediction model, named Traj-Explainer, is designed in this work based on a conditional diffusion approach to multimodal trajectory prediction.
arXiv Detail & Related papers (2024-10-22T08:17:33Z)
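A minimal sketch of a conditional denoising-diffusion objective over future trajectories, as one reading of the diffusion-conditional model named above; the network, noise schedule and conditioning are illustrative assumptions, not Traj-Explainer's design.

```python
# Sketch: training step of a conditional diffusion model over future trajectories.
# Generic DDPM-style noise-prediction objective, assumed for illustration only.
import torch
import torch.nn as nn

T = 100                                            # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)      # cumulative noise schedule

class Denoiser(nn.Module):
    def __init__(self, horizon=50, cond_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(horizon * 2 + cond_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * 2))

    def forward(self, noisy_future, t, scene_context):
        # noisy_future: (B, horizon, 2), t: (B,), scene_context: (B, cond_dim)
        x = torch.cat([noisy_future.flatten(1), scene_context,
                       t.float().unsqueeze(1) / T], dim=1)
        return self.net(x).view_as(noisy_future)

def diffusion_loss(model, future, scene_context):
    B = future.shape[0]
    t = torch.randint(0, T, (B,))
    eps = torch.randn_like(future)
    ab = alpha_bar[t].view(B, 1, 1)
    noisy = ab.sqrt() * future + (1 - ab).sqrt() * eps              # forward diffusion
    return ((model(noisy, t, scene_context) - eps) ** 2).mean()     # predict the added noise
```

Sampling the learned reverse process several times from different noise then yields multiple candidate futures, which is where the multimodality comes from.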
- Probabilistic Prediction of Longitudinal Trajectory Considering Driving Heterogeneity with Interpretability [12.929047288003213]
This study proposes a trajectory prediction framework that builds on Mixture Density Networks (MDN) and accounts for driving heterogeneity to provide probabilistic and personalized predictions.
The proposed framework is tested on a wide-ranging vehicle trajectory dataset.
arXiv Detail & Related papers (2023-12-19T12:56:56Z)
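A minimal sketch of a Mixture Density Network output head and its negative log-likelihood, assuming a Gaussian mixture over a single predicted quantity per step; the paper's exact parameterization and how it injects driving heterogeneity are not shown here.

```python
# Sketch: a Mixture Density Network head for probabilistic trajectory prediction.
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, feat_dim=64, n_components=5):
        super().__init__()
        self.pi = nn.Linear(feat_dim, n_components)          # mixture weights (logits)
        self.mu = nn.Linear(feat_dim, n_components)          # component means
        self.log_sigma = nn.Linear(feat_dim, n_components)   # component log std-devs

    def forward(self, features):
        return self.pi(features), self.mu(features), self.log_sigma(features)

def mdn_nll(pi_logits, mu, log_sigma, target):
    # target: (B, 1) observed value; negative log-likelihood of the Gaussian mixture
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(target) + torch.log_softmax(pi_logits, dim=-1)
    return -torch.logsumexp(log_prob, dim=-1).mean()
```

One plausible (assumed) way to account for driving heterogeneity is to concatenate driver-specific features into the head's input so different drivers receive different mixtures.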
- Interpretable Spectral Variational AutoEncoder (ISVAE) for time series clustering [48.0650332513417]
We introduce a novel model that incorporates an interpretable bottleneck, termed the Filter Bank (FB), at the outset of a Variational Autoencoder (VAE).
This arrangement compels the VAE to attend to the most informative segments of the input signal.
By deliberately constraining the VAE with this FB, we promote the development of an encoding that is discernible, separable, and of reduced dimensionality.
arXiv Detail & Related papers (2023-10-18T13:06:05Z)
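A minimal sketch of one possible reading of a filter-bank bottleneck placed in front of a VAE encoder; the filter design and the summary passed on to the encoder are assumptions, not the ISVAE architecture.

```python
# Sketch: a learned "filter bank" bottleneck in front of a VAE encoder.
# Each filter scores the signal; only a compact summary per filter is passed on,
# forcing the encoder to focus on the most informative parts of the input.
import torch
import torch.nn as nn

class FilterBank(nn.Module):
    def __init__(self, n_filters=8, kernel_size=15):
        super().__init__()
        self.filters = nn.Conv1d(1, n_filters, kernel_size, padding=kernel_size // 2)

    def forward(self, signal):                             # signal: (B, T) univariate time series
        responses = self.filters(signal.unsqueeze(1))      # (B, n_filters, T)
        # keep only each filter's peak response and its (normalized) location
        peak, idx = responses.max(dim=-1)
        return torch.cat([peak, idx.float() / signal.shape[-1]], dim=1)   # (B, 2 * n_filters)
```

The compact per-filter summary would then replace the raw series as input to a standard VAE encoder, yielding the reduced-dimensional, separable encoding described above.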
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
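A minimal sketch of per-dimension latent quantization with a straight-through gradient, as one way to impose the described inductive bias; codebook sizes and the estimator choice are illustrative assumptions.

```python
# Sketch: snap each latent dimension to its nearest entry in a small, learned,
# dimension-specific scalar codebook, organizing the latent space into a discrete grid.
import torch
import torch.nn as nn

class LatentQuantizer(nn.Module):
    def __init__(self, latent_dim=8, values_per_dim=10):
        super().__init__()
        self.codebooks = nn.Parameter(torch.randn(latent_dim, values_per_dim))

    def forward(self, z):                                 # z: (B, latent_dim)
        dists = (z.unsqueeze(-1) - self.codebooks) ** 2   # (B, latent_dim, values_per_dim)
        idx = dists.argmin(dim=-1)                        # nearest entry per dimension
        rows = torch.arange(z.shape[1], device=z.device)
        z_q = self.codebooks[rows, idx]                   # (B, latent_dim) quantized values
        # straight-through estimator so gradients still reach the encoder;
        # in practice a codebook/commitment loss is also needed to train self.codebooks
        return z + (z_q - z).detach()
```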
- Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving [74.28510044056706]
Existing methods usually adopt the decoupled encoder-decoder paradigm.
In this work, we aim to alleviate the problem by two principles.
We first predict a coarse-grained future position and action based on the encoder features.
Then, conditioned on the position and action, the future scene is imagined to check the ramifications of driving accordingly.
arXiv Detail & Related papers (2023-05-10T15:22:02Z)
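A minimal sketch of the coarse-predict, imagine, then refine decoder described above; module names, feature sizes and the single refinement round are illustrative assumptions.

```python
# Sketch: predict a coarse future position/action, imagine the resulting future scene
# feature, then refine the prediction conditioned on that imagined outcome.
import torch
import torch.nn as nn

class CoarseHead(nn.Module):
    def __init__(self, feat_dim=256, out_dim=4):          # e.g. (x, y, steer, throttle), assumed
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

    def forward(self, scene_feat):
        return self.net(scene_feat)

class ImagineAndRefine(nn.Module):
    def __init__(self, feat_dim=256, out_dim=4):
        super().__init__()
        self.imagine = nn.Sequential(nn.Linear(feat_dim + out_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))   # predicted future scene feature
        self.refine = nn.Sequential(nn.Linear(2 * feat_dim + out_dim, 128), nn.ReLU(),
                                    nn.Linear(128, out_dim))     # residual correction

    def forward(self, scene_feat, coarse):
        future_feat = self.imagine(torch.cat([scene_feat, coarse], dim=-1))
        delta = self.refine(torch.cat([scene_feat, future_feat, coarse], dim=-1))
        return coarse + delta, future_feat

# usage sketch: coarse = CoarseHead()(feat); refined, imagined = ImagineAndRefine()(feat, coarse)
```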
- Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a Hierarchical VAE designed for this approach, which can visually audit a classifier in applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study how a VAE's failure to consistently re-encode its own generated samples affects the learned representations, and also the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
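A minimal sketch of a self-consistency term of the kind described above, to be added to a standard VAE objective; the distance and the use of the posterior mean are illustrative assumptions.

```python
# Sketch: a generated sample, when re-encoded, should map back to the latent code
# that produced it. `encoder` and `decoder` are assumed to have the usual VAE signatures.
import torch

def self_consistency_loss(encoder, decoder, batch_size, latent_dim):
    z = torch.randn(batch_size, latent_dim)        # sample latents from the prior
    x_gen = decoder(z)                             # generate samples
    mu, logvar = encoder(x_gen)                    # re-encode the generated samples
    return ((mu - z) ** 2).mean()                  # the encoder should recover z
```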
- On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure.
arXiv Detail & Related papers (2020-04-20T10:34:10Z)
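A minimal sketch of coupling a VAE with a deterministic autoencoder of the same structure; the summary above does not specify the coupling objective, so a simple alignment of codes and reconstructions over continuous data is assumed here (the paper itself targets text modeling).

```python
# Sketch: train a VAE jointly with a structurally identical deterministic autoencoder,
# with an assumed alignment term keeping their codes and reconstructions close.
# vae_enc/vae_dec/det_enc/det_dec are hypothetical callables with matching architectures.
import torch
import torch.nn.functional as F

def coupled_vae_loss(vae_enc, vae_dec, det_enc, det_dec, x, beta=1.0, gamma=0.1):
    # VAE branch
    mu, logvar = vae_enc(x)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    x_vae = vae_dec(z)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    elbo_loss = F.mse_loss(x_vae, x) + beta * kl
    # deterministic branch with the same structure
    h = det_enc(x)
    x_det = det_dec(h)
    ae_loss = F.mse_loss(x_det, x)
    # assumed coupling term: align stochastic and deterministic codes/reconstructions
    couple = F.mse_loss(mu, h.detach()) + F.mse_loss(x_vae, x_det.detach())
    return elbo_loss + ae_loss + gamma * couple
```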