Exploring Latent Pathways: Enhancing the Interpretability of Autonomous Driving with a Variational Autoencoder
- URL: http://arxiv.org/abs/2404.01750v1
- Date: Tue, 2 Apr 2024 09:05:47 GMT
- Title: Exploring Latent Pathways: Enhancing the Interpretability of Autonomous Driving with a Variational Autoencoder
- Authors: Anass Bairouk, Mirjana Maras, Simon Herlin, Alexander Amini, Marc Blanchon, Ramin Hasani, Patrick Chareyre, Daniela Rus
- Abstract summary: A bio-inspired neural circuit policy model has emerged as an innovative control module. We take a leap forward by integrating a variational autoencoder with the neural circuit policy controller. In addition to the architectural shift toward a variational autoencoder, this study introduces the automatic latent perturbation tool.
- Score: 79.70947339175572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving presents a complex challenge, which is usually addressed with artificial intelligence models that are end-to-end or modular in nature. Within the landscape of modular approaches, a bio-inspired neural circuit policy model has emerged as an innovative control module, offering a compact and inherently interpretable system to infer a steering wheel command from abstract visual features. Here, we take a leap forward by integrating a variational autoencoder with the neural circuit policy controller, forming a solution that directly generates steering commands from input camera images. By substituting the traditional convolutional neural network approach to feature extraction with a variational autoencoder, we enhance the system's interpretability, enabling a more transparent and understandable decision-making process. In addition to the architectural shift toward a variational autoencoder, this study introduces the automatic latent perturbation tool, a novel contribution designed to probe and elucidate the latent features within the variational autoencoder. The automatic latent perturbation tool automates the interpretability process, offering granular insights into how specific latent variables influence the overall model's behavior. Through a series of numerical experiments, we demonstrate the interpretative power of the variational autoencoder-neural circuit policy model and the utility of the automatic latent perturbation tool in making the inner workings of autonomous driving systems more transparent.
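The abstract describes the automatic latent perturbation tool as probing how individual latent variables of the variational autoencoder influence the model's steering output. A minimal sketch of that idea, using toy linear stand-ins for the encoder and the neural circuit policy controller (all shapes, weights, and function names here are hypothetical, not the paper's actual implementation): perturb each latent dimension in turn and measure the resulting change in the steering command.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's components (hypothetical shapes/weights):
# an "encoder" mapping a flattened image to a latent mean, and a
# "controller" mapping the latent vector to a scalar steering command.
LATENT_DIM, IMAGE_DIM = 8, 64
W_enc = rng.normal(size=(LATENT_DIM, IMAGE_DIM))
w_ctrl = rng.normal(size=LATENT_DIM)

def encode(image_flat):
    """Return the latent mean for a flattened input image."""
    return W_enc @ image_flat

def steer(latent):
    """Map a latent vector to a steering command."""
    return float(w_ctrl @ latent)

def latent_perturbation_sensitivity(image_flat, delta=1.0):
    """Perturb each latent dimension by +/-delta and record how much the
    steering command changes (central-difference sensitivity)."""
    z = encode(image_flat)
    sensitivities = []
    for i in range(z.shape[0]):
        z_plus, z_minus = z.copy(), z.copy()
        z_plus[i] += delta
        z_minus[i] -= delta
        sensitivities.append(abs(steer(z_plus) - steer(z_minus)) / (2 * delta))
    return np.array(sensitivities)

image = rng.normal(size=IMAGE_DIM)
sens = latent_perturbation_sensitivity(image)
ranking = np.argsort(sens)[::-1]  # most influential latent dims first
print("sensitivity per latent dim:", np.round(sens, 3))
print("ranking:", ranking)
```

In this linear toy the sensitivity of dimension i reduces to |w_ctrl[i]|; in the paper's setting the same perturb-and-observe loop would run against the trained VAE and neural circuit policy, and the decoder could additionally be used to visualize what each perturbed latent encodes.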
Related papers
- Learning from Pattern Completion: Self-supervised Controllable Generation [31.694486524155593]
We propose a self-supervised controllable generation (SCG) framework, inspired by the neural mechanisms that may contribute to the brain's associative power.
Experimental results demonstrate that the proposed modular autoencoder effectively achieves functional specialization.
Our proposed approach not only demonstrates superior robustness in more challenging high-noise scenarios but also possesses more promising scalability potential due to its self-supervised manner.
arXiv Detail & Related papers (2024-09-27T12:28:47Z)
- Reason2Drive: Towards Interpretable and Chain-based Reasoning for Autonomous Driving [38.28159034562901]
Reason2Drive is a benchmark dataset with over 600K video-text pairs.
We characterize the autonomous driving process as a sequential combination of perception, prediction, and reasoning steps.
We introduce a novel aggregated evaluation metric to assess chain-based reasoning performance in autonomous systems.
arXiv Detail & Related papers (2023-12-06T18:32:33Z)
- Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models [114.69732301904419]
We present an approach to apply end-to-end open-set (any environment/scene) autonomous driving that is capable of providing driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z)
- Variational Autoencoding Neural Operators [17.812064311297117]
Unsupervised learning with functional data is an emerging paradigm of machine learning research with applications to computer vision, climate modeling and physical systems.
We present Variational Autoencoding Neural Operators (VANO), a general strategy for making a large class of operator learning architectures act as variational autoencoders.
arXiv Detail & Related papers (2023-02-20T22:34:43Z)
- On the Forward Invariance of Neural ODEs [92.07281135902922]
We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications.
Our approach uses a class of control barrier functions to transform output specifications into constraints on the parameters and inputs of the learning system.
arXiv Detail & Related papers (2022-10-10T15:18:28Z)
- Adaptation through prediction: multisensory active inference torque control [0.0]
We present a novel multisensory active inference torque controller for industrial arms.
Our controller, inspired by the predictive brain hypothesis, improves the capabilities of current active inference approaches.
arXiv Detail & Related papers (2021-12-13T16:03:18Z)
- Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference [68.8204255655161]
We propose a neural architecture comprising a generative model for sensory prediction and a distinct generative model for motor trajectories.
We highlight how sequences of sensory predictions can act as rails guiding learning, control and online adaptation of motor trajectories.
arXiv Detail & Related papers (2021-04-19T09:41:31Z)
- A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN [59.57221522897815]
We propose a neural network model based on trajectory information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving satisfying performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z)
- AutoBayes: Automated Bayesian Graph Exploration for Nuisance-Robust Inference [21.707911452679152]
We introduce an automated Bayesian inference framework, called AutoBayes, to optimize nuisance-invariant machine learning pipelines.
We demonstrate a significant performance improvement with ensemble learning across explored graphical models.
arXiv Detail & Related papers (2020-07-02T17:06:26Z)
- Improve Variational Autoencoder for Text Generation with Discrete Latent Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
VAEs tend to ignore latent variables when paired with a strong auto-regressive decoder.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.