Drivetrain simulation using variational autoencoders
- URL: http://arxiv.org/abs/2501.17653v2
- Date: Sun, 29 Jun 2025 14:26:51 GMT
- Title: Drivetrain simulation using variational autoencoders
- Authors: Pallavi Sharma, Jorge-Humberto Urrea-Quintero, Bogdan Bogdan, Adrian-Dumitru Ciotec, Laura Vasilie, Henning Wessels, Matteo Skull
- Abstract summary: This work proposes variational autoencoders (VAEs) to predict a vehicle's jerk signals from torque demand. We implement both unconditional and conditional VAEs, trained on experimental data from two variants of a fully electric SUV.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work proposes variational autoencoders (VAEs) to predict a vehicle's jerk signals from torque demand in the context of limited real-world drivetrain datasets. We implement both unconditional and conditional VAEs, trained on experimental data from two variants of a fully electric SUV with differing torque and drivetrain configurations. The VAEs synthesize jerk signals that capture characteristics from multiple drivetrain scenarios by leveraging the learned latent space. A performance comparison with baseline physics-based and hybrid models confirms the effectiveness of the VAEs, without requiring detailed system parametrization. Unconditional VAEs generate realistic jerk signals without prior system knowledge, while conditional VAEs enable the generation of signals tailored to specific torque inputs. This approach reduces the dependence on costly and time-intensive real-world experiments and extensive manual modeling. The results support the integration of generative models such as VAEs into drivetrain simulation pipelines, both for data augmentation and for efficient exploration of complex operational scenarios, with the potential to streamline validation and accelerate vehicle development.
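As a concrete illustration of the approach described in the abstract, here is a minimal sketch of a conditional VAE that maps a torque-demand window to a jerk-signal window. The window length, layer sizes, latent dimensionality, and loss weighting below are placeholder assumptions, not the paper's configuration.

```python
# Minimal sketch of a conditional VAE for jerk-signal generation, assuming
# fixed-length windows of jerk and torque-demand samples. Window length,
# layer sizes, latent dimension, and the KL weight are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

SIGNAL_LEN = 256   # samples per jerk/torque window (assumed)
LATENT_DIM = 16    # latent-space dimensionality (assumed)


class ConditionalVAE(nn.Module):
    def __init__(self, signal_len=SIGNAL_LEN, latent_dim=LATENT_DIM):
        super().__init__()
        self.latent_dim = latent_dim
        # The encoder sees the jerk signal concatenated with its torque condition.
        self.encoder = nn.Sequential(
            nn.Linear(2 * signal_len, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        # The decoder maps (latent code, torque condition) back to a jerk signal.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + signal_len, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, signal_len),
        )

    def forward(self, jerk, torque):
        h = self.encoder(torch.cat([jerk, torque], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        recon = self.decoder(torch.cat([z, torque], dim=-1))
        return recon, mu, logvar

    def sample(self, torque):
        # Conditional generation: sample the prior and decode with a torque profile.
        z = torch.randn(torque.shape[0], self.latent_dim)
        return self.decoder(torch.cat([z, torque], dim=-1))


def vae_loss(recon, jerk, mu, logvar, beta=1.0):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, jerk, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl


if __name__ == "__main__":
    model = ConditionalVAE()
    jerk = torch.randn(8, SIGNAL_LEN)    # placeholder batch of jerk windows
    torque = torch.randn(8, SIGNAL_LEN)  # matching torque-demand windows
    recon, mu, logvar = model(jerk, torque)
    print(vae_loss(recon, jerk, mu, logvar).item())
```

Sampling the latent code from the prior and decoding it together with a torque profile corresponds to the conditional generation described above; dropping the torque input from both networks recovers the unconditional variant.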
Related papers
- InfGen: Scenario Generation as Next Token Group Prediction [49.54222089551598]
InfGen is a scenario generation framework that outputs agent states and trajectories in an autoregressive manner. Experiments demonstrate that InfGen produces realistic, diverse, and adaptive traffic behaviors.
arXiv Detail & Related papers (2025-06-29T16:18:32Z)
- Unraveling the Effects of Synthetic Data on End-to-End Autonomous Driving [35.49042205415498]
We introduce SceneCrafter, a realistic, interactive, and efficient autonomous driving simulator based on 3D Gaussian Splatting (3DGS). SceneCrafter efficiently generates realistic driving logs across diverse traffic scenarios. It also enables robust closed-loop evaluation of end-to-end models.
arXiv Detail & Related papers (2025-03-23T15:27:43Z)
- Gradient-based Trajectory Optimization with Parallelized Differentiable Traffic Simulation [24.95575815501035]
We present a parallelized differentiable traffic simulator based on the Intelligent Driver Model (IDM). Our simulator efficiently models vehicle motion, generating trajectories that can be supervised to fit real-world data. We show that we can use the simulator to filter noise in the input trajectories (trajectory filtering), reconstruct dense trajectories from sparse ones (trajectory reconstruction), and predict future trajectories.
arXiv Detail & Related papers (2024-12-21T19:53:38Z)
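For background on the entry above: the Intelligent Driver Model is a standard car-following law, and making it differentiable is what allows trajectories to be fitted by gradient descent. The sketch below is a generic, vectorized PyTorch formulation under assumed parameter values, not the cited simulator's implementation.

```python
# Hedged sketch: a vectorized, differentiable Intelligent Driver Model step in
# PyTorch, illustrating the general idea behind a parallelized differentiable
# traffic simulator. Parameter values and the integration scheme are
# illustrative defaults, not taken from the cited paper.
import torch


def idm_acceleration(v, gap, dv,
                     v0=30.0,     # desired speed [m/s] (assumed)
                     T=1.5,       # desired time headway [s] (assumed)
                     a_max=1.5,   # maximum acceleration [m/s^2] (assumed)
                     b=2.0,       # comfortable deceleration [m/s^2] (assumed)
                     s0=2.0,      # minimum gap [m] (assumed)
                     delta=4.0):
    """IDM acceleration for a batch of vehicles.

    v: own speed, gap: bumper-to-bumper gap to the leader,
    dv: approach rate (own speed minus leader speed); all 1-D tensors.
    """
    s_star = s0 + torch.clamp(v * T + v * dv / (2.0 * (a_max * b) ** 0.5), min=0.0)
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)


def rollout(v, x, x_lead, v_lead, dt=0.1, steps=50):
    """Semi-implicit Euler rollout with the leader state held fixed for
    simplicity. Every operation is differentiable, so IDM parameters could be
    fitted to observed trajectories by gradient descent."""
    positions = []
    for _ in range(steps):
        a = idm_acceleration(v, x_lead - x, v - v_lead)
        v = torch.clamp(v + a * dt, min=0.0)
        x = x + v * dt
        positions.append(x)
    return torch.stack(positions)
```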
- FollowGen: A Scaled Noise Conditional Diffusion Model for Car-Following Trajectory Prediction [9.2729178775419]
This study introduces a scaled noise conditional diffusion model for car-following trajectory prediction.
It integrates detailed inter-vehicular interactions and car-following dynamics into a generative framework, improving the accuracy and plausibility of predicted trajectories.
Experimental results on diverse real-world driving scenarios demonstrate the state-of-the-art performance and robustness of the proposed method.
arXiv Detail & Related papers (2024-11-23T23:13:45Z)
- Crossfusor: A Cross-Attention Transformer Enhanced Conditional Diffusion Model for Car-Following Trajectory Prediction [10.814758830775727]
This study introduces a Cross-Attention Transformer Enhanced Diffusion Model (Crossfusor) specifically designed for car-following trajectory prediction.
It integrates detailed inter-vehicular interactions and car-following dynamics into a robust diffusion framework, improving both the accuracy and realism of predicted trajectories.
Experimental results on the NGSIM dataset demonstrate that Crossfusor outperforms state-of-the-art models, particularly in long-term predictions.
arXiv Detail & Related papers (2024-06-17T17:35:47Z)
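The two car-following entries above (FollowGen and Crossfusor) both build on conditional denoising diffusion. Below is a hedged, textbook-level sketch of a DDPM-style training step for a future trajectory conditioned on an encoded driving history; the history encoder, horizon, and network sizes are placeholder assumptions rather than either paper's architecture.

```python
# Generic sketch of a conditional denoising diffusion training step
# (epsilon prediction) for trajectory data. Not the FollowGen/Crossfusor
# architectures; shapes and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

T_STEPS = 100
HORIZON = 50          # future trajectory length (assumed)
HIST = 20             # observed history length (assumed)

betas = torch.linspace(1e-4, 0.02, T_STEPS)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)


class EpsNet(nn.Module):
    """Predicts the noise added to a future trajectory, given the noisy
    trajectory, the diffusion step, and an encoding of the observed history."""
    def __init__(self):
        super().__init__()
        self.hist_enc = nn.GRU(2, 32, batch_first=True)   # (position, speed) history
        self.net = nn.Sequential(
            nn.Linear(HORIZON + 32 + 1, 128), nn.ReLU(),
            nn.Linear(128, HORIZON),
        )

    def forward(self, x_t, t, history):
        _, h = self.hist_enc(history)
        cond = h[-1]
        t_emb = (t.float() / T_STEPS).unsqueeze(-1)
        return self.net(torch.cat([x_t, cond, t_emb], dim=-1))


def diffusion_loss(model, x0, history):
    t = torch.randint(0, T_STEPS, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps        # forward noising
    return F.mse_loss(model(x_t, t, history), eps)       # epsilon-prediction loss


if __name__ == "__main__":
    model = EpsNet()
    x0 = torch.randn(8, HORIZON)      # placeholder future trajectory batch
    hist = torch.randn(8, HIST, 2)    # placeholder observed histories
    print(diffusion_loss(model, x0, hist).item())
```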
- An Approach to Systematic Data Acquisition and Data-Driven Simulation for the Safety Testing of Automated Driving Functions [32.37902846268263]
In R&D areas related to the safety impact of the "open world", there is a significant shortage of real-world data to parameterize and/or validate simulations.
We present an approach to systematically acquire data in public traffic by heterogeneous means, transform it into a unified representation, and use it to automatically parameterize traffic behavior models for use in data-driven virtual validation of automated driving functions.
arXiv Detail & Related papers (2024-05-02T23:24:27Z)
- Purpose in the Machine: Do Traffic Simulators Produce Distributionally Equivalent Outcomes for Reinforcement Learning Applications? [35.719833726363085]
This work focuses on two simulators commonly used to train reinforcement learning (RL) agents for traffic applications, CityFlow and SUMO.
A controlled virtual experiment varying driver behavior and simulation scale finds evidence against distributional equivalence in RL-relevant measures from these simulators.
While granular real-world validation generally remains infeasible, these findings suggest that traffic simulators are not a deus ex machina for RL training.
arXiv Detail & Related papers (2023-11-14T01:05:14Z)
- Optimizing Non-Autoregressive Transformers with Contrastive Learning [74.46714706658517]
Non-autoregressive Transformers (NATs) reduce the inference latency of Autoregressive Transformers (ATs) by predicting words all at once rather than in sequential order.
In this paper, we propose to ease the difficulty of modality learning via sampling from the model distribution instead of the data distribution.
arXiv Detail & Related papers (2023-05-23T04:20:13Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- Recurrence Boosts Diversity! Revisiting Recurrent Latent Variable in Transformer-Based Variational AutoEncoder for Diverse Text Generation [85.5379146125199]
Variational Auto-Encoder (VAE) has been widely adopted in text generation.
We propose TRACE, a Transformer-based recurrent VAE structure.
arXiv Detail & Related papers (2022-10-22T10:25:35Z)
- Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition [62.83832841523525]
We propose a fast and accurate parallel transformer, termed Paraformer.
It accurately predicts the number of output tokens and extracts hidden variables.
It can attain comparable performance to the state-of-the-art AR transformer, with more than 10x speedup.
arXiv Detail & Related papers (2022-06-16T17:24:14Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Deep Transformer Networks for Time Series Classification: The NPP Safety Case [59.20947681019466]
An advanced temporal neural network referred to as the Transformer is used in a supervised learning fashion to model the time-dependent NPP simulation data.
The Transformer can learn the characteristics of the sequential data and yield promising performance with approximately 99% classification accuracy on the testing dataset.
arXiv Detail & Related papers (2021-04-09T14:26:25Z)
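As background for the NPP safety-case entry above, the following is a minimal sketch of a Transformer encoder applied to supervised time-series classification. Feature count, number of classes, sequence length, and hyperparameters are placeholders, not the cited paper's setup.

```python
# Hedged sketch of a Transformer encoder for supervised time-series
# classification. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn


class TimeSeriesTransformer(nn.Module):
    def __init__(self, n_features=10, n_classes=4, d_model=64,
                 n_layers=2, max_len=500):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, time, n_features) -> class logits (batch, n_classes)
        h = self.input_proj(x) + self.pos[:, : x.size(1)]
        h = self.encoder(h)
        return self.head(h.mean(dim=1))  # mean-pool over time


if __name__ == "__main__":
    model = TimeSeriesTransformer()
    logits = model(torch.randn(8, 200, 10))  # placeholder batch of sequences
    print(logits.shape)                      # torch.Size([8, 4])
```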
- Variational Autoencoder-Based Vehicle Trajectory Prediction with an Interpretable Latent Space [0.0]
This paper introduces the Descriptive Variational Autoencoder (DVAE), an unsupervised and end-to-end trainable neural network for predicting vehicle trajectories.
The proposed model provides comparable prediction accuracy, with the added advantage of an interpretable latent space.
arXiv Detail & Related papers (2021-03-25T10:15:53Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Multi-intersection Traffic Optimisation: A Benchmark Dataset and a Strong Baseline [85.9210953301628]
Control of traffic signals is fundamental and critical to alleviate traffic congestion in urban areas.
Because of the high complexity of modelling the problem, experimental settings of current works are often inconsistent.
We propose a novel and strong baseline model based on deep reinforcement learning with the encoder-decoder structure.
arXiv Detail & Related papers (2021-01-24T03:55:39Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Self-awareness in intelligent vehicles: Feature based dynamic Bayesian models for abnormality detection [4.251384905163326]
This paper aims to introduce a novel method to develop self-awareness in autonomous vehicles.
Time-series data from the vehicles are used to develop the data-driven Dynamic Bayesian Network (DBN) models.
An initial level collective awareness model that can perform joint anomaly detection in co-operative tasks is proposed.
arXiv Detail & Related papers (2020-10-29T09:29:47Z)
- Simple and Effective VAE Training with Calibrated Decoders [123.08908889310258]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions.
We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution.
We propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically.
arXiv Detail & Related papers (2020-06-23T17:57:47Z)
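The calibrated-decoders entry above concerns learning the observation noise of the Gaussian decoder instead of hand-tuning a weighting factor. Here is a minimal sketch in that spirit, using a single scalar variance computed analytically as the mean squared error (a common sigma-VAE-style formulation, not necessarily the paper's exact recipe).

```python
# Hedged sketch of a calibrated Gaussian decoder loss: the decoder outputs only
# the mean, and a shared scalar observation variance is plugged in analytically
# as the MSE. A generic reconstruction of the idea, not the paper's code.
import torch


def calibrated_gaussian_nll(x, x_mean):
    """Negative log-likelihood of x under N(x_mean, sigma^2 I) with the
    maximum-likelihood sigma^2 inserted analytically (sigma^2 = MSE)."""
    mse = torch.mean((x - x_mean) ** 2)
    sigma2 = mse.clamp(min=1e-6)           # analytic optimal variance
    d = x[0].numel()                       # dimensionality of one sample
    # Per-sample NLL up to the additive constant 0.5 * d * log(2 * pi).
    return 0.5 * d * (torch.log(sigma2) + mse / sigma2)


def elbo_loss(x, x_mean, mu, logvar):
    # Standard VAE KL term plus the calibrated reconstruction term;
    # note there is no hand-tuned beta weighting the two.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    return calibrated_gaussian_nll(x, x_mean) + kl
```

With the variance at its maximum-likelihood value, the reconstruction term reduces to 0.5 * d * (log MSE + 1), so the balance between reconstruction and KL adapts to the data rather than being hand-tuned.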
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.