Transfer Learning Study of Motion Transformer-based Trajectory Predictions
- URL: http://arxiv.org/abs/2404.08271v3
- Date: Wed, 7 Aug 2024 08:00:43 GMT
- Title: Transfer Learning Study of Motion Transformer-based Trajectory Predictions
- Authors: Lars Ullrich, Alex McMaster, Knut Graichen
- Abstract summary: Trajectory planning in autonomous driving is highly dependent on predicting the emergent behavior of other road users.
Learning-based methods are currently showing impressive results in simulation-based challenges.
The study aims to provide insights into possible trade-offs between computational time and performance to support effective transfers into the real world.
- Score: 1.2972104025246092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory planning in autonomous driving is highly dependent on predicting the emergent behavior of other road users. Learning-based methods are currently showing impressive results in simulation-based challenges, with transformer-based architectures technologically leading the way. Ultimately, however, predictions are needed in the real world. In addition to the shifts from simulation to the real world, many vehicle- and country-specific shifts, i.e. differences in sensor systems, fusion and perception algorithms as well as traffic rules and laws, are on the agenda. Since models that can cover all system setups and design domains at once are not yet foreseeable, model adaptation plays a central role. Therefore, a simulation-based study on transfer learning techniques is conducted on basis of a transformer-based model. Furthermore, the study aims to provide insights into possible trade-offs between computational time and performance to support effective transfers into the real world.
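The transfer learning techniques such a study compares (e.g., fine-tuning all weights versus adapting only part of the network) can be sketched generically. The sketch below is a minimal illustration of that compute/performance trade-off, not the authors' actual Motion Transformer implementation; the model, layer sizes, and names are hypothetical stand-ins.

```python
# Sketch of two common transfer-learning strategies for a pretrained
# trajectory predictor: full fine-tuning vs. head-only adaptation.
# All model details here are illustrative assumptions.
import torch
import torch.nn as nn

class TinyTrajectoryPredictor(nn.Module):
    """Stand-in for a transformer-based motion prediction model."""
    def __init__(self, d_model=32, horizon=12):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Predict (x, y) offsets for each future timestep.
        self.head = nn.Linear(d_model, 2 * horizon)

    def forward(self, x):  # x: (batch, past_timesteps, d_model)
        return self.head(self.encoder(x).mean(dim=1))

model = TinyTrajectoryPredictor()

# Strategy 1: full fine-tuning -- every parameter stays trainable.
full_params = sum(p.numel() for p in model.parameters() if p.requires_grad)

# Strategy 2: freeze the encoder and adapt only the prediction head,
# trading some adaptation capacity for much cheaper training steps.
for p in model.encoder.parameters():
    p.requires_grad = False
head_params = sum(p.numel() for p in model.parameters() if p.requires_grad)

print(full_params, head_params)
```

Which strategy wins depends on how far the target domain (sensor setup, traffic rules) has shifted from the source domain, which is precisely the trade-off the study investigates.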
Related papers
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then, train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z) - Prompt to Transfer: Sim-to-Real Transfer for Traffic Signal Control with Prompt Learning [4.195122359359966]
Large Language Models (LLMs) are trained on massive amounts of knowledge and have demonstrated impressive inference abilities.
In this work, we leverage LLMs to understand and profile the system dynamics by a prompt-based grounded action transformation.
arXiv Detail & Related papers (2023-08-28T03:49:13Z) - TransWorldNG: Traffic Simulation via Foundation Model [23.16553424318004]
We present TransWorldNG, a traffic simulator that uses data-driven algorithms and graph computing techniques to learn traffic dynamics from real data.
The results demonstrate that TransWorldNG can generate more realistic traffic patterns compared to traditional simulators.
arXiv Detail & Related papers (2023-05-25T05:49:30Z) - TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free deep reinforcement learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z) - Objective-aware Traffic Simulation via Inverse Reinforcement Learning [31.26257563160961]
We formulate traffic simulation as an inverse reinforcement learning problem.
We propose a parameter sharing adversarial inverse reinforcement learning model for dynamics-robust simulation learning.
Our proposed model is able to imitate a vehicle's trajectories in the real world while simultaneously recovering the reward function.
arXiv Detail & Related papers (2021-05-20T07:26:34Z) - Learning to Simulate on Sparse Trajectory Data [26.718807213824853]
We present a novel framework ImInGAIL to address the problem of learning to simulate the driving behavior from sparse real-world data.
To the best of our knowledge, we are the first to tackle the data sparsity issue for behavior learning problems.
arXiv Detail & Related papers (2021-03-22T13:42:11Z) - Improving Generalization of Transfer Learning Across Domains Using Spatio-Temporal Features in Autonomous Driving [45.655433907239804]
Vehicle simulation can be used to learn in the virtual world, and the acquired skills can be transferred to handle real-world scenarios.
Spatio-temporal visual features are intuitively crucial for human decision making during driving.
We propose a CNN+LSTM transfer learning framework to extract the spatio-temporal features representing vehicle dynamics from scenes.
arXiv Detail & Related papers (2021-03-15T03:26:06Z) - Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) provides powerful tools for solving complex robotic tasks.
However, policies trained in simulation often do not work directly in the real world, a gap known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z) - From Simulation to Real World Maneuver Execution using Deep Reinforcement Learning [69.23334811890919]
Deep Reinforcement Learning has proved to be able to solve many control tasks in different fields, but the behavior of these systems is not always as expected when deployed in real-world scenarios.
This is mainly due to the lack of domain adaptation between simulated and real-world data together with the absence of distinction between train and test datasets.
We present a system based on multiple environments in which agents are trained simultaneously, evaluating the behavior of the model in different scenarios.
arXiv Detail & Related papers (2020-05-13T14:22:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.