Sense, Imagine, Act: Multimodal Perception Improves Model-Based
Reinforcement Learning for Head-to-Head Autonomous Racing
- URL: http://arxiv.org/abs/2305.04750v1
- Date: Mon, 8 May 2023 14:49:02 GMT
- Title: Sense, Imagine, Act: Multimodal Perception Improves Model-Based
Reinforcement Learning for Head-to-Head Autonomous Racing
- Authors: Elena Shrestha, Chetan Reddy, Hanxi Wan, Yulun Zhuang, and Ram
Vasudevan
- Abstract summary: Model-based reinforcement learning (MBRL) techniques have recently yielded promising results for real-world autonomous racing.
This paper proposes a self-supervised sensor fusion technique that combines egocentric LiDAR and RGB camera observations collected from the F1TENTH Gym.
The resulting Dreamer agent safely avoided collisions and won the most races compared to other tested baselines in zero-shot head-to-head autonomous racing.
- Score: 10.309579267966361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model-based reinforcement learning (MBRL) techniques have recently yielded
promising results for real-world autonomous racing using high-dimensional
observations. MBRL agents, such as Dreamer, solve long-horizon tasks by
building a world model and planning actions by latent imagination. This
approach involves explicitly learning a model of the system dynamics and using
it to learn the optimal policy for continuous control over multiple timesteps.
As a result, MBRL agents may converge to sub-optimal policies if the world
model is inaccurate. To improve state estimation for autonomous racing, this
paper proposes a self-supervised sensor fusion technique that combines
egocentric LiDAR and RGB camera observations collected from the F1TENTH Gym.
The zero-shot performance of MBRL agents is empirically evaluated on unseen
tracks and against a dynamic obstacle. This paper illustrates that multimodal
perception improves robustness of the world model without requiring additional
training data. The resulting multimodal Dreamer agent safely avoided collisions
and won the most races compared to other tested baselines in zero-shot
head-to-head autonomous racing.
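The fusion idea described above can be illustrated with a minimal late-fusion sketch: each modality is encoded separately and the embeddings are concatenated into a single latent observation for the world model. The encoders, dimensions, and sensor shapes below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_lidar(scan, out_dim=32):
    """Toy linear encoder for a 1D LiDAR range scan."""
    w = rng.standard_normal((out_dim, scan.size)) / np.sqrt(scan.size)
    return np.tanh(w @ scan)

def encode_rgb(image, out_dim=32):
    """Toy linear encoder for a flattened RGB image."""
    flat = image.reshape(-1)
    w = rng.standard_normal((out_dim, flat.size)) / np.sqrt(flat.size)
    return np.tanh(w @ flat)

def fuse(scan, image):
    """Concatenate per-modality embeddings into one fused latent."""
    return np.concatenate([encode_lidar(scan), encode_rgb(image)])

scan = rng.uniform(0.0, 10.0, size=1080)         # e.g. a 1080-beam LiDAR scan
image = rng.uniform(0.0, 1.0, size=(32, 32, 3))  # downsampled RGB frame
z = fuse(scan, image)
print(z.shape)  # (64,)
```

In practice the encoders would be learned convolutional networks trained with a self-supervised reconstruction objective, but the fused latent plays the same role: a single observation embedding fed to the world model.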
Related papers
- Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models [60.87795376541144]
A world model is a neural network capable of predicting an agent's next state given past states and actions.
During end-to-end training, our policy learns how to recover from errors by aligning with states observed in human demonstrations.
We present qualitative and quantitative results, demonstrating significant improvements upon prior state of the art in closed-loop testing.
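The world-model definition above, predicting the next state from past states and actions, can be sketched as a minimal deterministic dynamics model; the linear form and all names here are illustrative, not any paper's actual model.

```python
import numpy as np

class LinearWorldModel:
    """Toy one-step dynamics model: s' = A s + B a."""

    def __init__(self, state_dim, action_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((state_dim, state_dim)) * 0.1
        self.B = rng.standard_normal((state_dim, action_dim)) * 0.1

    def predict(self, state, action):
        """Predict the next latent state from state and action."""
        return self.A @ state + self.B @ action

    def rollout(self, state, actions):
        """'Imagine' a trajectory by feeding predictions back in."""
        traj = []
        for a in actions:
            state = self.predict(state, a)
            traj.append(state)
        return np.stack(traj)

wm = LinearWorldModel(state_dim=8, action_dim=2)
traj = wm.rollout(np.zeros(8), [np.ones(2)] * 5)
print(traj.shape)  # (5, 8)
```

Agents such as Dreamer replace the linear map with a learned recurrent latent dynamics model, but planning by latent imagination follows this same rollout pattern.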
arXiv Detail & Related papers (2024-09-25T06:48:25Z)
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]

We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various CF events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
arXiv Detail & Related papers (2023-09-30T11:38:13Z)
- HarmonyDream: Task Harmonization Inside World Models [93.07314830304193]
Model-based reinforcement learning (MBRL) holds the promise of sample-efficient learning.
We propose a simple yet effective approach, HarmonyDream, which automatically adjusts loss coefficients to maintain task harmonization.
arXiv Detail & Related papers (2023-09-30T11:38:13Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the expected agent's performance by selecting promising trajectories solving prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- Formulation and validation of a car-following model based on deep reinforcement learning [0.0]
We propose and validate a novel car following model based on deep reinforcement learning.
Our model is trained to maximize externally given reward functions for the free and car-following regimes.
The parameters of these reward functions resemble that of traditional models such as the Intelligent Driver Model.
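The Intelligent Driver Model mentioned above has a standard closed form for the acceleration of a following vehicle; the sketch below implements it with illustrative default parameters (desired speed, headway, and comfort limits are assumptions, not values from any of the papers listed here).

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=30.0,    # desired speed [m/s]
                     T=1.5,      # desired time headway [s]
                     a_max=1.0,  # maximum acceleration [m/s^2]
                     b=2.0,      # comfortable deceleration [m/s^2]
                     s0=2.0,     # minimum standstill gap [m]
                     delta=4.0):
    """IDM acceleration: a = a_max * (1 - (v/v0)^delta - (s*/gap)^2),
    with desired gap s* = s0 + v*T + v*dv / (2*sqrt(a_max*b)),
    where dv is the closing speed to the leader."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Free road (huge gap, no closing speed): accelerate toward v0.
print(idm_acceleration(v=20.0, gap=1e6, dv=0.0) > 0)  # True
# Closing fast on a near leader: brake.
print(idm_acceleration(v=20.0, gap=10.0, dv=5.0) < 0)  # True
```

A reward function "resembling" IDM, as described above, would reward accelerations close to this model's output in both the free-driving and car-following regimes.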
arXiv Detail & Related papers (2021-09-29T08:27:12Z)
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
The deep imitative reinforcement learning approach (DIRL) achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- Formula RL: Deep Reinforcement Learning for Autonomous Racing using Telemetry Data [4.042350304426975]
We frame the problem as a reinforcement learning task with a multidimensional input consisting of the vehicle telemetry, and a continuous action space.
We put 10 variants of deep deterministic policy gradient (DDPG) to race in two experiments.
Our studies show that models trained with RL are not only able to drive faster than the baseline open source handcrafted bots but also generalize to unknown tracks.
arXiv Detail & Related papers (2021-04-22T14:40:12Z)
- Model-based versus Model-free Deep Reinforcement Learning for Autonomous Racing Cars [46.64253693115981]
This paper investigates how model-based deep reinforcement learning agents generalize to real-world autonomous vehicle control tasks.
We show that model-based agents capable of learning in imagination, substantially outperform model-free agents with respect to performance, sample efficiency, successful task completion, and generalization.
arXiv Detail & Related papers (2021-03-08T17:15:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.