Path Following and Stabilisation of a Bicycle Model using a Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2407.17156v1
- Date: Wed, 24 Jul 2024 10:54:23 GMT
- Title: Path Following and Stabilisation of a Bicycle Model using a Reinforcement Learning Approach
- Authors: Sebastian Weyrer, Peter Manzl, A. L. Schwab, Johannes Gerstmayr
- Abstract summary: This work introduces an RL approach to do path following with a virtual bicycle model while simultaneously stabilising it laterally.
The agent succeeds in both path following and stabilisation of the bicycle model exclusively by outputting steering angles.
The performance of the deployed agents is evaluated using different types of paths and measurements.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the years, complex control approaches have been developed to control the motion of a bicycle. Reinforcement Learning (RL), a branch of machine learning, promises easy deployment of so-called agents. Deployed agents are increasingly considered as an alternative to controllers for mechanical systems. The present work introduces an RL approach to do path following with a virtual bicycle model while simultaneously stabilising it laterally. The bicycle, modelled as the Whipple benchmark model and using multibody system dynamics, has no stabilisation aids. The agent succeeds in both path following and stabilisation of the bicycle model exclusively by outputting steering angles, which are converted into steering torques via a PD controller. Curriculum learning is applied as a state-of-the-art training strategy. Different settings for the implemented RL framework are investigated and compared to each other. The performance of the deployed agents is evaluated using different types of paths and measurements. The ability of the deployed agents to do path following and stabilisation of the bicycle model travelling between 2 m/s and 7 m/s along complex paths including full circles, slalom manoeuvres, and lane changes is demonstrated. Explanatory methods for machine learning are used to analyse the functionality of a deployed agent and link the introduced RL approach with research in the field of bicycle dynamics.
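As the abstract describes, the agent outputs steering angles that a PD controller converts into steering torques applied to the bicycle model. The sketch below illustrates that two-layer structure in Python; the controller gains, observation keys, and gym-style environment interface are illustrative assumptions, not the authors' implementation.

```python
class SteeringPDController:
    """Convert a commanded steering angle (the RL agent's action) into a
    steering torque. Plain PD law on the steering-angle error; the gains
    are placeholders, not values from the paper."""

    def __init__(self, kp: float = 10.0, kd: float = 1.0):
        self.kp = kp
        self.kd = kd

    def torque(self, delta_cmd: float, delta: float, delta_dot: float) -> float:
        # delta_cmd: steering angle commanded by the agent [rad]
        # delta:     current steering angle of the bicycle model [rad]
        # delta_dot: current steering rate [rad/s]
        return self.kp * (delta_cmd - delta) - self.kd * delta_dot


def rollout(env, policy, controller, n_steps: int = 1000):
    """Roll out a trained policy: the agent picks steering angles and the
    PD controller turns them into torques applied to the steering axis.
    The observation is assumed to expose the current steering angle and
    rate; this environment interface is hypothetical."""
    obs, _ = env.reset()
    for _ in range(n_steps):
        delta_cmd = policy(obs)  # agent output: desired steering angle
        tau = controller.torque(delta_cmd, obs["steer_angle"], obs["steer_rate"])
        obs, reward, terminated, truncated, _ = env.step(tau)
        if terminated or truncated:
            break
```

In this split, the RL policy only has to learn a kinematic quantity (the desired steering angle), while the low-level PD loop handles the torque-level actuation of the steering assembly.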
Related papers
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive Autonomous Vehicles using AutoDRIVE Ecosystem [1.1893676124374688]
We introduce AutoDRIVE Ecosystem as an enabler to develop physically accurate and graphically realistic digital twins of Nigel and F1TENTH.
We first investigate an intersection problem using a set of cooperative vehicles (Nigel) that share limited state information with each other in single as well as multi-agent learning settings.
We then investigate an adversarial head-to-head autonomous racing problem using a different set of vehicles (F1TENTH) in a multi-agent learning setting using an individual policy approach.
arXiv Detail & Related papers (2023-09-18T02:43:59Z)
- Tuning Path Tracking Controllers for Autonomous Cars Using Reinforcement Learning [0.0]
This paper proposes an adaptable path tracking control system based on Reinforcement Learning (RL) for autonomous cars.
The tuning of the tracker uses an educated Q-Learning algorithm to minimize the lateral and steering trajectory errors.
arXiv Detail & Related papers (2023-01-09T14:17:12Z)
- NeurIPS 2022 Competition: Driving SMARTS [60.948652154552136]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Eco-driving for Electric Connected Vehicles at Signalized Intersections: A Parameterized Reinforcement Learning approach [6.475252042082737]
This paper proposes an eco-driving framework for electric connected vehicles (CVs) based on reinforcement learning (RL).
We show that our strategy can significantly reduce energy consumption by learning proper action schemes without any interruption of other human-driven vehicles (HDVs).
arXiv Detail & Related papers (2022-06-24T04:11:28Z)
- Training and Evaluation of Deep Policies using Reinforcement Learning and Generative Models [67.78935378952146]
GenRL is a framework for solving sequential decision-making problems.
It exploits the combination of reinforcement learning and latent variable generative models.
We experimentally determine the characteristics of generative models that have most influence on the performance of the final policy training.
arXiv Detail & Related papers (2022-04-18T22:02:32Z)
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
- Dynamic Bicycle Dispatching of Dockless Public Bicycle-sharing Systems using Multi-objective Reinforcement Learning [79.61517670541863]
How to use AI to provide efficient bicycle dispatching solutions based on dynamic bicycle rental demand is an essential issue for dockless PBS (DL-PBS).
We propose a dynamic bicycle dispatching algorithm based on multi-objective reinforcement learning (MORL-BD) to provide the optimal bicycle dispatching solution for DL-PBS.
arXiv Detail & Related papers (2021-01-19T03:09:51Z)
- RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control [6.669503016190925]
We present a unified model-based and data-driven approach for quadrupedal planning and control.
We map sensory information and desired base velocity commands into footstep plans using a reinforcement learning policy.
We train and evaluate our framework on a complex quadrupedal system, ANYmal B, and demonstrate transferability to a larger and heavier robot, ANYmal C, without requiring retraining.
arXiv Detail & Related papers (2020-12-05T18:30:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.