Control of rough terrain vehicles using deep reinforcement learning
- URL: http://arxiv.org/abs/2107.01867v1
- Date: Mon, 5 Jul 2021 08:43:05 GMT
- Title: Control of rough terrain vehicles using deep reinforcement learning
- Authors: Viktor Wiberg, Erik Wallin, Martin Servin, Tomas Nordfjell
- Abstract summary: This letter presents a controller that perceives, plans, and successfully controls a 16-tonne forestry vehicle.
The carefully shaped reward signal promotes safe, environmentally sound, and efficient driving.
We test learned skills in a virtual environment, including terrains reconstructed from high-density laser scans of forest sites.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the potential to control terrain vehicles using deep reinforcement
learning in scenarios where human operators and traditional control methods are
inadequate. This letter presents a controller that perceives, plans, and
successfully controls a 16-tonne forestry vehicle with two frame articulation
joints, six wheels, and their actively articulated suspensions to traverse
rough terrain. The carefully shaped reward signal promotes safe, environmentally
sound, and efficient driving, which leads to the emergence of unprecedented driving
skills. We test learned skills in a virtual environment, including terrains
reconstructed from high-density laser scans of forest sites. The controller
displays the ability to handle obstructing obstacles, slopes up to 27$^\circ$,
and a variety of natural terrains, all with limited wheel slip and smooth,
upright traversal with intelligent use of the active suspensions. The results
confirm that deep reinforcement learning has the potential to enhance control
of vehicles with complex dynamics and high-dimensional observation data
compared to human operators or traditional control methods, especially in rough
terrain.
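The shaped reward described in the abstract can be sketched as a weighted sum of terms. The specific terms, names, and weights below are illustrative assumptions for exposition; the paper's actual reward signal is more detailed.

```python
# Hypothetical shaped reward for rough-terrain traversal, combining a
# progress term with penalties for wheel slip and non-upright posture.
# Term names and weights are illustrative, not taken from the paper.

def shaped_reward(progress_m, wheel_slip, roll_rad, pitch_rad,
                  w_progress=1.0, w_slip=0.5, w_attitude=0.3):
    """Return a scalar reward for one control step.

    progress_m : forward progress toward the goal this step (metres)
    wheel_slip : mean relative slip of the six wheels, in [0, 1]
    roll_rad, pitch_rad : chassis attitude angles (radians)
    """
    r_progress = w_progress * progress_m          # rewards efficient driving
    r_slip = -w_slip * wheel_slip                 # penalises slip / ground damage
    r_attitude = -w_attitude * (roll_rad ** 2 + pitch_rad ** 2)  # stay upright
    return r_progress + r_slip + r_attitude
```

A reward of this shape trades off speed against ground disturbance and tipping risk, which is how "safe, environmentally sound, and efficient" behaviour could emerge from a single scalar signal.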
Related papers
- WROOM: An Autonomous Driving Approach for Off-Road Navigation [17.74237088460657]
We design an end-to-end reinforcement learning (RL) system for an autonomous vehicle in off-road environments.
We warm-start the agent by imitating a rule-based controller and utilize Proximal Policy Optimization (PPO) to improve the policy.
We propose a novel simulation environment to replicate off-road driving scenarios and deploy our proposed approach on a real buggy RC car.
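The warm-start idea above amounts to behaviour cloning an expert before RL fine-tuning. The toy sketch below clones a hypothetical rule-based steering controller into a linear policy via least squares; WROOM itself uses a neural policy trained with PPO, so everything here (the expert, the features, the fit) is an illustrative assumption.

```python
import numpy as np

# Illustrative warm-start: clone a rule-based steering controller into a
# linear policy before RL fine-tuning. The expert, features, and the
# least-squares fit are assumptions; WROOM uses a neural policy and PPO.

def rule_based_steer(obs):
    # Toy expert: steer proportionally to heading error (obs[0]),
    # clipped to the actuator range [-1, 1].
    return float(np.clip(-2.0 * obs[0], -1.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.uniform(-0.3, 0.3, size=(500, 3))   # heading error + 2 extra features
actions = np.array([rule_based_steer(o) for o in obs])

# Behaviour cloning as least squares: find weights w with obs @ w ~= actions.
w, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def cloned_policy(o):
    return float(o @ w)
```

PPO would then start from the cloned policy rather than from random initialisation, which avoids the long exploration phase of learning to drive from scratch.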
arXiv Detail & Related papers (2024-04-12T23:55:59Z)
- You Only Crash Once: Improved Object Detection for Real-Time, Sim-to-Real Hazardous Terrain Detection and Classification for Autonomous Planetary Landings [7.201292864036088]
A cheap and effective way of detecting hazardous terrain is through the use of visual cameras.
Traditional techniques for visual hazardous terrain detection focus on template matching and registration to pre-built hazard maps.
We introduce You Only Crash Once (YOCO), a deep learning-based visual hazardous terrain detection and classification technique.
arXiv Detail & Related papers (2023-03-08T21:11:51Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- Learning Coordinated Terrain-Adaptive Locomotion by Imitating a Centroidal Dynamics Planner [27.476911967228926]
Reinforcement Learning (RL) can learn dynamic reactive controllers but requires carefully tuned shaping rewards to produce good gaits.
Imitation learning circumvents this problem and has been used with motion capture data to extract quadruped gaits for flat terrains.
We show that the learned policies transfer to unseen terrains and can be fine-tuned to dynamically traverse challenging terrains.
arXiv Detail & Related papers (2021-10-30T14:24:39Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
- Learning Quadrupedal Locomotion over Challenging Terrain [68.51539602703662]
Legged locomotion can dramatically expand the operational domains of robotics.
Conventional controllers for legged locomotion are based on elaborate state machines that explicitly trigger the execution of motion primitives and reflexes.
Here we present a radically robust controller for legged locomotion in challenging natural environments.
arXiv Detail & Related papers (2020-10-21T19:11:20Z)
- High-speed Autonomous Drifting with Deep Reinforcement Learning [15.766089739894207]
We propose a robust drift controller without explicit motion equations.
Our controller is capable of making the vehicle drift through various sharp corners quickly and stably in the unseen map.
arXiv Detail & Related papers (2020-01-06T03:05:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.