Learning to Control Direct Current Motor for Steering in Real Time via Reinforcement Learning
- URL: http://arxiv.org/abs/2108.00138v1
- Date: Sat, 31 Jul 2021 03:24:36 GMT
- Title: Learning to Control Direct Current Motor for Steering in Real Time via Reinforcement Learning
- Authors: Thomas Watson, Bibek Poudel
- Abstract summary: We make use of the NFQ algorithm for steering position control of a golf cart, both on real hardware and in a simulated environment.
We were able to raise the rate of successful control within four minutes of training in simulation and within 11 minutes on real hardware.
- Score: 2.3554584457413483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model-free techniques have been successful at optimal control of complex systems, at the expense of copious amounts of data and computation. However, it is often desired to obtain a control policy in a short period of time with minimal data use and computational burden. To this end, we make use of the NFQ algorithm for steering position control of a golf cart, both on real hardware and in a simulated environment built from real-world interaction. The controller learns to apply a sequence of voltage signals in the presence of environmental uncertainties and inherent non-linearities that challenge the control task. We were able to raise the rate of successful control within four minutes of training in simulation and within 11 minutes on real hardware.
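Since the paper's contribution hinges on NFQ's data efficiency, a minimal sketch of the batch-mode NFQ loop may help. Everything concrete below (state encoding, discrete voltage set, network size, cost convention) is an illustrative assumption, not the paper's exact configuration:
```python
# Sketch of batch-mode Neural Fitted Q-iteration (NFQ) for steering control.
import torch
import torch.nn as nn

VOLTAGES = [-6.0, 0.0, 6.0]   # assumed discrete motor-voltage actions
GAMMA = 0.95

q_net = nn.Sequential(        # small MLP: (angle error, angular velocity, voltage) -> cost-to-go
    nn.Linear(3, 20), nn.ReLU(),
    nn.Linear(20, 20), nn.ReLU(),
    nn.Linear(20, 1),
)
opt = torch.optim.Rprop(q_net.parameters())  # classic NFQ refits the batch with Rprop

def q_values(states):
    """Q(s, a) for every discrete action; returns shape (batch, num_actions)."""
    cols = [torch.cat([states, torch.full((len(states), 1), v)], dim=1)
            for v in VOLTAGES]
    return torch.cat([q_net(c) for c in cols], dim=1)

def nfq_iteration(transitions, epochs=200):
    """One NFQ step: freeze targets from the current Q, then refit the whole batch.
    transitions: list of (state, voltage, cost, next_state) float tensors."""
    s, a, c, s2 = (torch.stack(x) for x in zip(*transitions))
    with torch.no_grad():     # NFQ minimizes cost, hence the min over actions
        target = c + GAMMA * q_values(s2).min(dim=1).values
    inp = torch.cat([s, a.unsqueeze(1)], dim=1)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(q_net(inp).squeeze(1), target).backward()
        opt.step()
```
The defining trait is that targets are frozen once per iteration and the network is refit on the entire transition batch, which is what lets NFQ extract a usable policy from minutes of interaction.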
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
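As a toy illustration of the APG approach summarized above, the sketch below backpropagates a rollout loss through hand-written differentiable dynamics into a steering policy. The single-track-style dynamics and lane-keeping loss are stand-in assumptions, not the paper's simulator:
```python
# Analytic policy gradients: unroll policy through differentiable dynamics,
# then backpropagate the accumulated loss into the policy parameters.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
DT = 0.1

def step(state, steer):
    """Differentiable toy dynamics: state = (lateral offset, heading)."""
    offset, heading = state[0], state[1]
    heading = heading + steer.squeeze() * DT
    offset = offset + torch.sin(heading) * DT
    return torch.stack([offset, heading])

for it in range(1000):
    state = torch.tensor([1.0, 0.0])   # start 1 m off the lane center
    loss = torch.tensor(0.0)
    for _ in range(50):                # unrolled horizon
        steer = policy(state)
        state = step(state, steer)
        loss = loss + state[0] ** 2    # penalize lateral error at every step
    opt.zero_grad()
    loss.backward()                    # gradients flow through the dynamics
    opt.step()
```
The gradient of the environment dynamics acts as the "prior" the summary mentions: the policy is told not just that the rollout was bad, but in which direction the controls should move.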
- Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, contemporary strategies are learning-based, characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
arXiv Detail & Related papers (2024-02-04T15:54:03Z)
- Learning to Fly in Seconds [7.259696592534715]
We show how curriculum learning and a highly optimized simulator enhance sample complexity and lead to fast training times.
Our framework enables Simulation-to-Reality (Sim2Real) transfer for direct control after only 18 seconds of training on a consumer-grade laptop.
arXiv Detail & Related papers (2023-11-22T01:06:45Z)
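The curriculum idea above can be illustrated with a simple schedule that widens the initial-state distribution as the agent improves; the thresholds and ranges below are assumptions for illustration, not values from the paper:
```python
# Toy curriculum: sample harder initial states as success improves.
import random

class Curriculum:
    def __init__(self):
        self.level = 0.1            # fraction of the full difficulty range

    def sample_initial_state(self):
        # Harder levels start farther from hover with a larger tilt.
        return {
            "position_error_m": random.uniform(0, 1.0 * self.level),
            "tilt_deg": random.uniform(0, 45.0 * self.level),
        }

    def update(self, recent_success_rate):
        # Advance only once the agent masters the current difficulty.
        if recent_success_rate > 0.8:
            self.level = min(1.0, self.level + 0.1)

cur = Curriculum()
cur.update(recent_success_rate=0.9)
print(cur.sample_initial_state())
```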
- Real-Time Model-Free Deep Reinforcement Learning for Force Control of a Series Elastic Actuator [56.11574814802912]
State-of-the-art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation.
Model-free PID control methods are more prone to instability due to nonlinearities in the SEA.
Deep reinforcement learning has proved to be an effective model-free method for continuous control tasks.
arXiv Detail & Related papers (2023-04-11T00:51:47Z)
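For context on the baseline that DRL replaces in the SEA paper above, a minimal discrete-time PID force loop might look like the following; the gains and the spring model are illustrative assumptions:
```python
# Minimal discrete-time PID controller for closed-loop force control.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# An SEA senses force via spring deflection (force ~ k * (motor_pos - load_pos)),
# so the PID loop regulates measured spring force by commanding the motor.
# Nonlinear friction and spring behavior, not modeled here, are what
# destabilize fixed gains and motivate the model-free DRL alternative.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.001)
command = pid.update(setpoint=10.0, measured=8.7)  # desired vs. sensed newtons
print(command)
```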
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
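A simplified stand-in for the trigger logic above: re-learning fires only when the observed cost statistically exceeds what the nominal LQR model predicts. The concrete test below is an assumption for illustration, not the paper's exact criterion:
```python
# Event-triggered learning: learn only when observed cost drifts from the model.
import statistics

def should_learn(observed_costs, predicted_cost, num_stds=3.0):
    """Trigger when the running mean cost exceeds the model's prediction
    by a confidence margin based on the standard error."""
    mean = statistics.mean(observed_costs)
    std = statistics.stdev(observed_costs)
    return mean > predicted_cost + num_stds * std / len(observed_costs) ** 0.5

window = [1.02, 0.98, 1.35, 1.40, 1.38, 1.45]  # recent per-episode costs
if should_learn(window, predicted_cost=1.0):
    print("trigger: collect data and re-identify the model")
```
The appeal of this pattern is that the system keeps running the cheap robust controller and pays the cost of learning only when the data say the model is wrong.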
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
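A hedged sketch of what the "tiny" onboard network in the entry above could look like, mapping a compact observation to acceleration and steering angle; the input features, layer sizes, and output scaling are assumptions:
```python
# A small deployment network predicting (acceleration, steering angle).
import torch
import torch.nn as nn

class TinyPlanner(nn.Module):
    def __init__(self, obs_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(),
            nn.Linear(32, 2), nn.Tanh(),     # both outputs squashed to [-1, 1]
        )

    def forward(self, obs):
        accel, steer = self.net(obs).unbind(-1)
        return accel * 3.0, steer * 0.5      # scale to m/s^2 and radians

planner = TinyPlanner()
obs = torch.randn(16)                        # assumed compact state features
accel, steer = planner(obs)
print(float(accel), float(steer))
```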
- Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC).
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
arXiv Detail & Related papers (2022-04-14T17:46:26Z)
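The short-horizon actor-critic flavor of SHAC can be sketched as follows: unroll the actor through a few differentiable simulation steps, then bootstrap the tail of the return with a smooth learned critic. The toy dynamics, reward, and sizes are assumptions, and critic training (regression to observed returns) is omitted for brevity:
```python
# Short-horizon rollouts through a differentiable sim, critic-bootstrapped.
import torch
import torch.nn as nn

actor = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
critic = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
a_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

def dyn(s, a):                     # differentiable toy point-mass dynamics
    pos, vel = s.unbind(-1)
    vel = vel + a.squeeze(-1) * 0.05
    pos = pos + vel * 0.05
    return torch.stack([pos, vel], dim=-1)

def actor_loss(s, horizon=8):
    total = torch.tensor(0.0)
    for _ in range(horizon):       # short unroll keeps gradients well-behaved
        s = dyn(s, actor(s))
        total = total - s[0] ** 2  # reward: stay near the origin
    # Bootstrap the truncated tail with the smooth critic, then negate
    # because we minimize the loss.
    return -(total + critic(s).squeeze())

s0 = torch.tensor([1.0, 0.0])
a_opt.zero_grad()
actor_loss(s0).backward()
a_opt.step()
```
Truncating the rollout and smoothing the tail with a critic is precisely how the summary's "problems with local minima" and exploding long-horizon gradients are kept in check.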
- Using Simulation Optimization to Improve Zero-shot Policy Transfer of Quadrotors [0.14999444543328289]
We show that it is possible to train low-level control policies with reinforcement learning entirely in simulation and deploy them on a quadrotor robot without using real-world data to fine-tune.
Our neural network-based policies use only onboard sensor data and run entirely on the embedded drone hardware.
arXiv Detail & Related papers (2022-01-04T22:32:05Z)
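One way to read the "simulation optimization" in the entry above is a search over simulator parameters so that simulated behavior matches reference measurements before any policy training; the toy model and data below are assumptions for illustration:
```python
# Grid-search simulator parameters to match reference measurements.
import numpy as np
from itertools import product

reference = np.array([0.0, 0.9, 1.75, 2.5])  # assumed measured climb rates (m/s)

def simulate(thrust_coeff, drag):
    """Tiny vertical-velocity model integrated over a few 0.1 s steps."""
    v, out = 0.0, []
    for _ in range(4):
        out.append(v)
        v += (thrust_coeff - drag * v) * 0.1
    return np.array(out)

best = min(
    product(np.linspace(5, 15, 21), np.linspace(0.0, 1.0, 11)),
    key=lambda p: np.sum((simulate(*p) - reference) ** 2),
)
print("fitted (thrust_coeff, drag):", best)
```
Once the simulator is tuned this way, the control policy itself can be trained purely in simulation, which is what enables the zero-shot deployment the summary describes.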
- Data-Efficient Deep Reinforcement Learning for Attitude Control of Fixed-Wing UAVs: Field Experiments [0.37798600249187286]
We show that DRL can successfully learn to perform attitude control of a fixed-wing UAV operating directly on the original nonlinear dynamics.
We deploy the learned controller on the UAV in flight tests, demonstrating comparable performance to the state-of-the-art ArduPlane proportional-integral-derivative (PID) attitude controller.
arXiv Detail & Related papers (2021-11-07T19:07:46Z)
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
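The two-level architecture described above can be sketched as a learned selector over primitives plus a conventional executor; the primitive set and the selection rule below are placeholders, not the paper's learned policy:
```python
# High-level primitive selection dispatched to a low-level executor.
import random

PRIMITIVES = ["trot", "walk", "stand"]        # assumed primitive set

def high_level_select(terrain_roughness):
    """Stands in for a learned policy mapping observations to a primitive."""
    if terrain_roughness > 0.7:
        return "walk"                          # slower, more stable gait
    return "trot"

def low_level_execute(primitive, t):
    """Stands in for an established controller (e.g. model-based leg
    control) tracking the gait pattern of the chosen primitive."""
    phase = (t * (2.0 if primitive == "trot" else 1.0)) % 1.0
    return {"primitive": primitive, "gait_phase": phase}

for t in range(3):
    prim = high_level_select(terrain_roughness=random.random())
    print(low_level_execute(prim, t * 0.01))
```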
- Vision-Based Autonomous Drone Control using Supervised Learning in Simulation [0.0]
We propose a vision-based control approach using Supervised Learning for autonomous navigation and landing of MAVs in indoor environments.
We trained a Convolutional Neural Network (CNN) that maps low-resolution images and sensor input to high-level control commands.
Our approach requires shorter training times than similar Reinforcement Learning approaches and can potentially overcome the limitations of manual data collection faced by comparable Supervised Learning approaches.
arXiv Detail & Related papers (2020-09-09T13:45:41Z)
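A hedged sketch of the pipeline the entry above describes: a small CNN fuses a low-resolution camera image with a sensor vector and outputs logits over discrete high-level commands. The resolution, channel counts, and command set are assumptions:
```python
# Small CNN fusing a low-resolution image with a sensor vector.
import torch
import torch.nn as nn

class DroneCNN(nn.Module):
    def __init__(self, sensor_dim=4, num_commands=5):
        super().__init__()
        self.conv = nn.Sequential(                 # 1x32x32 grayscale input
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),                           # -> 16 * 8 * 8 features
        )
        self.head = nn.Sequential(
            nn.Linear(16 * 8 * 8 + sensor_dim, 64), nn.ReLU(),
            nn.Linear(64, num_commands),            # logits over commands
        )

    def forward(self, image, sensors):
        feats = torch.cat([self.conv(image), sensors], dim=1)
        return self.head(feats)

net = DroneCNN()
logits = net(torch.randn(1, 1, 32, 32), torch.randn(1, 4))
print(logits.argmax(dim=1))   # index of a hypothetical command, e.g. 0=forward
```
Training such a network with supervised labels from a scripted expert, rather than RL rollouts, is what gives the shorter training times the summary claims.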
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.