Epersist: A Self Balancing Robot Using PID Controller And Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2207.11431v1
- Date: Sat, 23 Jul 2022 06:27:21 GMT
- Title: Epersist: A Self Balancing Robot Using PID Controller And Deep
Reinforcement Learning
- Authors: Ghanta Sai Krishna, Dyavat Sumith, Garika Akshay
- Abstract summary: A two-wheeled self-balancing robot is an example of an inverted pendulum and is an inherently non-linear, unstable system.
"Epersist" aims to overcome the challenge of counterbalancing an initially unstable system by delivering robust control mechanisms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A two-wheeled self-balancing robot is an example of an inverted pendulum and
is an inherently non-linear, unstable system. The fundamental concept of the
proposed framework "Epersist" is to overcome the challenge of counterbalancing
an initially unstable system by delivering robust control mechanisms:
Proportional-Integral-Derivative (PID) control and Reinforcement Learning (RL).
Moreover, the NodeMCU ESP32 micro-controller and inertial sensor in Epersist
require few computational steps to deliver accurate wheel-speed instructions
to the motor driver, which drives the wheels and balances the robot. The
framework also comprises the mathematical model of the PID controller and a
novel self-trained advantage actor-critic algorithm as the RL agent. After
several experiments, the control variables are calibrated to benchmark values
that attain the angle of static equilibrium. The "Epersist" framework offers
both PID- and RL-assisted functional prototypes and simulations for better
utility.
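The abstract names the PID law but does not reproduce it. For reference, the PID control law is u(t) = Kp·e(t) + Ki·∫e dt + Kd·de/dt, where e(t) is the tilt-angle error. Below is a minimal, hypothetical sketch of such a balance loop in Python; the gains, loop rate, and the sensor/motor helper functions are illustrative assumptions, not values or code from the paper (the actual prototype runs on the NodeMCU ESP32, typically in C/C++).

```python
import time

# Placeholder PID gains; the paper calibrates its own benchmark values
# experimentally, so these numbers are purely illustrative.
KP, KI, KD = 25.0, 0.5, 1.2
SETPOINT = 0.0   # target tilt angle (degrees) at static equilibrium
DT = 0.01        # control period in seconds (~100 Hz loop)

def read_tilt_angle():
    """Placeholder for the inertial-sensor read (e.g., complementary-
    filtered accelerometer/gyroscope data)."""
    raise NotImplementedError

def drive_motors(command):
    """Placeholder for sending a signed speed command to the motor driver."""
    raise NotImplementedError

def balance_loop():
    integral, prev_error = 0.0, 0.0
    while True:
        error = SETPOINT - read_tilt_angle()
        integral += error * DT
        derivative = (error - prev_error) / DT
        # Classic PID law: u = Kp*e + Ki*integral(e) + Kd*de/dt
        drive_motors(KP * error + KI * integral + KD * derivative)
        prev_error = error
        time.sleep(DT)
```

A sketch of the advantage actor-critic update that the abstract pairs with this PID loop appears after the related-papers list below.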
Related papers
- Leveraging Symmetry to Accelerate Learning of Trajectory Tracking Controllers for Free-Flying Robotic Systems [24.360194697715382]
Tracking controllers enable robotic systems to accurately follow planned reference trajectories.
In this work, we leverage the inherent Lie group symmetries of robotic systems with a floating base to mitigate these challenges when learning tracking controllers.
Results show that a symmetry-aware approach both accelerates training and reduces tracking error after the same number of training steps.
arXiv Detail & Related papers (2024-09-17T14:39:24Z)
- Modelling, Positioning, and Deep Reinforcement Learning Path Tracking Control of Scaled Robotic Vehicles: Design and Experimental Validation [3.807917169053206]
Scaled robotic cars are commonly equipped with a hierarchical control architecture that includes tasks dedicated to vehicle state estimation and control.
This paper covers both aspects by proposing (i) a federated extended Kalman filter (FEKF) and (ii) a novel deep reinforcement learning (DRL) path tracking controller trained via an expert demonstrator.
The experimentally validated model is used for (i) supporting the design of the FEKF and (ii) serving as a digital twin for training the proposed DRL-based path tracking algorithm.
arXiv Detail & Related papers (2024-01-10T14:40:53Z)
- Self-Tuning PID Control via a Hybrid Actor-Critic-Based Neural Structure for Quadcopter Control [0.0]
The Proportional-Integral-Derivative (PID) controller is used in a wide range of industrial and experimental processes.
Due to the uncertainty of model parameters and external disturbances, real systems such as quadrotors need more robust and reliable PID controllers.
In this research, a self-tuning PID controller using a Reinforcement-Learning-based Neural Network has been investigated.
arXiv Detail & Related papers (2023-07-03T19:35:52Z)
- Real-Time Model-Free Deep Reinforcement Learning for Force Control of a Series Elastic Actuator [56.11574814802912]
State-of-the-art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation.
Model-free PID control methods are more prone to instability due to nonlinearities in the SEA.
Deep reinforcement learning has proved to be an effective model-free method for continuous control tasks.
arXiv Detail & Related papers (2023-04-11T00:51:47Z)
- Autotuning PID control using Actor-Critic Deep Reinforcement Learning [0.0]
The study examines whether the model is able to predict PID parameters based on where an apple is located.
Initial tests show that the model is indeed able to adapt its predictions to apple locations, making it an adaptive controller.
arXiv Detail & Related papers (2022-11-29T11:15:50Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Automatic Rule Induction for Efficient Semi-Supervised Learning [56.91428251227253]
Semi-supervised learning has shown promise in allowing NLP models to generalize from small amounts of labeled data.
Pretrained transformer models act as black-box correlation engines that are difficult to explain and sometimes behave unreliably.
We propose tackling both of these challenges via Automatic Rule Induction (ARI), a simple and general-purpose framework.
arXiv Detail & Related papers (2022-05-18T16:50:20Z)
- Bayesian Optimization Meets Hybrid Zero Dynamics: Safe Parameter Learning for Bipedal Locomotion Control [17.37169551675587]
We propose a multi-domain control parameter learning framework for locomotion control of bipedal robots.
We leverage BO to learn the control parameters used in the HZD-based controller.
Next, the learning process is applied on the physical robot to learn corrections to the control parameters learned in simulation.
arXiv Detail & Related papers (2022-03-04T20:48:17Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory [85.29718245299341]
We study linear controllers under a quadratic cost model, also known as linear quadratic regulators (LQR).
We present two different semi-definite programs (SDPs) that yield a controller stabilizing all systems within an ellipsoidal uncertainty set.
We propose an efficient data-dependent algorithm, eXploration, that with high probability quickly identifies a stabilizing controller.
arXiv Detail & Related papers (2020-06-19T08:58:57Z)
- Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning [85.13138591433635]
The main drawbacks of input-output linearizing controllers are the need for precise dynamics models and the inability to account for input constraints.
In this paper, we address both challenges for the specific case of bipedal robot control by the use of reinforcement learning techniques.
arXiv Detail & Related papers (2020-04-15T18:15:49Z)
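The Epersist abstract above, like several related entries (the self-tuning and autotuning PID papers), relies on an advantage actor-critic agent. The paper does not publish its network or update rule, so the following is a generic one-step advantage actor-critic (A2C) update in Python/PyTorch, offered only as a sketch of the technique; the layer sizes, hyperparameters, and tensor interface are assumptions.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Tiny shared-trunk actor-critic network; sizes are illustrative."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.policy = nn.Linear(hidden, n_actions)  # actor head (logits)
        self.value = nn.Linear(hidden, 1)           # critic head

    def forward(self, obs):
        h = self.trunk(obs)
        return self.policy(h), self.value(h).squeeze(-1)

def a2c_update(model, optimizer, obs, action, reward, next_obs, done,
               gamma=0.99):
    """One-step advantage actor-critic update on a batch of transitions.

    advantage = r + gamma * V(s') - V(s); the actor is pushed toward
    actions with positive advantage, the critic toward the TD target.
    `done` is a float tensor (1.0 where the episode terminated).
    """
    logits, value = model(obs)
    with torch.no_grad():
        _, next_value = model(next_obs)
        target = reward + gamma * next_value * (1.0 - done)
    advantage = target - value
    log_prob = torch.distributions.Categorical(logits=logits).log_prob(action)
    actor_loss = -(log_prob * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    loss = actor_loss + 0.5 * critic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In an Epersist-style balancing task, `obs` would carry the tilt angle and angular rate from the inertial sensor, and the discrete actions would map to wheel commands; those details are illustrative rather than taken from the paper.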
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.