Towards Agrobots: Trajectory Control of an Autonomous Tractor Using
Type-2 Fuzzy Logic Controllers
- URL: http://arxiv.org/abs/2104.04123v1
- Date: Fri, 9 Apr 2021 00:46:23 GMT
- Title: Towards Agrobots: Trajectory Control of an Autonomous Tractor Using
Type-2 Fuzzy Logic Controllers
- Authors: Erdal Kayacan, Erkan Kayacan, Herman Ramon, Okyay Kaynak and Wouter
Saeys
- Abstract summary: In this study, a proportional-integral-derivative controller is used to control the longitudinal velocity of the tractor.
For the control of the yaw angle dynamics, a proportional-derivative controller works in parallel with a type-2 fuzzy neural network.
We develop a control algorithm which learns the interactions online from the measured feedback error.
- Score: 12.015055060690742
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing autonomous functions to an agricultural vehicle would
lighten the operator's workload, but in doing so, accuracy must not be lost if
an optimal yield is still to be obtained. Autonomous navigation of an agricultural
vehicle involves the control of different dynamic subsystems, such as the yaw
angle dynamics and the longitudinal speed dynamics. In this study, a
proportional-integral-derivative controller is used to control the longitudinal
velocity of the tractor. For the control of the yaw angle dynamics, a
proportional-derivative controller works in parallel with a type-2 fuzzy neural
network. In such an arrangement, the former ensures the stability of the
related subsystem, while the latter learns the system dynamics and becomes the
leading controller. In this way, instead of modeling the interactions between
the subsystems prior to the design of model-based control, we develop a control
algorithm which learns the interactions online from the measured feedback
error. In addition to the control of the stated subsystems, a kinematic
controller is needed to correct the errors along both the x- and y-axes for
the trajectory tracking problem of the tractor. To demonstrate the real-time
capabilities of the proposed control scheme, an autonomous tractor is equipped
with reasonably priced sensors and actuators. Experimental results
show the efficacy and efficiency of the proposed learning algorithm.
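The arrangement described in the abstract can be pictured with a short sketch. The following Python snippet is a minimal illustration under stated assumptions, not the authors' implementation: a PID loop handles the longitudinal speed, while a PD controller works in parallel with a toy interval type-2 fuzzy neural network (T2-FNN) for the yaw-angle dynamics, and the network's consequent weights are adapted online. The gains, membership-function layout, learning rate, and the simple gradient-style update driven by the PD output (in the spirit of feedback-error learning) are illustrative assumptions; the paper's actual type-2 fuzzy structure and adaptation laws differ.

```python
import numpy as np

class PID:
    """Plain PID controller, used here for the longitudinal velocity loop."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

class IntervalType2FNN:
    """Toy interval type-2 fuzzy neural network: Gaussian membership functions
    with uncertain width (a lower and an upper sigma); only the consequent
    weights are learned online."""
    def __init__(self, centers, sigma_lower, sigma_upper, learning_rate):
        self.centers = np.asarray(centers, dtype=float)
        self.sigma_lower = sigma_lower
        self.sigma_upper = sigma_upper
        self.lr = learning_rate
        self.weights = np.zeros_like(self.centers)  # consequent parameters

    def _firing(self, x, sigma):
        return np.exp(-0.5 * ((x - self.centers) / sigma) ** 2)

    def output(self, x):
        lower = self._firing(x, self.sigma_lower)
        upper = self._firing(x, self.sigma_upper)
        f = 0.5 * (lower + upper)  # crude type reduction: average of bounds
        return float(self.weights @ f / (np.sum(f) + 1e-9)), f

    def adapt(self, x, teaching_signal):
        # Gradient-style update of the consequent weights, driven by the
        # feedback (PD) signal so the network gradually takes over.
        _, f = self.output(x)
        self.weights += self.lr * teaching_signal * f / (np.sum(f) + 1e-9)

def yaw_control_step(yaw_error, yaw_error_rate, pd_kp, pd_kd, fnn):
    """One step of the yaw-angle loop: PD in parallel with the T2-FNN.
    The PD term keeps the loop stable while the network learns online."""
    u_pd = pd_kp * yaw_error + pd_kd * yaw_error_rate
    u_fnn, _ = fnn.output(yaw_error)
    fnn.adapt(yaw_error, u_pd)  # feedback-error-learning-style teaching signal
    return u_pd + u_fnn

# Hypothetical usage with illustrative gains (not values from the paper):
speed_pid = PID(kp=1.2, ki=0.3, kd=0.05, dt=0.05)
yaw_fnn = IntervalType2FNN(centers=np.linspace(-0.5, 0.5, 7),
                           sigma_lower=0.08, sigma_upper=0.15, learning_rate=0.02)
u_speed = speed_pid.step(error=0.4)                                  # m/s error
u_yaw = yaw_control_step(0.1, -0.02, pd_kp=2.0, pd_kd=0.4, fnn=yaw_fnn)
```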
Related papers
- Modelling, Positioning, and Deep Reinforcement Learning Path Tracking Control of Scaled Robotic Vehicles: Design and Experimental Validation [3.807917169053206]
Scaled robotic cars are commonly equipped with a hierarchical control architecture that includes tasks dedicated to vehicle state estimation and control.
This paper covers both aspects by proposing (i) a federated extended Kalman filter (FEKF) and (ii) a novel deep reinforcement learning (DRL) path tracking controller trained via an expert demonstrator.
The experimentally validated model is used for (i) supporting the design of the FEKF and (ii) serving as a digital twin for training the proposed DRL-based path tracking algorithm.
arXiv Detail & Related papers (2024-01-10T14:40:53Z)
- A Tricycle Model to Accurately Control an Autonomous Racecar with Locked Differential [71.53284767149685]
We present a novel formulation to model the effects of a locked differential on the lateral dynamics of an autonomous open-wheel racecar.
We include a micro-steps discretization approach to accurately linearize the dynamics and produce a prediction suitable for real-time implementation.
arXiv Detail & Related papers (2023-12-22T16:29:55Z)
- DATT: Deep Adaptive Trajectory Tracking for Quadrotor Control [62.24301794794304]
Deep Adaptive Trajectory Tracking (DATT) is a learning-based approach that can precisely track arbitrary, potentially infeasible trajectories in the presence of large disturbances in the real world.
DATT significantly outperforms competitive adaptive nonlinear and model predictive controllers for both feasible smooth and infeasible trajectories in unsteady wind fields.
It can efficiently run online with an inference time of less than 3.2 ms, under 1/4 that of the adaptive nonlinear model predictive control baseline.
arXiv Detail & Related papers (2023-10-13T12:22:31Z)
- Model-free tracking control of complex dynamical trajectories with machine learning [0.2356141385409842]
We develop a model-free, machine-learning framework to control a two-arm robotic manipulator.
We demonstrate the effectiveness of the control framework using a variety of periodic and chaotic signals.
arXiv Detail & Related papers (2023-09-20T17:10:10Z)
- Optimal State Manipulation for a Two-Qubit System Driven by Coherent and Incoherent Controls [77.34726150561087]
State preparation is important for optimal control of two-qubit quantum systems.
We exploit two physically different coherent controls and optimize the Hilbert-Schmidt overlap with target density matrices.
arXiv Detail & Related papers (2023-04-03T10:22:35Z)
- Designing a Robust Low-Level Agnostic Controller for a Quadrotor with Actor-Critic Reinforcement Learning [0.38073142980732994]
We introduce domain randomization during the training phase of a low-level waypoint guidance controller based on Soft Actor-Critic.
We show that, by introducing a certain degree of uncertainty in quadrotor dynamics during training, we can obtain a controller that is capable of performing the proposed task over a larger variation of quadrotor parameters.
arXiv Detail & Related papers (2022-10-06T14:58:19Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Comparative analysis of machine learning methods for active flow control [60.53767050487434]
Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques.
arXiv Detail & Related papers (2022-02-23T18:11:19Z)
- Data-Efficient Deep Reinforcement Learning for Attitude Control of Fixed-Wing UAVs: Field Experiments [0.37798600249187286]
We show that DRL can successfully learn to perform attitude control of a fixed-wing UAV operating directly on the original nonlinear dynamics.
We deploy the learned controller on the UAV in flight tests, demonstrating comparable performance to the state-of-the-art ArduPlane proportional-integral-derivative (PID) attitude controller.
arXiv Detail & Related papers (2021-11-07T19:07:46Z)
- Learning Adaptive Control for SE(3) Hamiltonian Dynamics [15.26733033527393]
This paper develops adaptive geometric control for rigid-body systems, such as ground, aerial, and underwater vehicles.
In the first stage, we learn a Hamiltonian model of the system dynamics using a neural ordinary differential equation network trained from state-control trajectory data.
In the second stage, we design a trajectory tracking controller with disturbance compensation from an energy-based perspective.
arXiv Detail & Related papers (2021-09-21T05:54:28Z)
- Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference [68.8204255655161]
We propose a neural architecture comprising a generative model for sensory prediction, and a distinct generative model for motor trajectories.
We highlight how sequences of sensory predictions can act as rails guiding learning, control and online adaptation of motor trajectories.
arXiv Detail & Related papers (2021-04-19T09:41:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.