Local Motion Planner for Autonomous Navigation in Vineyards with a RGB-D
Camera-Based Algorithm and Deep Learning Synergy
- URL: http://arxiv.org/abs/2005.12815v1
- Date: Tue, 26 May 2020 15:47:42 GMT
- Title: Local Motion Planner for Autonomous Navigation in Vineyards with a RGB-D
Camera-Based Algorithm and Deep Learning Synergy
- Authors: Diego Aghi, Vittorio Mazzia, Marcello Chiaberge
- Abstract summary: This study presents a low-cost local motion planner for autonomous navigation in vineyards.
The first algorithm exploits the disparity map and its depth representation to generate a proportional control for the robotic platform.
A second back-up algorithm, based on representation learning and resilient to illumination variations, can take control of the machine in case of a momentary failure of the first block.
- Score: 1.0312968200748118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the advent of agriculture 3.0 and 4.0, researchers are increasingly
focusing on the development of innovative smart farming and precision
agriculture technologies by introducing automation and robotics into the
agricultural processes. Autonomous agricultural field machines have been
gaining significant attention from farmers and industries to reduce costs,
human workload, and required resources. Nevertheless, achieving sufficient
autonomous navigation capabilities requires the simultaneous cooperation of
different processes; localization, mapping, and path planning are just some of
the steps that aim to provide the machine with the right set of skills to
operate in semi-structured and unstructured environments. In this context, this
study presents a low-cost local motion planner for autonomous navigation in
vineyards based only on an RGB-D camera, low-range hardware, and a dual-layer
control algorithm. The first algorithm exploits the disparity map and its depth
representation to generate a proportional control for the robotic platform.
Concurrently, a second back-up algorithm, based on representation learning and
resilient to illumination variations, can take control of the machine in case
of a momentary failure of the first block. Moreover, due to the dual
nature of the system, once the deep learning model has been trained on an
initial dataset, the strict synergy between the two algorithms opens the
possibility of exploiting new automatically labeled data, coming from the
field, to extend the existing model knowledge. The machine learning algorithm
has been trained and tested, using transfer learning, with images acquired
during different field surveys in northern Italy, and then optimized
for on-device inference with model pruning and quantization. Finally, the
overall system has been validated with a customized robot platform in the
relevant environment.
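The first control layer described in the abstract turns the RGB-D disparity/depth map into a proportional command for the platform. The following is a minimal sketch of that idea; the centroid-of-free-space heuristic, the depth threshold, and the gains are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def proportional_cmd(depth_map, max_depth=5.0, k_ang=1.5, v_lin=0.4):
    """Toy depth-based proportional controller for in-row navigation.

    The free corridor between two vine rows appears as the region with the
    largest depth values; the steering command is made proportional to the
    horizontal offset of that region from the image centre.
    """
    _, w = depth_map.shape
    # Pixels that look "far away" are taken as the end of the row.
    far_mask = depth_map > 0.8 * max_depth
    if not far_mask.any():
        # No free corridor detected: stop and let the backup layer take over.
        return 0.0, 0.0
    # Horizontal centroid of the far region, normalised to [-1, 1].
    cols = np.nonzero(far_mask)[1]
    offset = (cols.mean() - w / 2.0) / (w / 2.0)
    # Proportional law: turn towards the centre of the free corridor.
    return v_lin, -k_ang * offset

# Synthetic example: a 5 m deep corridor slightly right of the image centre.
depth = np.full((120, 160), 1.0)
depth[:, 90:120] = 5.0
v, omega = proportional_cmd(depth)
print(f"v = {v:.2f} m/s, omega = {omega:.2f} rad/s")
```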
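The abstract also mentions transfer learning followed by model pruning and quantization for on-device inference. The sketch below shows one plausible pipeline under assumed choices: a Keras/TensorFlow workflow, a MobileNetV2 backbone, three coarse steering classes, and TensorFlow Lite post-training quantization. None of these specifics are stated in the paper; they are illustrative only.

```python
import tensorflow as tf

# Transfer learning: reuse an ImageNet backbone and train a small head that
# maps camera frames to coarse steering classes (e.g. left / centre / right).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(field_dataset, epochs=10)  # images acquired during field surveys

# On-device optimization: post-training quantization with TensorFlow Lite.
# (Weight pruning, e.g. via the TensorFlow Model Optimization Toolkit, could
# be applied before this conversion step.)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("vineyard_backup_model.tflite", "wb") as f:
    f.write(tflite_model)
```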
Related papers
- Current applications and potential future directions of reinforcement learning-based Digital Twins in agriculture [2.699900017799093]
This review aims to categorize existing research employing reinforcement learning in agricultural settings by application domains like robotics, greenhouse management, irrigation systems, and crop management.
It also categorizes the reinforcement learning techniques used, including tabular methods, Deep Q-Networks (DQN), Policy Gradient methods, and Actor-Critic algorithms.
The review seeks to provide insights into the state-of-the-art in integrating Digital Twins and reinforcement learning in agriculture.
arXiv Detail & Related papers (2024-06-13T06:38:09Z) - Open-sourced Data Ecosystem in Autonomous Driving: the Present and Future [130.87142103774752]
This review systematically assesses over seventy open-source autonomous driving datasets.
It offers insights into various aspects, such as the principles underlying the creation of high-quality datasets.
It also delves into the scientific and technical challenges that warrant resolution.
arXiv Detail & Related papers (2023-12-06T10:46:53Z) - RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z) - Don't Start From Scratch: Leveraging Prior Data to Automate Robotic
Reinforcement Learning [70.70104870417784]
Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems.
In practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment.
In this work, we study how these challenges can be tackled by effective utilization of diverse offline datasets collected from previously seen tasks.
arXiv Detail & Related papers (2022-07-11T08:31:22Z) - Position-Agnostic Autonomous Navigation in Vineyards with Deep
Reinforcement Learning [1.2599533416395767]
We propose a lightweight solution that tackles autonomous vineyard navigation without relying on precise localization data, replacing task-tailored algorithms with a flexible learning-based approach.
We train an end-to-end sensorimotor agent which directly maps noisy depth images and position-agnostic robot state information to velocity commands and guides the robot to the end of a row, continuously adjusting its heading for a collision-free central trajectory.
arXiv Detail & Related papers (2022-06-28T17:03:37Z) - Navigational Path-Planning For All-Terrain Autonomous Agricultural Robot [0.0]
This report compares novel algorithms for autonomous navigation in farmlands.
A high-resolution grid map representation specific to Indian environments is considered.
Results demonstrated the applicability of the algorithms to autonomous field navigation and their feasibility for robotic path planning.
arXiv Detail & Related papers (2021-09-05T07:29:13Z) - SABER: Data-Driven Motion Planner for Autonomously Navigating
Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of the future state uncertainty considered in the SMPC finite-time-horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z) - Deep Semantic Segmentation at the Edge for Autonomous Navigation in
Vineyard Rows [0.0]
Precision agriculture aims at introducing affordable and effective automation into agricultural processes.
Our proposed control leverages the latest advancements in machine perception and edge AI to achieve highly affordable and reliable navigation inside vineyard rows.
The segmentation maps generated by the control algorithm itself could be directly exploited as filters for a vegetative assessment of the crop status.
arXiv Detail & Related papers (2021-07-01T18:51:58Z) - Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to achieve the design exactly without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z) - Domain Adaptive Robotic Gesture Recognition with Unsupervised
Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in multi-modal data for gesture recognition.
Results show that our approach recovers performance with substantial gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z) - A Robust Real-Time Computing-based Environment Sensing System for
Intelligent Vehicle [5.919822775295222]
We build a real-time advanced driver assistance system based on a low-power mobile platform.
The system is a real-time, multi-scheme integrated system that combines a stereo matching algorithm with a machine learning-based obstacle detection approach.
The experimental results show that the system can achieve robust and accurate real-time environment perception for intelligent vehicles.
arXiv Detail & Related papers (2020-01-27T10:47:50Z)