Towards Autonomous Crop-Agnostic Visual Navigation in Arable Fields
- URL: http://arxiv.org/abs/2109.11936v1
- Date: Fri, 24 Sep 2021 12:54:42 GMT
- Title: Towards Autonomous Crop-Agnostic Visual Navigation in Arable Fields
- Authors: Alireza Ahmadi, Michael Halstead, and Chris McCool
- Abstract summary: We introduce a vision-based navigation scheme which is able to reliably guide the robot through row-crop fields.
With the help of a novel crop-row detection and a novel crop-row switching technique, our navigation scheme can be deployed in a wide range of fields.
- Score: 2.6323812778809907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous navigation of a robot in agricultural fields is essential for
every task from crop monitoring through to weed management and fertilizer
application. Many current approaches rely on accurate GPS, however, such
technology is expensive and also prone to failure (e.g., through lack of
coverage). As such, navigation through sensors that can interpret their
environment (such as cameras) is important to achieve the goal of autonomy in
agriculture. In this paper, we introduce a purely vision-based navigation
scheme which is able to reliably guide the robot through row-crop fields.
Independent of any global localization or mapping, this approach is able to
accurately follow the crop-rows and switch between the rows, only using
on-board cameras. With the help of a novel crop-row detection and a novel
crop-row switching technique, our navigation scheme can be deployed in a wide
range of fields with different canopy types in various growth stages. We have
extensively tested our approach in five different fields under various
illumination conditions using our agricultural robotic platform (BonnBot-I).
Our evaluations show that we achieved a navigation accuracy of 3.82 cm
across these five crop fields.
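The paper itself does not include code here, but the row-following idea the abstract describes (detect the crop row in the on-board camera image, then steer to keep it centred) can be sketched compactly. The snippet below is a minimal, hypothetical Python sketch under assumed details: Excess-Green vegetation segmentation, a single fitted row line, and a proportional steering law. Function names such as `row_following_command` and all thresholds and gains are illustrative and are not the paper's actual crop-row detection or row-switching method.

```python
# Minimal sketch of a camera-only crop-row following loop, inspired by the
# navigation scheme described in the abstract. The ExG segmentation, the
# single-line row fit, and all thresholds/gains are illustrative assumptions,
# not the paper's method.
import numpy as np


def segment_vegetation(rgb: np.ndarray) -> np.ndarray:
    """Binary vegetation mask via the Excess Green index (ExG = 2g - r - b)."""
    img = rgb.astype(np.float32) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    exg = 2.0 * g - r - b
    return exg > 0.1  # assumed fixed threshold


def fit_crop_row(mask: np.ndarray):
    """Fit the dominant crop row as a line x = a*y + b through vegetation pixels."""
    ys, xs = np.nonzero(mask)
    if len(xs) < 50:  # not enough evidence for a row
        return None
    a, b = np.polyfit(ys, xs, deg=1)
    return a, b


def row_following_command(rgb: np.ndarray, v_forward: float = 0.3,
                          k_offset: float = 0.8, k_heading: float = 1.2):
    """Return (linear, angular) velocity that steers the robot onto the row."""
    h, w = rgb.shape[:2]
    row = fit_crop_row(segment_vegetation(rgb))
    if row is None:
        return 0.0, 0.0  # stop if no row is visible
    a, b = row
    x_bottom = a * (h - 1) + b                 # row position nearest the robot
    offset = (x_bottom - w / 2.0) / (w / 2.0)  # normalised lateral error
    heading = np.arctan(a)                     # row slope as heading error
    omega = -(k_offset * offset + k_heading * heading)
    return v_forward, omega


if __name__ == "__main__":
    frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in image
    print(row_following_command(frame))
```

A real deployment would add the row-switching behaviour at the field headland and a more robust multi-row detector, which is where the paper's contributions lie.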
Related papers
- AdaCropFollow: Self-Supervised Online Adaptation for Visual Under-Canopy Navigation [31.214318150001947]
Under-canopy agricultural robots can enable various applications like precise monitoring, spraying, weeding, and plant manipulation tasks.
We propose a self-supervised online adaptation method for adapting the semantic keypoint representation using a visual foundational model, geometric prior, and pseudo labeling.
This can enable fully autonomous row-following capability in under-canopy robots across fields and crops without requiring human intervention.
arXiv Detail & Related papers (2024-10-16T09:52:38Z)
- Lessons from Deploying CropFollow++: Under-Canopy Agricultural Navigation with Keypoints [4.825377557319356]
We present a vision-based navigation system for under-canopy agricultural robots using semantic keypoints.
Our system, CropFollow++, introduces a modular and interpretable perception architecture with a learned semantic keypoint representation.
arXiv Detail & Related papers (2024-04-26T22:46:17Z)
- A Vision-Based Navigation System for Arable Fields [7.338061223686544]
Vision-based navigation systems in arable fields are an underexplored area in agricultural robot navigation.
Current solutions are often crop-specific and address only limited individual conditions such as illumination or weed density.
This paper proposes a suite of deep learning-based perception algorithms using affordable vision sensors for vision-based navigation in arable fields.
arXiv Detail & Related papers (2023-09-21T12:01:59Z)
- Fast Traversability Estimation for Wild Visual Navigation [17.015268056925745]
We propose Wild Visual Navigation (WVN), an online self-supervised learning system for traversability estimation.
The system is able to continuously adapt from a short human demonstration in the field.
We demonstrate the advantages of our approach with experiments and ablation studies in challenging environments in forests, parks, and grasslands.
arXiv Detail & Related papers (2023-05-15T10:19:30Z)
- Vision-based Vineyard Navigation Solution with Automatic Annotation [2.6013566739979463]
We introduce a vision-based autonomous navigation framework for agriculture robots in trellised cropping systems such as vineyards.
We propose a novel learning-based method to estimate the path traversability heatmap directly from an RGB-D image.
A trained path detection model was used to develop a full navigation framework consisting of row tracking and row switching modules.
arXiv Detail & Related papers (2023-03-25T03:37:17Z)
- GNM: A General Navigation Model to Drive Any Robot [67.40225397212717]
A general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots.
We analyze the necessary design decisions for effective data sharing across robots.
We deploy the trained GNM on a range of new robots, including an underactuated quadrotor.
arXiv Detail & Related papers (2022-10-07T07:26:41Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z)
- Polyline Based Generative Navigable Space Segmentation for Autonomous Visual Navigation [57.3062528453841]
We propose a representation-learning-based framework to enable robots to learn the navigable space segmentation in an unsupervised manner.
We show that the proposed PSV-Nets can learn the visual navigable space with high accuracy, even without a single label.
arXiv Detail & Related papers (2021-10-29T19:50:48Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.