A Vision-Based Navigation System for Arable Fields
- URL: http://arxiv.org/abs/2309.11989v2
- Date: Tue, 28 May 2024 09:07:38 GMT
- Title: A Vision-Based Navigation System for Arable Fields
- Authors: Rajitha de Silva, Grzegorz Cielniak, Junfeng Gao
- Abstract summary: Vision-based navigation systems in arable fields are an underexplored area in agricultural robot navigation.
Current solutions are often crop-specific and aimed at addressing a limited set of individual conditions such as illumination or weed density.
This paper proposes a suite of deep learning-based perception algorithms using affordable vision sensors for vision-based navigation in arable fields.
- Score: 7.338061223686544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision-based navigation systems in arable fields are an underexplored area in agricultural robot navigation. Vision systems deployed in arable fields face challenges such as fluctuating weed density, varying illumination levels, growth stages and crop row irregularities. Current solutions are often crop-specific and aimed at addressing a limited set of individual conditions such as illumination or weed density. Moreover, the scarcity of comprehensive datasets hinders the development of generalised machine learning systems for navigating these fields. This paper proposes a suite of deep learning-based perception algorithms using affordable vision sensors for vision-based navigation in arable fields. Initially, a comprehensive dataset that captures the intricacies of multiple crop seasons, various crop types, and a range of field variations was compiled. Next, this study delves into the creation of robust infield perception algorithms capable of accurately detecting crop rows under diverse conditions such as different growth stages, weed density, and varying illumination. Further, it investigates the integration of crop row following with vision-based crop row switching for efficient field-scale navigation. The proposed infield navigation system was tested in commercial arable fields, traversing a total distance of 4.5 km with average heading and cross-track errors of 1.24° and 3.32 cm, respectively.
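The reported heading and cross-track errors measure how well the robot tracks the reference crop row: heading error is the angular offset between the robot's heading and the row direction, and cross-track error is the perpendicular distance from the robot to the row line. A minimal sketch of these two metrics follows; the pose and line representation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tracking_errors(robot_xy, robot_heading, row_point, row_direction):
    """Heading error (rad) and cross-track error (m) of a robot pose
    relative to a reference crop-row line.

    robot_xy      : (x, y) robot position in the field frame [m]
    robot_heading : robot heading angle in the field frame [rad]
    row_point     : any point on the reference crop-row line [m]
    row_direction : angle of the crop-row line in the field frame [rad]
    """
    # Heading error: wrapped angular difference between robot heading and row direction.
    heading_err = (robot_heading - row_direction + np.pi) % (2 * np.pi) - np.pi

    # Cross-track error: signed perpendicular distance from the robot to the row line.
    d = np.asarray(robot_xy, dtype=float) - np.asarray(row_point, dtype=float)
    row_normal = np.array([-np.sin(row_direction), np.cos(row_direction)])
    cross_track_err = float(d @ row_normal)
    return heading_err, cross_track_err

# Example: a robot 3 cm off the row centre, heading 1 degree off the row direction.
h, c = tracking_errors((0.0, 0.03), np.deg2rad(1.0), (0.0, 0.0), 0.0)
print(np.rad2deg(h), c)  # ~1.0 deg, ~0.03 m
```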
Related papers
- Helvipad: A Real-World Dataset for Omnidirectional Stereo Depth Estimation [83.841877607646]
We introduce Helvipad, a real-world dataset for omnidirectional stereo depth estimation.
The dataset includes accurate depth and disparity labels by projecting 3D point clouds onto equirectangular images.
We benchmark leading stereo depth estimation models for both standard and omnidirectional images.
arXiv Detail & Related papers (2024-11-27T13:34:41Z)
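For the Helvipad entry above, depth and disparity labels come from projecting LiDAR points into the equirectangular images. A minimal sketch of such a spherical projection is shown below; the axis convention (x right, y down, z forward) and image layout are assumptions, not the dataset's exact calibration.

```python
import numpy as np

def project_equirectangular(points, width, height):
    """Project Nx3 points (camera frame: x right, y down, z forward)
    onto an equirectangular image of size (height, width).

    Returns pixel coordinates (u, v) and the per-point range used as depth.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                                # range (depth label)
    lon = np.arctan2(x, z)                                            # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(y / np.maximum(r, 1e-9), -1.0, 1.0))      # latitude in [-pi/2, pi/2]

    u = (lon / (2 * np.pi) + 0.5) * width                             # horizontal pixel
    v = (lat / np.pi + 0.5) * height                                  # vertical pixel
    return u, v, r

# Example: a point 4 m straight ahead maps to the image centre.
u, v, r = project_equirectangular(np.array([[0.0, 0.0, 4.0]]), 1920, 960)
print(u, v, r)  # ~[960.] [480.] [4.]
```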
- Amirkabir campus dataset: Real-world challenges and scenarios of Visual Inertial Odometry (VIO) for visually impaired people [3.7998592843098336]
We introduce the Amirkabir campus dataset (AUT-VI) to address the mentioned problem and improve the navigation systems.
AUT-VI is a novel and super-challenging dataset with 126 diverse sequences in 17 different locations.
In support of ongoing development efforts, we have released the Android application for data capture to the public.
arXiv Detail & Related papers (2024-01-07T23:13:51Z)
- OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose an OccNeRF method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z)
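The OccNeRF entry above reparameterizes the occupancy field so that sampling covers the cameras' unbounded perceptive range. One common way to do this in unbounded-scene NeRF variants is a contraction that maps far-away coordinates into a bounded volume; the sketch below illustrates only that general idea and is not claimed to be OccNeRF's exact formulation.

```python
import numpy as np

def contract(x, eps=1e-9):
    """Map unbounded 3D coordinates into a bounded ball of radius 2.

    Points with ||x|| <= 1 are unchanged; farther points are squashed so that
    infinity maps to radius 2. Generic contraction for unbounded scenes, shown
    only to illustrate reparameterizing an infinite perceptive range for sampling.
    """
    x = np.asarray(x, dtype=float)
    n = np.linalg.norm(x, axis=-1, keepdims=True)
    scaled = (2.0 - 1.0 / np.maximum(n, eps)) * (x / np.maximum(n, eps))
    return np.where(n <= 1.0, x, scaled)

print(contract([0.5, 0.0, 0.0]))    # unchanged: [0.5 0. 0.]
print(contract([100.0, 0.0, 0.0]))  # squashed:  [1.99 0. 0.]
```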
- CWD30: A Comprehensive and Holistic Dataset for Crop Weed Recognition in Precision Agriculture [1.64709990449384]
We present the CWD30 dataset, a large-scale, diverse, holistic, and hierarchical dataset tailored for crop-weed recognition tasks in precision agriculture.
CWD30 comprises over 219,770 high-resolution images of 20 weed species and 10 crop species, encompassing various growth stages, multiple viewing angles, and environmental conditions.
The dataset's hierarchical taxonomy enables fine-grained classification and facilitates the development of more accurate, robust, and generalizable deep learning models.
arXiv Detail & Related papers (2023-05-17T09:39:01Z)
- Deep learning-based Crop Row Detection for Infield Navigation of Agri-Robots [10.386591972977207]
This paper presents a robust crop row detection algorithm that withstands field variations using inexpensive cameras.
A dataset of sugar beet images was created representing 11 field variations comprising multiple growth stages, light levels, varying weed densities, curved crop rows and discontinuous crop rows.
Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions outperforming the baseline.
arXiv Detail & Related papers (2022-09-09T12:47:24Z)
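The crop-row detection entry above predicts where crop rows lie in camera images. A minimal, generic baseline for turning a binary crop mask into a row line (scan image rows for crop pixels, then fit a line) is sketched below; it illustrates the task only and is not the paper's algorithm.

```python
import numpy as np

def fit_row_line(crop_mask):
    """Fit a single crop-row line to a binary segmentation mask.

    crop_mask : HxW boolean array, True where crop pixels were predicted.
    Returns (slope, intercept) of column = slope * row + intercept,
    or None if too few crop pixels were found.
    """
    rows, cols = [], []
    for v in range(crop_mask.shape[0]):          # scan each image row
        idx = np.flatnonzero(crop_mask[v])
        if idx.size:                             # centre of crop pixels in this row
            rows.append(v)
            cols.append(idx.mean())
    if len(rows) < 2:
        return None
    slope, intercept = np.polyfit(rows, cols, deg=1)   # least-squares line fit
    return slope, intercept

# Example: a synthetic vertical crop row near column 50.
mask = np.zeros((100, 100), dtype=bool)
mask[:, 48:53] = True
print(fit_row_line(mask))  # slope ~0, intercept ~50
```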
- VPAIR -- Aerial Visual Place Recognition and Localization in Large-scale Outdoor Environments [49.82314641876602]
We present a new dataset named VPAIR.
The dataset was recorded on board a light aircraft flying at an altitude of more than 300 meters above ground.
The dataset covers a more than one hundred kilometers long trajectory over various types of challenging landscapes.
arXiv Detail & Related papers (2022-05-23T18:50:08Z)
- Towards Space-to-Ground Data Availability for Agriculture Monitoring [0.0]
We present a space-to-ground dataset that contains Sentinel-1 radar and Sentinel-2 optical image time-series, as well as street-level images from the crowdsourcing platform Mapillary.
We train machine learning and deep learning algorithms on these different data domains and highlight the potential of fusion techniques for increasing the reliability of decisions.
arXiv Detail & Related papers (2022-05-16T14:35:48Z)
- Towards Autonomous Crop-Agnostic Visual Navigation in Arable Fields [2.6323812778809907]
We introduce a vision-based navigation scheme which is able to reliably guide the robot through row-crop fields.
With the help of a novel crop-row detection and a novel crop-row switching technique, our navigation scheme can be deployed in a wide range of fields.
arXiv Detail & Related papers (2021-09-24T12:54:42Z)
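The crop-agnostic navigation entry above combines row following with row switching to cover a whole field. A toy state-machine sketch of that behaviour is given below; the states and triggers are illustrative assumptions, not the paper's controller.

```python
from enum import Enum, auto

class NavState(Enum):
    FOLLOW_ROW = auto()   # servo on the detected crop row
    EXIT_ROW = auto()     # drive out of the row once its end is reached
    SWITCH_ROW = auto()   # turn in the headland and align with the next row
    DONE = auto()

def step(state, row_end_reached, next_row_detected, rows_remaining):
    """One transition of a simple field-coverage state machine."""
    if state is NavState.FOLLOW_ROW and row_end_reached:
        return NavState.DONE if rows_remaining == 0 else NavState.EXIT_ROW
    if state is NavState.EXIT_ROW:
        return NavState.SWITCH_ROW
    if state is NavState.SWITCH_ROW and next_row_detected:
        return NavState.FOLLOW_ROW
    return state

s = NavState.FOLLOW_ROW
s = step(s, row_end_reached=True, next_row_detected=False, rows_remaining=3)
print(s)  # NavState.EXIT_ROW
```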
- GANav: Group-wise Attention Network for Classifying Navigable Regions in Unstructured Outdoor Environments [54.21959527308051]
We present a new learning-based method for identifying safe and navigable regions in off-road terrains and unstructured environments from RGB images.
Our approach consists of classifying groups of terrain classes based on their navigability levels using coarse-grained semantic segmentation.
We show through extensive evaluations on the RUGD and RELLIS-3D datasets that our learning algorithm improves the accuracy of visual perception in off-road terrains for navigation.
arXiv Detail & Related papers (2021-03-07T02:16:24Z)
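The GANav entry above groups fine-grained terrain classes by navigability before segmenting. A sketch of that remapping step follows; the class names and groupings are illustrative, not the paper's exact taxonomy.

```python
import numpy as np

# Illustrative fine-grained terrain classes and navigability groups
# (not the exact RUGD/RELLIS-3D taxonomy used by GANav).
FINE_CLASSES = ["asphalt", "concrete", "grass", "gravel", "mud", "bush", "tree", "water"]
GROUPS = {"smooth": ["asphalt", "concrete"],
          "rough": ["grass", "gravel"],
          "bumpy": ["mud"],
          "obstacle": ["bush", "tree", "water"]}

# Build a lookup table: fine-grained label id -> navigability group id.
group_names = list(GROUPS)
lut = np.zeros(len(FINE_CLASSES), dtype=np.int64)
for fine_id, name in enumerate(FINE_CLASSES):
    lut[fine_id] = next(g for g, members in enumerate(GROUPS.values()) if name in members)

def to_navigability(label_map):
    """Remap an HxW map of fine-grained label ids to coarse navigability groups."""
    return lut[label_map]

labels = np.array([[0, 2], [6, 7]])   # asphalt, grass / tree, water
print(to_navigability(labels))        # [[0 1] [3 3]] -> smooth, rough / obstacle, obstacle
print(group_names)                    # ['smooth', 'rough', 'bumpy', 'obstacle']
```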
- A Flow Base Bi-path Network for Cross-scene Video Crowd Understanding in Aerial View [93.23947591795897]
In this paper, we strive to tackle the challenges and automatically understand the crowd from the visual data collected from drones.
To alleviate the background noise generated in cross-scene testing, a double-stream crowd counting model is proposed.
To tackle the crowd density estimation problem in extremely dark environments, we introduce synthetic data generated by the game Grand Theft Auto V (GTAV).
arXiv Detail & Related papers (2020-09-29T01:48:24Z)
- OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems [88.41004332322788]
We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and light-weighted deep neural networks for the omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
arXiv Detail & Related papers (2020-03-18T05:52:10Z)