Deep learning-based Crop Row Detection for Infield Navigation of
Agri-Robots
- URL: http://arxiv.org/abs/2209.04278v2
- Date: Thu, 10 Aug 2023 15:19:34 GMT
- Title: Deep learning-based Crop Row Detection for Infield Navigation of
Agri-Robots
- Authors: Rajitha de Silva, Grzegorz Cielniak, Gang Wang, Junfeng Gao
- Abstract summary: This paper presents a robust crop row detection algorithm that withstands field variations using inexpensive cameras.
A dataset of sugar beet images was created representing 11 field variations comprising multiple growth stages, light levels, varying weed densities, curved crop rows and discontinuous crop rows.
Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions, outperforming the baseline.
- Score: 10.386591972977207
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous navigation in agricultural environments is challenged by varying
field conditions that arise in arable fields. State-of-the-art solutions for
autonomous navigation in such environments require expensive hardware such as
RTK-GNSS. This paper presents a robust crop row detection algorithm that
withstands such field variations using inexpensive cameras. Existing datasets
for crop row detection do not represent all the possible field variations. A
dataset of sugar beet images was created representing 11 field variations
comprising multiple growth stages, light levels, varying weed densities, curved
crop rows and discontinuous crop rows. The proposed pipeline segments the crop
rows using a deep learning-based method and employs the predicted segmentation
mask for extraction of the central crop row using a novel central crop row
selection algorithm. The novel crop row detection algorithm was tested for crop
row detection performance and the capability of visual servoing along a crop
row. The visual servoing-based navigation was tested in a realistic simulation
scenario with real ground and plant textures. Our algorithm demonstrated
robust vision-based crop row detection in challenging field conditions,
outperforming the baseline.
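The abstract describes extracting the central crop row from a predicted segmentation mask. As a minimal illustrative sketch (not the paper's actual selection algorithm), one way to do this is to scan each image row of a binary mask, keep the mask pixel nearest the vertical centre line, and least-squares fit a line to those points; the fitted line can then serve as the reference for visual servoing. The function name and fitting strategy below are assumptions for illustration only.

```python
import numpy as np

def central_crop_row(mask: np.ndarray) -> tuple[float, float]:
    """Fit a line x = m*y + c (image coordinates) to the crop row
    closest to the image centre in a binary segmentation mask.

    Simplified illustration, not the paper's algorithm: for each image
    row, keep the mask pixel nearest the centre column, then fit a line
    through the kept points by least squares.
    """
    h, w = mask.shape
    centre = w / 2.0
    ys, xs = [], []
    for y in range(h):
        cols = np.flatnonzero(mask[y])
        if cols.size == 0:
            continue  # no crop pixels detected in this image row
        # pixel in this row closest to the vertical centre line
        xs.append(cols[np.argmin(np.abs(cols - centre))])
        ys.append(y)
    if len(ys) < 2:
        raise ValueError("mask contains no detectable crop row")
    # least-squares fit of x as a linear function of y
    m, c = np.polyfit(ys, xs, 1)
    return float(m), float(c)
```

A near-vertical fitted line (slope close to zero, intercept near the image centre) would indicate the robot is aligned with the row; the offset and slope could feed a steering controller.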
Related papers
- Real-time object detection and robotic manipulation for agriculture
using a YOLO-based learning approach [8.482182765640022]
This study presents a new framework that combines two separate architectures of convolutional neural networks (CNNs).
Crop images in a simulated environment are subjected to random rotations, cropping, brightness, and contrast adjustments to create augmented images for dataset generation.
The proposed method subsequently utilises the acquired image data via a Visual Geometry Group (VGG) model to reveal the grasping positions for robotic manipulation.
arXiv Detail & Related papers (2024-01-28T22:30:50Z)
- A Vision-Based Navigation System for Arable Fields [7.338061223686544]
Vision-based navigation systems in arable fields are an underexplored area in agricultural robot navigation.
Current solutions are often crop-specific and aimed to address limited individual conditions such as illumination or weed density.
This paper proposes a suite of deep learning-based perception algorithms using affordable vision sensors for vision-based navigation in arable fields.
arXiv Detail & Related papers (2023-09-21T12:01:59Z)
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z)
- Domain Adaptive Scene Text Detection via Subcategorization [45.580559833129165]
We study domain adaptive scene text detection, a largely neglected yet very meaningful task.
We design SCAST, a subcategory-aware self-training technique that mitigates the network overfitting and noisy pseudo labels.
SCAST achieves superior detection performance consistently across multiple public benchmarks.
arXiv Detail & Related papers (2022-12-01T09:15:43Z)
- Vision based Crop Row Navigation under Varying Field Conditions in Arable Fields [6.088167023055281]
We present a dataset for crop row detection with 11 field variations from Sugar Beet and Maize crops.
We also present a novel crop row detection algorithm for visual servoing in crop row fields.
arXiv Detail & Related papers (2022-09-28T11:23:34Z)
- Towards Infield Navigation: leveraging simulated data for crop row detection [6.088167023055281]
We suggest the utilization of small real-world datasets along with additional data generated by simulations to yield similar crop row detection performance as that of a model trained with a large real world dataset.
Our method could reach the performance of a deep learning based crop row detection model trained with real-world data by using 60% less labelled real-world data.
arXiv Detail & Related papers (2022-04-04T19:28:30Z)
- Towards agricultural autonomy: crop row detection under varying field conditions using deep learning [4.252146169134215]
This paper presents a novel metric to evaluate the robustness of deep learning based semantic segmentation approaches for crop row detection.
A dataset with ten main categories encountered under various field conditions was used for testing.
The effect of these conditions on the angular accuracy of crop row detection was compared.
arXiv Detail & Related papers (2021-09-16T23:12:08Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework at KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- Transferable Active Grasping and Real Embodied Dataset [48.887567134129306]
We show how to search for feasible viewpoints for grasping by the use of hand-mounted RGB-D cameras.
A practical 3-stage transferable active grasping pipeline is developed, that is adaptive to unseen clutter scenes.
In our pipeline, we propose a novel mask-guided reward to overcome the sparse reward issue in grasping and ensure category-irrelevant behavior.
arXiv Detail & Related papers (2020-04-28T08:15:35Z)
- Refined Plane Segmentation for Cuboid-Shaped Objects by Leveraging Edge Detection [63.942632088208505]
We propose a post-processing algorithm to align the segmented plane masks with edges detected in the image.
This allows us to increase the accuracy of state-of-the-art approaches, while limiting ourselves to cuboid-shaped objects.
arXiv Detail & Related papers (2020-03-28T18:51:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.