Vision based Crop Row Navigation under Varying Field Conditions in
Arable Fields
- URL: http://arxiv.org/abs/2209.14003v1
- Date: Wed, 28 Sep 2022 11:23:34 GMT
- Title: Vision based Crop Row Navigation under Varying Field Conditions in
Arable Fields
- Authors: Rajitha de Silva, Grzegorz Cielniak, Junfeng Gao
- Abstract summary: We present a dataset for crop row detection with 11 field variations from Sugar Beet and Maize crops.
We also present a novel crop row detection algorithm for visual servoing in crop row fields.
- Score: 6.088167023055281
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate crop row detection is often challenged by the varying field
conditions present in real-world arable fields. Traditional colour-based
segmentation cannot cater for all such variations. The lack of
comprehensive datasets in agricultural environments prevents researchers from
developing robust segmentation models to detect crop rows. We present a dataset
for crop row detection with 11 field variations from Sugar Beet and Maize
crops. We also present a novel crop row detection algorithm for visual servoing
in crop row fields. Our algorithm can detect crop rows against varying field
conditions such as curved crop rows, weed presence, discontinuities, growth
stages, tramlines, shadows and light levels. Our method only uses RGB images
from a front-mounted camera on a Husky robot to predict crop rows. Our method
outperformed the classic colour-based crop row detection baseline. Dense weed
presence within inter-row space and discontinuities in crop rows were the most
challenging field conditions for our crop row detection algorithm. Our method
can detect the end of the crop row and navigate the robot towards the headland
area when it reaches the end of the crop row.
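A minimal sketch of the kind of pipeline the abstract describes (RGB image → crop row mask → dominant row line → steering correction for visual servoing). The segmentation step is omitted, and the line-fit, gains and sign convention are illustrative assumptions, not details taken from the paper.
```python
import numpy as np

def row_line_from_mask(mask: np.ndarray):
    """Fit a single dominant crop-row line (column = a*row + b) to a
    binary crop mask (H x W, 1 = crop pixel) via least squares."""
    rows, cols = np.nonzero(mask)
    if rows.size < 2:
        return None  # no crop pixels visible (possible end of row)
    a, b = np.polyfit(rows, cols, 1)  # col ≈ a*row + b
    return a, b

def steering_command(mask: np.ndarray, k_offset=0.005, k_angle=1.0):
    """Convert lateral offset and heading of the fitted row line into an
    angular-velocity correction for visual servoing (gains are assumed)."""
    line = row_line_from_mask(mask)
    if line is None:
        return None  # caller should switch to a headland manoeuvre
    a, b = line
    h, w = mask.shape
    col_at_bottom = a * (h - 1) + b          # where the row meets the robot
    lateral_offset = col_at_bottom - w / 2   # pixels left/right of image centre
    heading_error = np.arctan(a)             # row slope vs. image vertical
    return -(k_offset * lateral_offset + k_angle * heading_error)
```
The mask here could come from any segmentation model; the returned `None` signals the end-of-row case mentioned in the abstract, where the robot hands over to headland navigation.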
Related papers
- Agtech Framework for Cranberry-Ripening Analysis Using Vision Foundation Models [1.5728609542259502]
We develop a framework for characterizing the ripening process of cranberry crops using aerial and ground imaging.
This work is the first of its kind and has future impact for cranberries and for other crops including wine grapes, olives, blueberries, and maize.
arXiv Detail & Related papers (2024-12-12T22:03:33Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
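The summary above mentions combining an asynchronous event stream with sparse RGB frames. A common preprocessing step for such data (not necessarily the one used in that paper) is to accumulate events between two timestamps into a signed event frame:
```python
import numpy as np

def accumulate_events(events, t0, t1, height, width):
    """Accumulate asynchronous events (x, y, t, polarity) with t0 <= t < t1
    into a signed per-pixel count image."""
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, t, p in events:
        if t0 <= t < t1:
            frame[y, x] += 1.0 if p > 0 else -1.0
    return frame
```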
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- HarvestNet: A Dataset for Detecting Smallholder Farming Activity Using Harvest Piles and Remote Sensing [50.4506590177605]
HarvestNet is a dataset for mapping the presence of farms in the Ethiopian regions of Tigray and Amhara during 2020-2023.
We introduce a new approach based on the detection of harvest piles characteristic of many smallholder systems.
We conclude that remote sensing of harvest piles can contribute to more timely and accurate cropland assessments in food insecure regions.
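As a toy illustration of scoring remote-sensing imagery for harvest piles (not the HarvestNet architecture, which the summary does not specify), one could classify fixed-size image tiles with a small CNN:
```python
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    """Minimal CNN that scores an RGB satellite tile for the presence of
    a harvest pile (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                    # x: (N, 3, H, W)
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))   # probability of a harvest pile
```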
arXiv Detail & Related papers (2023-08-23T11:03:28Z)
- Leaving the Lines Behind: Vision-Based Crop Row Exit for Agricultural Robot Navigation [6.088167023055281]
The proposed method could reach the end of the crop row and then navigate into the headland, completely leaving the crop row behind, with an error margin of 50 cm.
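A minimal sketch of this kind of row-exit behaviour (drive while a row is detected, then continue a fixed distance into the headland). The `robot` interface, the `detect_row` helper, and all distances and speeds are hypothetical, not taken from the paper.
```python
def exit_row(robot, detect_row, headland_distance_m=1.5, speed_mps=0.3):
    """Drive forward while a crop row is visible; once no row is detected,
    keep driving a fixed distance into the headland, then stop.
    `robot` is assumed to expose camera_image(), drive(linear, angular)
    and odometry_m() -- hypothetical interfaces for illustration."""
    while detect_row(robot.camera_image()) is not None:
        robot.drive(linear=speed_mps, angular=0.0)   # follow the row
    start = robot.odometry_m()
    while robot.odometry_m() - start < headland_distance_m:
        robot.drive(linear=speed_mps, angular=0.0)   # clear the crop row
    robot.drive(linear=0.0, angular=0.0)             # stop in the headland
```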
arXiv Detail & Related papers (2023-06-09T13:02:31Z)
- Deep learning-based Crop Row Detection for Infield Navigation of Agri-Robots [10.386591972977207]
This paper presents a robust crop row detection algorithm that withstands field variations using inexpensive cameras.
A dataset of sugar beet images was created representing 11 field variations, comprising multiple growth stages, light levels, varying weed densities, curved crop rows and discontinuous crop rows.
Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions, outperforming the baseline.
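Since the dataset above is organised by field variation, one simple way to report robustness is a per-variation average of a detection score; this loop is generic book-keeping, not the paper's evaluation code.
```python
from collections import defaultdict

def per_variation_scores(results):
    """results: iterable of (variation_name, score) pairs,
    e.g. ('dense_weeds', 0.71). Returns the mean score per variation."""
    buckets = defaultdict(list)
    for variation, score in results:
        buckets[variation].append(score)
    return {v: sum(s) / len(s) for v, s in buckets.items()}

# per_variation_scores([('shadows', 0.8), ('shadows', 0.9), ('tramlines', 0.7)])
# -> {'shadows': 0.85, 'tramlines': 0.7}
```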
arXiv Detail & Related papers (2022-09-09T12:47:24Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
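Of the three architecture families listed above, the CNN-regression option is the simplest to sketch: a standard backbone with a scalar regression head trained against measured yield. The ResNet-18 backbone, loss and learning rate are assumptions for illustration, not the study's exact setup.
```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative CNN-regression setup: ResNet-18 backbone, scalar yield output.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)   # predict yield (e.g. kg/vine)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, yields):
    """images: (N, 3, H, W) tensor, yields: (N,) tensor of measured yields."""
    optimizer.zero_grad()
    pred = model(images).squeeze(1)
    loss = criterion(pred, yields)
    loss.backward()
    optimizer.step()
    return loss.item()
```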
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- Towards Infield Navigation: leveraging simulated data for crop row detection [6.088167023055281]
We propose combining small real-world datasets with additional simulated data to achieve crop row detection performance similar to that of a model trained on a large real-world dataset.
Our method matched the performance of a deep learning-based crop row detection model trained with real-world data while using 60% less labelled real-world data.
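A hedged sketch of the mixed-data idea: concatenate a small labelled real-world dataset with a larger simulated one and train on the union. Only the `ConcatDataset`/`DataLoader` mechanism is standard PyTorch; the two dataset classes are placeholders, not from the paper.
```python
from torch.utils.data import ConcatDataset, DataLoader

# RealCropRowDataset and SimulatedCropRowDataset are hypothetical Dataset
# classes returning (image, row_label) pairs -- assumptions for illustration.
real = RealCropRowDataset(root="data/real", fraction=0.4)   # reduced real data
sim = SimulatedCropRowDataset(root="data/sim")

mixed = ConcatDataset([real, sim])
loader = DataLoader(mixed, batch_size=16, shuffle=True)

# A crop row detection model would then be trained on `loader`
# exactly as it would be on a purely real-world dataset.
```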
arXiv Detail & Related papers (2022-04-04T19:28:30Z)
- Towards agricultural autonomy: crop row detection under varying field conditions using deep learning [4.252146169134215]
This paper presents a novel metric to evaluate the robustness of deep learning based semantic segmentation approaches for crop row detection.
A dataset with ten main categories encountered under various field conditions was used for testing.
The effect of these conditions on the angular accuracy of crop row detection was compared.
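One natural way to measure the angular accuracy of crop row detection (not necessarily the paper's exact metric) is the absolute angle between a predicted and a ground-truth row line:
```python
import numpy as np

def angular_error_deg(pred_line, gt_line):
    """Each line is given as (a, b) with column = a*row + b in image space.
    Returns the absolute angle between the two lines in degrees."""
    angle_pred = np.arctan(pred_line[0])
    angle_gt = np.arctan(gt_line[0])
    return abs(np.degrees(angle_pred - angle_gt))
```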
arXiv Detail & Related papers (2021-09-16T23:12:08Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
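For reference, the Dice coefficient quoted above is computed from two binary masks as 2|A∩B| / (|A| + |B|); a minimal NumPy version:
```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Dice overlap between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```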
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Refined Plane Segmentation for Cuboid-Shaped Objects by Leveraging Edge Detection [63.942632088208505]
We propose a post-processing algorithm to align the segmented plane masks with edges detected in the image.
This allows us to increase the accuracy of state-of-the-art approaches, while limiting ourselves to cuboid-shaped objects.
arXiv Detail & Related papers (2020-03-28T18:51:43Z)
- Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis [110.30849704592592]
We present Agriculture-Vision: a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns.
Each image consists of RGB and Near-infrared (NIR) channels with resolution as high as 10 cm per pixel.
We annotate nine types of field anomaly patterns that are most important to farmers.
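Because each Agriculture-Vision image carries both RGB and NIR channels, a common derived quantity for such imagery is NDVI = (NIR − Red) / (NIR + Red). This small helper assumes the two bands are supplied as separate float arrays; the dataset's exact file layout is not assumed here.
```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-7):
    """Normalised Difference Vegetation Index from NIR and red bands,
    both given as arrays scaled to [0, 1]."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)
```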
arXiv Detail & Related papers (2020-01-05T20:19:33Z)