MaizeStandCounting (MaSC): Automated and Accurate Maize Stand Counting from UAV Imagery Using Image Processing and Deep Learning
- URL: http://arxiv.org/abs/2510.07580v1
- Date: Wed, 08 Oct 2025 21:56:27 GMT
- Title: MaizeStandCounting (MaSC): Automated and Accurate Maize Stand Counting from UAV Imagery Using Image Processing and Deep Learning
- Authors: Dewi Endah Kharismawati, Toni Kazic
- Abstract summary: We present MaizeStandCounting (MaSC), a robust algorithm for automated maize seedling stand counting from RGB imagery captured by low-cost UAVs. MaSC operates in two modes: (1) mosaic images divided into patches, and (2) raw video frames aligned using homography matrices. MaSC distinguishes maize from weeds and other vegetation, then performs row and range segmentation based on the spatial distribution of detections to produce precise row-wise stand counts.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate maize stand counts are essential for crop management and research, informing yield prediction, planting density optimization, and early detection of germination issues. Manual counting is labor-intensive, slow, and error-prone, especially across large or variable fields. We present MaizeStandCounting (MaSC), a robust algorithm for automated maize seedling stand counting from RGB imagery captured by low-cost UAVs and processed on affordable hardware. MaSC operates in two modes: (1) mosaic images divided into patches, and (2) raw video frames aligned using homography matrices. Both modes use a lightweight YOLOv9 model trained to detect maize seedlings from V2-V10 growth stages. MaSC distinguishes maize from weeds and other vegetation, then performs row and range segmentation based on the spatial distribution of detections to produce precise row-wise stand counts. Evaluation against in-field manual counts from our 2024 summer nursery showed strong agreement with ground truth (R^2 = 0.616 for mosaics, R^2 = 0.906 for raw frames). MaSC processed 83 full-resolution frames in 60.63 s, including inference and post-processing, highlighting its potential for real-time operation. These results demonstrate MaSC's effectiveness as a scalable, low-cost, and accurate tool for automated maize stand counting in both research and production environments.
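The row-wise counting step described above (grouping seedling detections into planting rows by their spatial distribution, then counting per row) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `count_by_row` function, the gap threshold, and the choice of clustering on the y-coordinate are all assumptions made for the example.

```python
# Hypothetical sketch of row-wise stand counting from detection centroids.
# Assumes planting rows run horizontally in the image, so rows separate
# along the y-axis; MaSC's actual segmentation logic may differ.
from typing import List, Tuple


def count_by_row(centroids: List[Tuple[float, float]],
                 row_gap: float = 50.0) -> List[int]:
    """Group detections into rows by y-coordinate and count each row.

    A new row starts whenever the gap between consecutive sorted
    y-values exceeds `row_gap` (pixels); the threshold is an assumed
    placeholder, not a parameter from the paper.
    """
    if not centroids:
        return []
    ys = sorted(c[1] for c in centroids)
    counts = [1]
    for prev, cur in zip(ys, ys[1:]):
        if cur - prev > row_gap:
            counts.append(1)    # gap exceeds threshold: start a new row
        else:
            counts[-1] += 1     # same row: increment its stand count
    return counts
```

In a pipeline like MaSC's, the per-row totals produced this way would then be compared against in-field manual counts to compute agreement metrics such as R^2.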
Related papers
- Maize Seedling Detection Dataset (MSDD): A Curated High-Resolution RGB Dataset for Seedling Maize Detection and Benchmarking with YOLOv9, YOLO11, YOLOv12 and Faster-RCNN
Stand counting determines how many plants germinated, guiding timely decisions such as replanting or adjusting inputs. We introduce MSDD, a high-quality aerial image dataset for maize seedling stand counting, with applications in early-season crop monitoring, yield prediction, and in-field management. MSDD contains three classes (single, double, and triple plants) capturing diverse growth stages, planting setups, soil types, lighting conditions, camera angles, and densities, ensuring robustness for real-world use.
arXiv Detail & Related papers (2025-09-18T17:41:59Z)
- MaizeEar-SAM: Zero-Shot Maize Ear Phenotyping
Grain yield per acre is calculated by multiplying the number of plants per acre, ears per plant, number of kernels per ear, and the average kernel weight. Traditional manual methods for measuring these two traits are time-consuming, limiting large-scale data collection. Our approach successfully identifies the number of kernels per row across a wide range of maize ears.
arXiv Detail & Related papers (2025-02-19T03:18:29Z)
- Retrieval Augmented Recipe Generation
We propose a retrieval augmented large multimodal model for recipe generation. It retrieves recipes semantically related to the image from an existing datastore as a supplement. It calculates the consistency among generated recipe candidates, which use different retrieved recipes as context for generation.
arXiv Detail & Related papers (2024-11-13T15:58:50Z)
- V2M: Visual 2-Dimensional Mamba for Image Representation Learning
Mamba has garnered widespread attention due to its flexible design and efficient hardware performance in processing 1D sequences.
Recent studies have attempted to apply Mamba to the visual domain by flattening 2D images into patches and then regarding them as a 1D sequence.
We propose a Visual 2-Dimensional Mamba model as a complete solution, which directly processes image tokens in the 2D space.
arXiv Detail & Related papers (2024-10-14T11:11:06Z)
- HarvestNet: A Dataset for Detecting Smallholder Farming Activity Using Harvest Piles and Remote Sensing
HarvestNet is a dataset for mapping the presence of farms in the Ethiopian regions of Tigray and Amhara during 2020-2023.
We introduce a new approach based on the detection of harvest piles characteristic of many smallholder systems.
We conclude that remote sensing of harvest piles can contribute to more timely and accurate cropland assessments in food insecure regions.
arXiv Detail & Related papers (2023-08-23T11:03:28Z)
- Stain-invariant self supervised learning for histopathology image analysis
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Transferring learned patterns from ground-based field imagery to predict UAV-based imagery for crop and weed semantic segmentation in precision crop farming
We have developed a deep convolutional network that performs weed segmentation on both ground-based field images and aerial images from UAVs.
The network learning process is visualized by feature maps at shallow and deep layers.
The study shows that the developed deep convolutional neural network could be used to classify weeds from both field and aerial images.
arXiv Detail & Related papers (2022-10-20T19:25:06Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- Image analysis for automatic measurement of crustose lichens
Lichens are frequently used as age estimators, especially in recent geological deposits and archaeological structures.
Current non-automated, manual lichen identification and measurement is a time-consuming and laborious process. This work presents a workflow and a set of image acquisition and processing tools to efficiently identify lichen thalli on flat rocky surfaces.
arXiv Detail & Related papers (2022-03-01T23:11:59Z)
- WheatNet: A Lightweight Convolutional Neural Network for High-throughput Image-based Wheat Head Detection and Counting
We propose a novel deep learning framework to accurately and efficiently count wheat heads to aid in the gathering of real-time data for decision making.
We call our model WheatNet and show that our approach is robust and accurate for a wide range of environmental conditions of the wheat field.
Our proposed method achieves an MAE and RMSE of 3.85 and 5.19 in our wheat head counting task, respectively, while having significantly fewer parameters when compared to other state-of-the-art methods.
arXiv Detail & Related papers (2021-03-17T02:38:58Z)
- A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery
We propose a novel deep learning method based on a Convolutional Neural Network (CNN). It simultaneously detects and geolocates plantation-rows while counting their plants, even in highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
arXiv Detail & Related papers (2020-12-31T18:51:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.