Detection of On-Ground Chestnuts Using Artificial Intelligence Toward Automated Picking
- URL: http://arxiv.org/abs/2602.14140v1
- Date: Sun, 15 Feb 2026 13:28:23 GMT
- Title: Detection of On-Ground Chestnuts Using Artificial Intelligence Toward Automated Picking
- Authors: Kaixuan Fang, Yuzhen Lu, Xinyang Mu
- Abstract summary: Traditional mechanized chestnut harvesting is too costly for small producers. Accurate, reliable detection of chestnuts on the orchard floor is crucial for developing low-cost, vision-guided automated harvesting technology. This study collected 319 images of chestnuts on the orchard floor, containing 6524 annotated chestnuts.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Traditional mechanized chestnut harvesting is too costly for small producers, non-selective, and prone to damaging nuts. Accurate, reliable detection of chestnuts on the orchard floor is crucial for developing low-cost, vision-guided automated harvesting technology. However, developing a reliable chestnut detection system faces challenges in complex environments with shading, varying natural light conditions, and interference from weeds, fallen leaves, stones, and other foreign on-ground objects, which have remained unaddressed. This study collected 319 images of chestnuts on the orchard floor, containing 6524 annotated chestnuts. A comprehensive set of 29 state-of-the-art real-time object detectors, including 14 in the YOLO (v11-v13) and 15 in the RT-DETR (v1-v4) families at varied model scales, was systematically evaluated through replicated modeling experiments for chestnut detection. Experimental results show that the YOLOv12m model achieves the best mAP@0.5 of 95.1% among all the evaluated models, while the RT-DETRv2-R101 was the most accurate variant among RT-DETR models, with mAP@0.5 of 91.1%. In terms of mAP@[0.5:0.95], the YOLOv11x model achieved the best accuracy of 80.1%. All models demonstrate significant potential for real-time chestnut detection, and YOLO models outperformed RT-DETR models in terms of both detection accuracy and inference speed, making them better suited for on-board deployment. Both the dataset and software programs in this study have been made publicly available at https://github.com/AgFood-Sensing-and-Intelligence-Lab/ChestnutDetection.
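The mAP@0.5 figures used above to rank the detectors can be illustrated with a minimal sketch. This is not the authors' evaluation code: it shows, for a single class, how detections are greedily matched to ground-truth boxes at an IoU threshold of 0.5 and how average precision is accumulated over the resulting precision-recall points.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, ground_truth, iou_thr=0.5):
    """AP for one class on one image set.

    detections: list of (confidence, box); each ground-truth box may be
    matched by at most one detection, highest-confidence first.
    """
    detections = sorted(detections, key=lambda d: -d[0])
    matched = set()
    tp, fp = [], []
    for conf, box in detections:
        # Greedily match to the best remaining ground-truth box.
        best, best_i = 0.0, None
        for i, gt in enumerate(ground_truth):
            if i in matched:
                continue
            v = iou(box, gt)
            if v > best:
                best, best_i = v, i
        if best_i is not None and best >= iou_thr:
            matched.add(best_i)
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    # Integrate precision over recall with a simple step sum.
    ap, tp_cum, fp_cum, prev_recall = 0.0, 0, 0, 0.0
    for t, f in zip(tp, fp):
        tp_cum += t; fp_cum += f
        recall = tp_cum / len(ground_truth)
        precision = tp_cum / (tp_cum + fp_cum)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# One correct detection and one false positive over two ground-truth
# boxes gives recall 0.5 at precision 1.0, hence AP = 0.5.
gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(0.9, (1, 1, 10, 10)), (0.8, (50, 50, 60, 60))]
print(average_precision(dets, gt))  # -> 0.5
```

mAP@0.5 averages this AP over classes; mAP@[0.5:0.95], the stricter metric quoted for YOLOv11x, additionally averages over IoU thresholds from 0.5 to 0.95 in steps of 0.05.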
Related papers
- BloomNet: Exploring Single vs. Multiple Object Annotation for Flower Recognition Using YOLO Variants [0.0]
This paper benchmarks several YOLO architectures such as YOLOv5s, YOLOv8n/s/m, and YOLOv12n for object detection under two annotation regimes. The FloralSix dataset, comprising 2,816 high-resolution photos of six different flower species, is also introduced.
arXiv Detail & Related papers (2026-02-20T19:47:45Z) - A Comparative Benchmark of Real-time Detectors for Blueberry Detection towards Precision Orchard Management [2.667064587590596]
This study presents a novel comparative benchmark analysis of advanced real-time object detectors. This dataset comprises 661 canopy images collected with smartphones during the 2022-2023 seasons. Among the YOLO models, YOLOv12m achieved the best accuracy with a mAP@50 of 93.3%.
arXiv Detail & Related papers (2025-09-24T21:42:24Z) - From Field to Drone: Domain Drift Tolerant Automated Multi-Species and Damage Plant Semantic Segmentation for Herbicide Trials [1.0483690290582848]
We present a general-purpose self-supervised visual model with hierarchical inference based on botanical taxonomy. The model significantly improved species identification (F1-score: 0.52 to 0.85, R-squared: 0.75 to 0.98) and damage classification (F1-score: 0.28 to 0.44, R-squared: 0.71 to 0.87) over prior models. It is now deployed in BASF's phenotyping pipeline, enabling large-scale, automated crop and weed monitoring across diverse geographies.
arXiv Detail & Related papers (2025-08-11T00:08:42Z) - WeedVision: Multi-Stage Growth and Classification of Weeds using DETR and RetinaNet for Precision Agriculture [0.0]
This research uses object detection models to identify and classify 16 weed species of economic concern across 174 classes. A robust dataset comprising 203,567 images was developed, meticulously labeled by species and growth stage. RetinaNet demonstrated superior performance, achieving a mean Average Precision (mAP) of 0.907 on the training set and 0.904 on the test set.
arXiv Detail & Related papers (2025-02-16T20:49:22Z) - Assessing the Capability of YOLO- and Transformer-based Object Detectors for Real-time Weed Detection [0.0]
All available models of YOLOv8, YOLOv9, YOLOv10, and RT-DETR are trained and evaluated with images from a real field situation. The results demonstrate that while all models perform equally well in the metrics evaluated, the YOLOv9 models stand out in terms of their strong recall scores. RT-DETR models, especially RT-DETR-l, excel in precision, reaching 82.44% on dataset 1 and 81.46% on dataset 2.
arXiv Detail & Related papers (2025-01-29T02:39:57Z) - Object Detection for Medical Image Analysis: Insights from the RT-DETR Model [40.593685087097995]
This paper focuses on the application of a novel detection framework based on the RT-DETR model for analyzing intricate image data. The proposed RT-DETR model, built on a Transformer-based architecture, excels at processing high-dimensional and complex visual data with enhanced robustness and accuracy.
arXiv Detail & Related papers (2025-01-27T20:02:53Z) - Robust Fine-tuning of Zero-shot Models via Variance Reduction [56.360865951192324]
When fine-tuning zero-shot models, our desideratum is for the fine-tuned model to excel in both in-distribution (ID) and out-of-distribution (OOD) settings.
We propose a sample-wise ensembling technique that can simultaneously attain the best ID and OOD accuracy without the trade-offs.
arXiv Detail & Related papers (2024-11-11T13:13:39Z) - CRTRE: Causal Rule Generation with Target Trial Emulation Framework [47.2836994469923]
We introduce a novel method called causal rule generation with target trial emulation framework (CRTRE).
CRTRE applies randomized trial design principles to estimate the causal effect of association rules.
We then incorporate such association rules for the downstream applications such as prediction of disease onsets.
arXiv Detail & Related papers (2024-11-10T02:40:06Z) - Diffusion Soup: Model Merging for Text-to-Image Diffusion Models [90.01635703779183]
We present Diffusion Soup, a compartmentalization method for Text-to-Image Generation that averages the weights of diffusion models trained on sharded data.
By construction, our approach enables training-free continual learning and unlearning with no additional memory or inference costs.
arXiv Detail & Related papers (2024-06-12T17:16:16Z) - Exploring the Effectiveness of Dataset Synthesis: An application of Apple Detection in Orchards [68.95806641664713]
We explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection.
We train a YOLOv5m object detection model to predict apples in a real-world apple detection dataset.
Results demonstrate that the model trained on generated data is slightly underperforming compared to a baseline model trained on real-world images.
arXiv Detail & Related papers (2023-06-20T09:46:01Z) - Uncertainty-inspired Open Set Learning for Retinal Anomaly Identification [71.06194656633447]
We establish an uncertainty-inspired open-set (UIOS) model, which was trained with fundus images of 9 retinal conditions.
Our UIOS model with thresholding strategy achieved an F1 score of 99.55%, 97.01% and 91.91% for the internal testing set.
UIOS correctly predicted high uncertainty scores, which would prompt the need for a manual check, on datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images.
arXiv Detail & Related papers (2023-04-08T10:47:41Z) - A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery [56.10033255997329]
We propose a novel deep learning method based on a Convolutional Neural Network (CNN).
It simultaneously detects and geolocates plantation-rows while counting their plants, considering highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
arXiv Detail & Related papers (2020-12-31T18:51:17Z) - Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluating on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.