PlantDet: A benchmark for Plant Detection in the Three-Rivers-Source
Region
- URL: http://arxiv.org/abs/2304.04963v3
- Date: Wed, 9 Aug 2023 09:15:52 GMT
- Title: PlantDet: A benchmark for Plant Detection in the Three-Rivers-Source
Region
- Authors: Huanhuan Li, Xuechao Zou, Yu-an Zhang, Jiangcai Zhaba, Guomei Li,
Lamao Yongga
- Abstract summary: We construct a dataset for Plant detection in the Three-River-Source region (PTRS)
It comprises 6,965 high-resolution images of 2160×3840 pixels spanning 21 plant types, captured by diverse sensors and platforms, and featuring objects of varying shapes and sizes.
The PTRS presents us with challenges such as dense occlusion, varying leaf resolutions, and high feature similarity among plants, prompting us to develop a novel object detection network named PlantDet.
- Score: 4.676030127116814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Three-River-Source region is a highly significant natural reserve in
China that harbors a plethora of botanical resources. To meet the practical
requirements of botanical research and intelligent plant management, we
construct a dataset for Plant detection in the Three-River-Source region
(PTRS). It comprises 6,965 high-resolution images of 2160×3840 pixels spanning
21 plant types, captured by diverse sensors and platforms, and featuring objects of varying
shapes and sizes. The PTRS presents us with challenges such as dense occlusion,
varying leaf resolutions, and high feature similarity among plants, prompting
us to develop a novel object detection network named PlantDet. This network
employs a window-based efficient self-attention module (ST block) to generate
robust feature representation at multiple scales, improving the detection
efficiency for small and densely-occluded objects. Our experimental results
validate the efficacy of our proposed plant detection benchmark, with a
precision of 88.1%, a mean average precision (mAP) of 77.6%, and a higher
recall compared to the baseline. Additionally, our method effectively mitigates
the problem of missed detections of small objects.
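The abstract describes a window-based efficient self-attention module (the ST block) as the key to robust multi-scale feature representation. Below is a minimal PyTorch sketch of window-restricted self-attention in that spirit; the class name, window size, head count, and the use of nn.MultiheadAttention are illustrative assumptions rather than the authors' implementation, and only a single scale is shown.

```python
# Minimal sketch of window-based self-attention (Swin-style), assumed here as an
# approximation of the "ST block" described in the abstract; not the authors' code.
import torch
import torch.nn as nn

class WindowSelfAttention(nn.Module):
    """Multi-head self-attention computed within non-overlapping windows."""
    def __init__(self, dim: int, window_size: int = 7, num_heads: int = 4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) feature map; H and W assumed divisible by window_size
        B, H, W, C = x.shape
        ws = self.window_size
        # Partition into (ws x ws) windows and flatten each window into a token sequence
        windows = (
            x.view(B, H // ws, ws, W // ws, ws, C)
             .permute(0, 1, 3, 2, 4, 5)
             .reshape(-1, ws * ws, C)
        )
        h = self.norm(windows)
        attn_out, _ = self.attn(h, h, h)   # attention restricted to each window
        windows = windows + attn_out       # residual connection
        # Reverse the window partition back to (B, H, W, C)
        return (
            windows.view(B, H // ws, W // ws, ws, ws, C)
                   .permute(0, 1, 3, 2, 4, 5)
                   .reshape(B, H, W, C)
        )

# Usage: attend over 7x7 windows of a 56x56 feature map with 96 channels
feat = torch.randn(2, 56, 56, 96)
out = WindowSelfAttention(dim=96, window_size=7, num_heads=4)(feat)
print(out.shape)  # torch.Size([2, 56, 56, 96])
```

Restricting attention to local windows keeps the cost roughly linear in image size, which is what makes this kind of block practical for high-resolution imagery containing many small, densely occluded plants.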
Related papers
- Investigation to answer three key questions concerning plant pest identification and development of a practical identification framework [2.388418486046813]
We develop an accurate, robust, and fast plant pest identification framework using 334K images.
Our two-stage plant pest identification framework achieved highly practical performance, with 91.0% mean accuracy and an 88.5% macro F1 score.
arXiv Detail & Related papers (2024-07-25T12:49:24Z)
- DETR Doesn't Need Multi-Scale or Locality Design [69.56292005230185]
This paper presents an improved DETR detector that maintains a "plain" nature.
It uses a single-scale feature map and global cross-attention calculations without specific locality constraints.
We show that two simple technologies are surprisingly effective within a plain design to compensate for the lack of multi-scale feature maps and locality constraints.
arXiv Detail & Related papers (2023-08-03T17:59:04Z)
- Semantics-Aware Next-best-view Planning for Efficient Search and Detection of Task-relevant Plant Parts [3.9074818653555554]
To automate harvesting and de-leafing of tomato plants, it is important to search and detect the task-relevant plant parts.
Current active-vision algorithms cannot differentiate between relevant and irrelevant plant parts.
We propose a semantics-aware active-vision strategy that uses semantic information to identify the relevant plant parts.
arXiv Detail & Related papers (2023-06-16T12:22:19Z)
- Eff-3DPSeg: 3D organ-level plant shoot segmentation using annotation-efficient point clouds [1.5882586857953638]
We propose a novel weakly supervised framework, Eff-3DPSeg, for 3D plant shoot segmentation.
High-resolution point clouds of soybean were reconstructed using a low-cost photogrammetry system.
A weakly-supervised deep learning method was proposed for plant organ segmentation.
arXiv Detail & Related papers (2022-12-20T14:09:37Z)
- PST: Plant segmentation transformer for 3D point clouds of rapeseed plants at the podding stage [5.010317705589445]
A deep learning network, the plant segmentation transformer (PST), is proposed for segmenting 3D point clouds of rapeseed plants at the podding stage.
PST is composed of: (i) a dynamic voxel feature encoder (DVFE) to aggregate point features at the raw spatial resolution; (ii) dual window-set attention blocks to capture contextual information; and (iii) a dense feature propagation module to obtain the final dense point feature map.
Results: PST and PST-PointGroup (PG) achieved superior performance in semantic and instance segmentation tasks.
arXiv Detail & Related papers (2022-06-27T06:56:48Z)
- SALISA: Saliency-based Input Sampling for Efficient Video Object Detection [58.22508131162269]
We propose SALISA, a novel non-uniform SALiency-based Input SAmpling technique for video object detection.
We show that SALISA significantly improves the detection of small objects.
arXiv Detail & Related papers (2022-04-05T17:59:51Z)
- A fast accurate fine-grain object detection model based on YOLOv4 deep neural network [0.0]
Early identification and prevention of various plant diseases in commercial farms and orchards is a key feature of precision agriculture technology.
This paper presents a high-performance real-time fine-grain object detection framework that addresses several obstacles in plant disease detection.
The proposed model is built on an improved version of the You Only Look Once (YOLOv4) algorithm.
arXiv Detail & Related papers (2021-10-30T17:56:13Z)
- AdaZoom: Adaptive Zoom Network for Multi-Scale Object Detection in Large Scenes [57.969186815591186]
Detection in large-scale scenes is a challenging problem due to small objects and extreme scale variation.
We propose a novel Adaptive Zoom (AdaZoom) network as a selective magnifier with flexible shape and focal length to adaptively zoom the focus regions for object detection.
arXiv Detail & Related papers (2021-06-19T03:30:22Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery [56.10033255997329]
We propose a novel deep learning method based on a Convolutional Neural Network (CNN)
It simultaneously detects and geolocates plantation-rows while counting their plants, even in highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
arXiv Detail & Related papers (2020-12-31T18:51:17Z)
- Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species (a minimal sketch of this idea appears after the list).
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
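The last entry above mentions a Siamese deep metric for scaling to new plant species. The following is a minimal, hypothetical sketch of that idea using a contrastive loss over CNN embeddings; the encoder architecture, margin, and loss choice are assumptions for illustration, not the paper's actual setup.

```python
# Illustrative sketch of a Siamese deep metric for plant species recognition,
# assuming a contrastive loss over CNN embeddings; not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeafEncoder(nn.Module):
    """Small CNN mapping a leaf image to a unit-norm embedding (hypothetical architecture)."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)

def contrastive_loss(z1, z2, same_species, margin: float = 0.5):
    """Pull same-species embeddings together, push different-species embeddings apart."""
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same_species * d.pow(2) +
                      (1 - same_species) * F.relu(margin - d).pow(2))

# Usage: a batch of image pairs with a 1/0 label for same/different species
encoder = LeafEncoder()
img_a, img_b = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(img_a), encoder(img_b), labels)
loss.backward()
```

Because the metric compares pairs rather than classifying into a fixed label set, new species can be handled at test time with only a few reference images.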
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.