A Real-time Low-cost Artificial Intelligence System for Autonomous
Spraying in Palm Plantations
- URL: http://arxiv.org/abs/2103.04132v1
- Date: Sat, 6 Mar 2021 15:05:14 GMT
- Title: A Real-time Low-cost Artificial Intelligence System for Autonomous
Spraying in Palm Plantations
- Authors: Zhenwang Qin, Wensheng Wang, Karl-Heinz Dammer, Leifeng Guo and Zhen
Cao
- Abstract summary: In precision crop protection, (target-orientated) object detection in image processing can help navigate Unmanned Aerial Vehicles (UAV, crop protection drones) to the right place to apply the pesticide.
We propose a solution based on a light deep neural network (DNN), called Ag-YOLO, which enables the crop protection UAV to detect targets and operate autonomously.
- Score: 1.6799377888527687
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In precision crop protection, (target-orientated) object detection in image
processing can help navigate Unmanned Aerial Vehicles (UAVs, crop protection
drones) to the right place to apply the pesticide, so that unnecessary
application to non-target areas can be avoided. Deep learning algorithms
dominate modern computer vision tasks, but they require high computing time, a
large memory footprint, and high power consumption. Within the framework of
Edge Artificial Intelligence, we investigate the three main paths to addressing
this problem: hardware accelerators, efficient algorithms, and model
compression. Finally, we integrate them and propose a solution based on a light
deep neural network (DNN), called Ag-YOLO, which enables the crop protection
UAV to detect targets and operate autonomously. This solution is small,
low-cost, flexible, fast, and energy-efficient. The hardware weighs only 18
grams and consumes 1.5 watts, and the developed DNN model needs only 838
kilobytes of disk space. We tested the developed hardware and software against
the tiny version of the state-of-the-art YOLOv3 framework, known as
YOLOv3-Tiny, on the task of detecting individual palms in a plantation. An
average F1 score of 0.9205 at 36.5 frames per second was reached (compared to
similar accuracy at 18 frames per second and an 8.66-megabyte model for
YOLOv3-Tiny). The developed detection system can be plugged into any existing
machine that has a USB port and runs the Linux operating system.
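The abstract does not include deployment code, but the plug-and-play setup it describes (a sub-megabyte detector on a USB-attached accelerator under a Linux host) can be illustrated with a short sketch. The snippet below uses OpenCV's DNN module with an Intel MYRIAD-class (Neural Compute Stick) target; the model file names, the 416x416 input resolution, and the choice of accelerator are assumptions made for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): run a lightweight YOLO-style detector
# on a USB edge accelerator attached to a Linux host.
# Assumed for illustration: OpenVINO IR files "ag_yolo.xml"/"ag_yolo.bin",
# a 416x416 network input, and an Intel MYRIAD-class (NCS) target device.
import cv2

net = cv2.dnn.readNet("ag_yolo.xml", "ag_yolo.bin")             # hypothetical model files
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)  # OpenVINO backend
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)              # USB accelerator target

cap = cv2.VideoCapture(0)                                       # onboard camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                 size=(416, 416), swapRB=True)
    net.setInput(blob)
    detections = net.forward()   # raw detector output; decode boxes and scores here
    # ...filter palm detections and trigger the sprayer when a target is confirmed...
```

On the host side such a pipeline needs only a USB port and a Linux driver for the accelerator, matching the plug-in deployment described in the abstract; the reported 36.5 FPS and 0.9205 F1 depend on the actual Ag-YOLO model and hardware, which this sketch does not reproduce.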
Related papers
- Real-Time Pedestrian Detection on IoT Edge Devices: A Lightweight Deep Learning Approach [1.4732811715354455]
This research explores implementing a lightweight deep learning model on Artificial Intelligence of Things (AIoT) edge devices.
An optimized You Only Look Once (YOLO) based DL model is deployed for real-time pedestrian detection.
The simulation results demonstrate that the optimized YOLO model can achieve real-time pedestrian detection, with a fast inference speed of 147 milliseconds, a frame rate of 2.3 frames per second, and an accuracy of 78%.
arXiv Detail & Related papers (2024-09-24T04:48:41Z) - A Deep Learning-based Pest Insect Monitoring System for Ultra-low Power Pocket-sized Drones [1.7945764007196348]
Smart farming and precision agriculture represent game-changer technologies for efficient and sustainable agribusiness.
Miniaturized palm-sized drones can act as flexible smart sensors inspecting crops, looking for early signs of potential pest outbreaks.
This work presents a novel vertically integrated solution featuring two ultra-low power System-on-Chips.
arXiv Detail & Related papers (2024-04-02T10:39:54Z) - Deep Neural Network Architecture Search for Accurate Visual Pose
Estimation aboard Nano-UAVs [69.19616451596342]
Miniaturized unmanned aerial vehicles (UAVs) are an emerging and trending topic.
We leverage a novel neural architecture search (NAS) technique to automatically identify several convolutional neural networks (CNNs) for a visual pose estimation task.
Our results improve on the state of the art by reducing the in-field control error by 32% while achieving a real-time onboard inference rate of 10Hz@10mW and 50Hz@90mW.
arXiv Detail & Related papers (2023-03-03T14:02:09Z) - ETAD: A Unified Framework for Efficient Temporal Action Detection [70.21104995731085]
Untrimmed video understanding such as temporal action detection (TAD) often suffers from the pain of huge demand for computing resources.
We build a unified framework for efficient end-to-end temporal action detection (ETAD).
ETAD achieves state-of-the-art performance on both THUMOS-14 and ActivityNet-1.3.
arXiv Detail & Related papers (2022-05-14T21:16:21Z) - Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge
TPU [58.720142291102135]
In this paper we propose pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z) - Lightweight Multi-Drone Detection and 3D-Localization via YOLO [1.284647943889634]
We present and evaluate a method to perform real-time multiple drone detection and three-dimensional localization.
We use the state-of-the-art tiny-YOLOv4 object detection algorithm and stereo triangulation.
Our computer vision approach eliminates the need for computationally expensive stereo matching algorithms.
arXiv Detail & Related papers (2022-02-18T09:41:23Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - CNN-based Omnidirectional Object Detection for HermesBot Autonomous
Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with 6 rolling-shutter cameras on the perimeter providing a 360-degree field of view was used as the experimental setup.
arXiv Detail & Related papers (2021-10-22T15:05:37Z) - Achieving Real-Time LiDAR 3D Object Detection on a Mobile Device [53.323878851563414]
We propose a compiler-aware unified framework incorporating network enhancement and pruning search with the reinforcement learning techniques.
Specifically, a generator Recurrent Neural Network (RNN) is employed to provide the unified scheme for both network enhancement and pruning search automatically.
The proposed framework achieves real-time 3D object detection on mobile devices with competitive detection performance.
arXiv Detail & Related papers (2020-12-26T19:41:15Z) - Accelerating Deep Learning Applications in Space [0.0]
We investigate the performance of CNN-based object detectors on constrained devices.
We take a closer look at the Single Shot MultiBox Detector (SSD) and the Region-based Fully Convolutional Network (R-FCN).
The performance is measured in terms of inference time, memory consumption, and accuracy.
arXiv Detail & Related papers (2020-07-21T21:06:30Z) - Real-Time Apple Detection System Using Embedded Systems With Hardware
Accelerators: An Edge AI Application [1.3764085113103222]
The proposed study adapts the YOLOv3-tiny architecture to detect small objects.
It shows the feasibility of deployment of the customized model on cheap and power-efficient embedded hardware.
The proposed embedded solution can be deployed on the unmanned ground vehicles to detect, count, and measure the size of the apples.
arXiv Detail & Related papers (2020-04-28T10:40:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.