Towards Scalable Insect Monitoring: Ultra-Lightweight CNNs as On-Device Triggers for Insect Camera Traps
- URL: http://arxiv.org/abs/2411.14467v1
- Date: Mon, 18 Nov 2024 15:46:39 GMT
- Title: Towards Scalable Insect Monitoring: Ultra-Lightweight CNNs as On-Device Triggers for Insect Camera Traps
- Authors: Ross Gardiner, Sareh Rowlands, Benno I. Simmons
- Abstract summary: Camera traps have emerged as a way to achieve automated, scalable biodiversity monitoring.
The passive infrared (PIR) sensors that trigger camera traps are poorly suited for detecting small, fast-moving ectotherms such as insects.
This study proposes an alternative to the PIR trigger: ultra-lightweight convolutional neural networks running on low-powered hardware.
- Score: 0.10713888959520207
- Abstract: Camera traps, combined with AI, have emerged as a way to achieve automated, scalable biodiversity monitoring. However, the passive infrared (PIR) sensors that trigger camera traps are poorly suited for detecting small, fast-moving ectotherms such as insects. Insects comprise over half of all animal species and are key components of ecosystems and agriculture. The need for an appropriate and scalable insect camera trap is critical in the wake of concerning reports of declines in insect populations. This study proposes an alternative to the PIR trigger: ultra-lightweight convolutional neural networks running on low-powered hardware to detect insects in a continuous stream of captured images. We train a suite of models to distinguish insect images from backgrounds. Our design achieves zero latency between trigger and image capture. Our models are rigorously tested and achieve strong performance ranging from 91.8% to 96.4% AUC on validation data and >87% AUC on data from distributions unseen during training. The high specificity of our models ensures minimal saving of false positive images, maximising deployment storage efficiency. High recall scores indicate a minimal false negative rate, maximising insect detection. Further analysis with saliency maps shows the learned representation of our models to be robust, with low reliance on spurious background features. Our system is also shown to operate when deployed on off-the-shelf, low-powered microcontroller units, consuming a maximum power draw of less than 300 mW. This enables longer deployment times using cheap and readily available battery components. Overall, we offer a step change in the cost, efficiency and scope of insect monitoring. Solving the challenging trigger problem, we demonstrate a system which can be deployed for far longer than existing designs and budgets power and bandwidth effectively, moving towards a generic insect camera trap.
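To make the trigger design concrete, below is a minimal sketch of the kind of ultra-lightweight insect-vs-background classifier and capture loop the abstract describes, assuming a Keras workflow of the sort commonly compiled to microcontrollers via TFLite Micro. The input resolution, layer sizes, and threshold are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of an on-device "insect vs. background" trigger.
# All hyperparameters here are illustrative, not the authors' design.
import tensorflow as tf

def build_trigger_model(input_shape=(96, 96, 3)):
    """Tiny depthwise-separable CNN small enough for a microcontroller."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
        tf.keras.layers.SeparableConv2D(16, 3, strides=2, activation="relu"),
        tf.keras.layers.SeparableConv2D(32, 3, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(insect)
    ])

model = build_trigger_model()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Trigger logic: classify each frame from the continuous stream and save
# it only when the insect score clears a threshold, so storage is spent
# almost exclusively on true positives.
THRESHOLD = 0.5  # tune for the specificity/recall trade-off reported above

def trigger(frame):
    """frame: (96, 96, 3) float array in [0, 1]. Returns True to save."""
    score = float(model(frame[None, ...], training=False)[0, 0])
    return score >= THRESHOLD
```

In a deployment, a model like this would be quantised and run per captured frame, giving the zero trigger-to-capture latency the abstract claims, since the frame being classified is itself the frame saved.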
Related papers
- Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector [97.92369017531038]
We build a new laRge-scale Adversarial images dataset with Diverse hArmful Responses (RADAR).
We then develop a novel iN-time Embedding-based AdveRSarial Image DEtection (NEARSIDE) method, which exploits a single vector, distilled from the hidden states of Visual Language Models (VLMs), to detect adversarial images against benign ones in the input.
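One plausible reading of that single-vector idea is sketched below: distil a direction in the VLM's hidden-state space as a difference of class means, then flag inputs whose hidden state projects strongly onto it. The function names, the mean-difference fit, and the threshold are assumptions, not necessarily NEARSIDE's exact procedure.

```python
# Hedged sketch of single-vector adversarial detection in embedding space.
import numpy as np

def fit_direction(benign_states, adversarial_states):
    """Each input: (n_samples, hidden_dim) hidden-state embeddings."""
    return adversarial_states.mean(axis=0) - benign_states.mean(axis=0)

def is_adversarial(hidden_state, direction, threshold=0.0):
    # A large projection onto the distilled direction flags the input
    # image as adversarial rather than benign.
    return float(hidden_state @ direction) > threshold
```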
arXiv Detail & Related papers (2024-10-30T10:33:10Z)
- CamLoPA: A Hidden Wireless Camera Localization Framework via Signal Propagation Path Analysis [59.86280992504629]
CamLoPA is a training-free wireless camera detection and localization framework.
It operates with minimal activity space constraints using low-cost commercial-off-the-shelf (COTS) devices.
It achieves 95.37% snooping camera detection accuracy and an average localization error of 17.23 degrees under significantly reduced activity space requirements.
arXiv Detail & Related papers (2024-09-23T16:23:50Z)
- Multisensor Data Fusion for Automatized Insect Monitoring (KInsecta) [32.57872751877726]
This paper presents a multisensor approach that uses AI-based data fusion for insect classification.
The system is designed as a low-cost setup consisting of a camera module and an optical wing beat sensor.
First tests on a small, very unbalanced dataset with 7 species show promising results for species classification.
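As a hedged illustration of what AI-based fusion of the two sensors could look like, here is a simple late-fusion sketch; the weighted-average rule and the weights are assumptions, not the KInsecta system's actual fusion model.

```python
# Illustrative late fusion of camera and wing-beat classifier outputs.
import numpy as np

def fuse_predictions(p_camera, p_wingbeat, w_camera=0.6):
    """Weighted average of per-species probabilities from both sensors."""
    p = w_camera * p_camera + (1.0 - w_camera) * p_wingbeat
    return int(np.argmax(p))  # index of the predicted species

# Example with the 7 species mentioned above (probabilities are made up):
p_cam = np.array([0.70, 0.10, 0.05, 0.05, 0.04, 0.03, 0.03])
p_wing = np.array([0.40, 0.35, 0.05, 0.05, 0.05, 0.05, 0.05])
species = fuse_predictions(p_cam, p_wing)
```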
arXiv Detail & Related papers (2024-04-29T08:46:43Z)
- Removing Human Bottlenecks in Bird Classification Using Camera Trap Images and Deep Learning [0.14746127876003345]
Monitoring bird populations is essential for ecologists.
Technology such as camera traps, acoustic monitors and drones provide methods for non-invasive monitoring.
There are two main problems with using camera traps for monitoring, notably that cameras generate many images, making it difficult to process and analyse the data in a timely manner.
In this paper, we outline an approach for overcoming these issues by utilising deep learning for real-time classification of bird species.
arXiv Detail & Related papers (2023-05-03T13:04:39Z)
- Evaluation of the potential of Near Infrared Hyperspectral Imaging for monitoring the invasive brown marmorated stink bug [53.682955739083056]
The brown marmorated stink bug (BMSB), Halyomorpha halys, is an invasive insect pest of global importance that damages several crops.
The present study is a preliminary, laboratory-level evaluation of Near Infrared Hyperspectral Imaging (NIR-HSI) as a possible technology to detect BMSB specimens.
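A minimal sketch of how NIR-HSI screening for BMSB might work is given below, assuming a per-pixel spectral classifier; the logistic-regression choice, the synthetic placeholder data, and the pixel-count decision rule are illustrative, not the study's method.

```python
# Hedged sketch: classify each pixel's reflectance spectrum, then flag
# images containing enough "insect" pixels. Training data is a placeholder.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 224))    # placeholder labelled spectra
y_train = rng.integers(0, 2, size=200)   # 1 = BMSB pixel, 0 = background
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def contains_bmsb(cube, min_pixels=50):
    """cube: (H, W, n_wavelengths) hyperspectral image; flag likely BMSB."""
    spectra = cube.reshape(-1, cube.shape[-1])  # one spectrum per pixel
    return int(clf.predict(spectra).sum()) >= min_pixels
```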
arXiv Detail & Related papers (2023-01-19T11:37:20Z)
- Fewer is More: Efficient Object Detection in Large Aerial Images [59.683235514193505]
This paper presents an Objectness Activation Network (OAN) that helps detectors focus on fewer patches while achieving more efficient inference and more accurate results.
Using OAN, all five detectors achieve more than a 30.0% speed-up on three large-scale aerial image datasets.
We extend our OAN to driving-scene object detection and 4K video object detection, boosting the detection speed by 112.1% and 75.0%, respectively.
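The objectness-gating idea can be sketched as follows; `oan` and `detector` are hypothetical callables standing in for the cheap objectness network and the full detector, and the patch size and threshold are illustrative, not the paper's settings.

```python
# Sketch: score fixed-size patches of a large aerial image with a cheap
# objectness network; run the expensive detector only where it pays off.
def detect_with_oan(image, oan, detector, patch=1024, threshold=0.3):
    """image: H x W x C array; returns boxes in full-image coordinates."""
    boxes = []
    h, w = image.shape[:2]
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = image[y:y + patch, x:x + patch]
            if oan(tile) >= threshold:  # cheap objectness score in [0, 1]
                for (x1, y1, x2, y2) in detector(tile):  # few tiles only
                    boxes.append((x1 + x, y1 + y, x2 + x, y2 + y))
    return boxes
```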
arXiv Detail & Related papers (2022-12-26T12:49:47Z)
- Motion Informed Object Detection of Small Insects in Time-lapse Camera Recordings [1.3965477771846408]
We present a pipeline for detecting insects in time-lapse RGB images.
The Motion-Informed-Enhancement technique uses motion and colors to enhance insects in images.
The method improves the deep learning object detectors You Only Look Once (YOLO) and Faster Region-based CNN (Faster R-CNN).
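A hedged sketch of a motion-informed enhancement step is given below, assuming simple frame differencing blended with the color image; the exact enhancement the paper uses may differ.

```python
# Sketch: blend a motion map (difference against the previous time-lapse
# frame) into the color image so small insects stand out for a detector.
import numpy as np

def enhance(frame, prev_frame, alpha=0.5):
    """frame, prev_frame: float32 RGB arrays in [0, 1], same shape."""
    motion = np.abs(frame - prev_frame).max(axis=-1, keepdims=True)
    motion /= motion.max() + 1e-8  # normalise motion map to [0, 1]
    return np.clip(frame * (1.0 - alpha) + motion * alpha, 0.0, 1.0)
```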
arXiv Detail & Related papers (2022-12-01T10:54:06Z)
- A Multi-Stage model based on YOLOv3 for defect detection in PV panels based on IR and Visible Imaging by Unmanned Aerial Vehicle [65.99880594435643]
We propose a novel model to detect panel defects in aerial images captured by an unmanned aerial vehicle.
The model combines detections of panels and defects to refine its accuracy.
The proposed model has been validated on two large PV plants in the south of Italy.
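The panel-then-defect refinement can be sketched as a two-stage crop pipeline; `panel_model` and `defect_model` are hypothetical detectors returning bounding boxes, standing in for the paper's YOLOv3-based stages.

```python
# Sketch: detect panels first, then search for defects only inside each
# panel crop, mapping defect boxes back to full-image coordinates.
def detect_defects(image, panel_model, defect_model):
    defects = []
    for (x1, y1, x2, y2) in panel_model(image):          # stage 1: panels
        crop = image[y1:y2, x1:x2]
        for (dx1, dy1, dx2, dy2) in defect_model(crop):  # stage 2: defects
            defects.append((dx1 + x1, dy1 + y1, dx2 + x1, dy2 + y1))
    return defects
```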
arXiv Detail & Related papers (2021-11-23T08:04:32Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
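A generic sketch of a momentum-based patch update of the kind the name suggests follows; the gradient normalisation and sign step follow common momentum attacks (MI-FGSM-style) and are assumptions, not the paper's exact objective.

```python
# Sketch: accumulate normalised gradients in a velocity term so the patch
# update direction is stabilised across iterations of the attack.
import numpy as np

def momentum_patch_step(patch, grad, velocity, mu=0.9, lr=0.01):
    """One ascent step on the attack objective w.r.t. the patch pixels."""
    velocity = mu * velocity + grad / (np.abs(grad).mean() + 1e-12)
    patch = np.clip(patch + lr * np.sign(velocity), 0.0, 1.0)
    return patch, velocity
```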
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- Enhancing LGMD's Looming Selectivity for UAV with Spatial-temporal Distributed Presynaptic Connections [5.023891066282676]
In nature, flying insects with simple visual systems demonstrate their remarkable ability to navigate and avoid collision in complex environments.
The lobula giant movement detector (LGMD), a visual neuron found in flying insects, is considered an ideal basis for building a UAV collision-detection system.
Existing LGMD models cannot clearly distinguish looming from other visual cues such as complex background movements.
We propose a new model implementing distributed spatial-temporal synaptic interactions.
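For orientation, here is a textbook-style caricature of an LGMD looming signal: excitation from luminance change suppressed by delayed, spatially spread inhibition, so expanding edges dominate over whole-field motion. The paper's distributed spatial-temporal presynaptic connections refine this basic scheme in ways this sketch does not reproduce.

```python
# Caricature of excitation-minus-delayed-inhibition looming detection.
import numpy as np
from scipy.ndimage import uniform_filter

def lgmd_step(frame, prev_frame, prev_excitation, threshold=0.15):
    """frame, prev_frame, prev_excitation: 2-D float arrays, same shape.
    Initialise prev_excitation with zeros on the first call."""
    excitation = np.abs(frame - prev_frame)               # luminance change
    inhibition = uniform_filter(prev_excitation, size=5)  # delayed + spread
    activity = np.maximum(excitation - inhibition, 0.0)
    looming = activity.mean() > threshold                 # "membrane" response
    return looming, excitation
```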
arXiv Detail & Related papers (2020-05-09T09:15:02Z)
- A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments [8.645725394832969]
Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots.
We propose an STMD-based neural network with feedback connection (Feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses.
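The time-delay feedback loop can be sketched generically as below; `lower` and `higher` are hypothetical stand-ins for the network's layer stages (assumed shape-compatible with the input frames), and the subtraction rule, gain, and delay length are illustrative assumptions.

```python
# Sketch: the network output from `delay` steps ago is fed back to
# modulate the lower-layer input, as the Feedback STMD summary describes.
from collections import deque

def run_feedback_stmd(frames, lower, higher, delay=3, gain=0.5):
    """frames: iterable of 2-D arrays; lower/higher: callables (layers)."""
    buffer = deque([None] * delay, maxlen=delay)  # holds delayed outputs
    outputs = []
    for frame in frames:
        fb = buffer[0]  # output from `delay` steps ago (None at start)
        x = frame if fb is None else frame - gain * fb  # feedback mediation
        out = higher(lower(x))
        buffer.append(out)
        outputs.append(out)
    return outputs
```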
arXiv Detail & Related papers (2019-12-29T03:10:36Z)