Motion Informed Object Detection of Small Insects in Time-lapse Camera Recordings
- URL: http://arxiv.org/abs/2212.00423v2
- Date: Thu, 29 Jun 2023 15:01:00 GMT
- Authors: Kim Bjerge, Carsten Eie Frigaard and Henrik Karstoft
- Abstract summary: We present a method pipeline for detecting insects in time-lapse RGB images.
Motion-Informed-Enhancement technique uses motion and colors to enhance insects in images.
The method improves the deep learning object detectors You Only Look Once (YOLO) and Faster Region-based CNN (Faster R-CNN).
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Insects as pollinators play a crucial role in ecosystem management and world
food production. However, insect populations are declining, calling for
efficient methods of insect monitoring. Existing methods analyze video or
time-lapse images of insects in nature, but the analysis is challenging since
insects are small objects in complex and dynamic scenes of natural vegetation.
In this work, we provide a dataset consisting primarily of honeybees visiting
three different plant species during two months of the summer period. The
dataset comprises 107,387 annotated time-lapse images from multiple cameras,
including 9,423 annotated insects. We present a two-step method pipeline for
detecting insects in time-lapse RGB images. First, the time-lapse RGB images
are preprocessed with a Motion-Informed-Enhancement technique, which uses
motion and color information to make insects stand out. Second, the enhanced
images are fed into a Convolutional Neural Network (CNN) object detector. The
method improves
the deep learning object detectors You Only Look Once (YOLO) and Faster
Region-based CNN (Faster R-CNN). Using Motion-Informed-Enhancement, the
YOLO-detector improves the average micro F1-score from 0.49 to 0.71, and the
Faster R-CNN-detector improves the average micro F1-score from 0.32 to 0.56 on
the dataset. Our dataset and proposed method provide a step forward to automate
the time-lapse camera monitoring of flying insects. The dataset is published
on: https://vision.eng.au.dk/mie/
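The abstract describes Motion-Informed-Enhancement only at a high level: motion and color cues are combined so that small insects stand out before the image reaches a CNN detector. A minimal sketch of one way such a preprocessing step could look, using frame differencing between consecutive time-lapse captures (the function name, channel layout, and cue definitions are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def motion_informed_enhancement(prev_rgb, curr_rgb):
    """Combine intensity, motion, and color cues into a 3-channel image.

    Hypothetical sketch of a motion-and-color enhancement step; the
    exact cues used in the paper may differ.
    """
    prev = prev_rgb.astype(np.float32)
    curr = curr_rgb.astype(np.float32)

    # Cue 1: grayscale intensity of the current frame.
    gray = curr.mean(axis=2)

    # Cue 2: motion map, the per-pixel absolute difference between
    # consecutive time-lapse captures, averaged over color channels.
    motion = np.abs(curr - prev).mean(axis=2)

    # Cue 3: color deviation from gray, since insect bodies often
    # differ in color from the surrounding vegetation.
    color = np.abs(curr - gray[..., None]).mean(axis=2)

    # Stack the three cues as the RGB channels of the enhanced image.
    enhanced = np.stack([gray, motion, color], axis=2)
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```

Packing the cues into a standard 3-channel image would let an off-the-shelf YOLO or Faster R-CNN model consume the enhanced input without architectural changes.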
Related papers
- Towards Scalable Insect Monitoring: Ultra-Lightweight CNNs as On-Device Triggers for Insect Camera Traps
Camera traps have emerged as a way to achieve automated, scalable biodiversity monitoring.
The passive infrared (PIR) sensors that trigger camera traps are poorly suited for detecting small, fast-moving ectotherms such as insects.
This study proposes an alternative to the PIR trigger: ultra-lightweight convolutional neural networks running on low-powered hardware.
arXiv Detail & Related papers (2024-11-18T15:46:39Z) - Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve
Aerial Visual Perception?
We present Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record synchronized scenes from different perspectives.
MAVREC consists of around 2.5 hours of industry-standard 2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes.
This makes MAVREC the largest ground and aerial-view dataset, and the fourth largest among all drone-based datasets.
arXiv Detail & Related papers (2023-12-07T18:59:14Z) - Insect-Foundation: A Foundation Model and Large-scale 1M Dataset for Visual Insect Understanding
Current machine vision models require a large volume of data to achieve high performance.
We introduce a novel "Insect-1M" dataset, a game-changing resource poised to revolutionize insect-related foundation model training.
Covering a vast spectrum of insect species, our dataset, including 1 million images with dense identification labels of taxonomy hierarchy and insect descriptions, offers a panoramic view of entomology.
arXiv Detail & Related papers (2023-11-26T06:17:29Z) - Automated Visual Monitoring of Nocturnal Insects with Light-based Camera
Traps
We present two datasets of nocturnal insects, especially moths as a subset of Lepidoptera, photographed in Central Europe.
One dataset, the EU-Moths dataset, was captured manually by citizen scientists and contains species annotations for 200 different species.
The second dataset consists of more than 27,000 images captured on 95 nights.
arXiv Detail & Related papers (2023-07-28T09:31:36Z) - Fewer is More: Efficient Object Detection in Large Aerial Images
This paper presents an Objectness Activation Network (OAN) to help detectors focus on fewer patches but achieve more efficient inference and more accurate results.
Using OAN, all five detectors achieve more than a 30.0% speed-up on three large-scale aerial image datasets.
We extend our OAN to driving-scene object detection and 4K video object detection, boosting the detection speed by 112.1% and 75.0%, respectively.
arXiv Detail & Related papers (2022-12-26T12:49:47Z) - Learning Dynamic View Synthesis With Few RGBD Cameras
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural feature.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z) - Swarm behavior tracking based on a deep vision algorithm
We propose a detection and tracking framework for multi-ant tracking in videos.
Our method runs 6-10 times faster than existing methods for insect tracking.
arXiv Detail & Related papers (2022-04-07T09:32:12Z) - Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred
Objects in Videos
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z) - An Efficient Insect Pest Classification Using Multiple Convolutional
Neural Network Based Models
Insect pest classification is a difficult task because of various kinds, scales, shapes, complex backgrounds in the field, and high appearance similarity among insect species.
We present different convolutional neural network-based models in this work, including attention, feature pyramid, and fine-grained models.
The experimental results show that combining these convolutional neural network-based models can better perform than the state-of-the-art methods on these two datasets.
arXiv Detail & Related papers (2021-07-26T12:53:28Z) - Betrayed by Motion: Camouflaged Object Discovery via Motion Segmentation
We design a computational architecture that discovers camouflaged objects in videos, specifically by exploiting motion information to perform object segmentation.
We collect the first large-scale Moving Camouflaged Animals (MoCA) video dataset, which consists of over 140 clips across a diverse range of animals.
We demonstrate the effectiveness of the proposed model on MoCA, and achieve competitive performance on the unsupervised segmentation protocol on DAVIS2016 by only relying on motion.
arXiv Detail & Related papers (2020-11-23T18:59:08Z) - Fast Motion Understanding with Spatiotemporal Neural Networks and
Dynamic Vision Sensors
This paper presents a Dynamic Vision Sensor (DVS) based system for reasoning about high speed motion.
We consider the case of a robot at rest reacting to a small object approaching at speeds higher than 15 m/s.
We demonstrate our system on a toy dart moving at 23.4 m/s, achieving a 24.73° error in $\theta$, an 18.4 mm average discretized-radius prediction error, and a 25.03% median time-to-collision prediction error.
arXiv Detail & Related papers (2020-11-18T17:55:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.