Removing Human Bottlenecks in Bird Classification Using Camera Trap
Images and Deep Learning
- URL: http://arxiv.org/abs/2305.02097v1
- Date: Wed, 3 May 2023 13:04:39 GMT
- Authors: Carl Chalmers, Paul Fergus, Serge Wich, Steven N Longmore, Naomi
Davies Walsh, Philip Stephens, Chris Sutherland, Naomi Matthews, Jens Mudde,
Amira Nuseibeh
- Abstract summary: Monitoring bird populations is essential for ecologists.
Technology such as camera traps, acoustic monitors and drones provide methods for non-invasive monitoring.
There are two main problems with using camera traps for monitoring: a) cameras generate many images, making it difficult to process and analyse the data in a timely manner; and b) the high proportion of false positives hinders the processing and analysis for reporting.
In this paper, we outline an approach for overcoming these issues by utilising deep learning for real-time classification of bird species.
- Score: 0.14746127876003345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Birds are important indicators for monitoring both biodiversity and habitat
health; they also play a crucial role in ecosystem management. Decline in bird
populations can result in reduced ecosystem services, including seed
dispersal, pollination and pest control. Accurate and long-term monitoring of
birds to identify species of concern while measuring the success of
conservation interventions is essential for ecologists. However, monitoring is
time consuming, costly and often difficult to manage over long durations and at
meaningfully large spatial scales. Technology such as camera traps, acoustic
monitors and drones provide methods for non-invasive monitoring. There are two
main problems with using camera traps for monitoring: a) cameras generate many
images, making it difficult to process and analyse the data in a timely manner;
and b) the high proportion of false positives hinders the processing and
analysis for reporting. In this paper, we outline an approach for overcoming
these issues by utilising deep learning for real-time classification of bird
species and automated removal of false positives in camera trap data. Images
are classified in real-time using a Faster-RCNN architecture. Images are
transmitted over 3/4G cameras and processed using Graphics Processing Units
(GPUs) to provide conservationists with key detection metrics, thereby
removing the requirement for manual observations. Our models achieved an
average sensitivity of 88.79%, a specificity of 98.16% and an accuracy of 96.71%.
This demonstrates the effectiveness of using deep learning for automatic bird
monitoring.
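The three reported figures follow directly from a binary confusion matrix. As a minimal sketch (the counts below are illustrative placeholders, not the paper's data):

```python
# Sensitivity, specificity and accuracy from a binary confusion matrix.
# The counts used in the example call are illustrative, not from the paper.

def classification_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, accuracy) as fractions."""
    sensitivity = tp / (tp + fn)                  # true-positive rate (recall)
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall fraction correct
    return sensitivity, specificity, accuracy

sens, spec, acc = classification_metrics(tp=80, fp=5, tn=200, fn=10)
print(f"sensitivity={sens:.4f} specificity={spec:.4f} accuracy={acc:.4f}")
```

Note that with a high proportion of empty (true-negative) frames, accuracy can look strong even when sensitivity is modest, which is why the paper reports all three metrics.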
Related papers
- Multimodal Foundation Models for Zero-shot Animal Species Recognition in Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training such models requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
arXiv Detail & Related papers (2023-11-02T08:32:00Z)
- Deep object detection for waterbird monitoring using aerial imagery [56.1262568293658]
In this work, we present a deep learning pipeline that can be used to precisely detect, count, and monitor waterbirds using aerial imagery collected by a commercial drone.
By utilizing convolutional neural network-based object detectors, we show that we can detect 16 classes of waterbird species that are commonly found in colonial nesting islands along the Texas coast.
arXiv Detail & Related papers (2022-10-10T17:37:56Z)
- Florida Wildlife Camera Trap Dataset [48.99466876948454]
We introduce a challenging wildlife camera trap classification dataset collected from two different locations in Southwestern Florida.
The dataset consists of 104,495 images featuring visually similar species, varying illumination conditions, skewed class distribution, and including samples of endangered species.
arXiv Detail & Related papers (2021-06-23T18:53:15Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Overcoming the Distance Estimation Bottleneck in Camera Trap Distance Sampling [0.0]
Estimating animal abundance is of critical importance to assess, for example, the consequences of land-use change and invasive species on species composition.
This study proposes a completely automatized workflow utilizing state-of-the-art methods of image processing and pattern recognition.
arXiv Detail & Related papers (2021-05-10T10:17:34Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- An explainable deep vision system for animal classification and detection in trail-camera images with automatic post-deployment retraining [0.0]
This paper introduces an automated vision system for animal detection in trail-camera images taken from a field under the administration of the Texas Parks and Wildlife Department.
We implement a two-stage deep convolutional neural network pipeline to find animal-containing images in the first stage and then process these images to detect birds in the second stage.
The animal classification system classifies animal images with overall 93% sensitivity and 96% specificity. The bird detection system achieves better than 93% sensitivity, 92% specificity, and 68% average Intersection-over-Union rate.
arXiv Detail & Related papers (2020-10-22T06:29:55Z)
- Unifying data for fine-grained visual species classification [15.14767769034929]
We present an initial deep convolutional neural network model, trained on 2.9M images across 465 fine-grained species.
The long-term goal is to enable scientists to make conservation recommendations from near real-time analysis of species abundance and population health.
arXiv Detail & Related papers (2020-09-24T01:04:18Z)
- Automatic Detection and Recognition of Individuals in Patterned Species [4.163860911052052]
We develop a framework for automatic detection and recognition of individuals in different patterned species.
We use the recently proposed Faster-RCNN object detection framework to efficiently detect animals in images.
We evaluate our recognition system on zebra and jaguar images to show generalization to other patterned species.
arXiv Detail & Related papers (2020-05-06T15:29:21Z)
- Automatic image-based identification and biomass estimation of invertebrates [70.08255822611812]
Time-consuming sorting and identification of taxa pose strong limitations on how many insect samples can be processed.
We propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology.
We use state-of-the-art Resnet-50 and InceptionV3 CNNs for the classification task.
arXiv Detail & Related papers (2020-02-05T21:38:57Z)
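Several of the detection papers above report an Intersection-over-Union (IoU) rate. As a minimal sketch of how IoU is computed for two axis-aligned boxes (coordinates below are illustrative):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partially overlapping boxes
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice), which is how per-detection averages such as the 68% figure above are aggregated.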
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.