An explainable deep vision system for animal classification and
detection in trail-camera images with automatic post-deployment retraining
- URL: http://arxiv.org/abs/2010.11472v3
- Date: Tue, 8 Dec 2020 02:17:00 GMT
- Title: An explainable deep vision system for animal classification and
detection in trail-camera images with automatic post-deployment retraining
- Authors: Golnaz Moallem (1), Don D. Pathirage (1), Joel Reznick (1), James
Gallagher (2), Hamed Sari-Sarraf (1) ((1) Applied Vision Lab Texas Tech
University (2) Texas Parks and Wildlife Department)
- Abstract summary: This paper introduces an automated vision system for animal detection in trail-camera images taken from a field under the administration of the Texas Parks and Wildlife Department.
We implement a two-stage deep convolutional neural network pipeline to find animal-containing images in the first stage and then process these images to detect birds in the second stage.
The animal classification system classifies animal images with overall 93% sensitivity and 96% specificity. The bird detection system achieves better than 93% sensitivity, 92% specificity, and 68% average Intersection-over-Union rate.
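The sensitivity, specificity, and Intersection-over-Union figures quoted above are standard metrics; as a generic illustration (not the authors' evaluation code, and with function names of our own choosing), they can be computed like this:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true positive rate (recall)
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity

def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For detection, a predicted box is typically counted as correct when its IoU with a ground-truth box exceeds a chosen threshold.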
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces an automated vision system for animal detection in
trail-camera images taken from a field under the administration of the Texas
Parks and Wildlife Department. As traditional wildlife counting techniques are
intrusive and labor intensive to conduct, trail-camera imaging is a
comparatively non-intrusive method for capturing wildlife activity. However,
given the large volume of images produced from trail-cameras, manual analysis
of the images remains time-consuming and inefficient. We implemented a
two-stage deep convolutional neural network pipeline to find animal-containing
images in the first stage and then process these images to detect birds in the
second stage. The animal classification system classifies animal images with
overall 93% sensitivity and 96% specificity. The bird detection system achieves
better than 93% sensitivity, 92% specificity, and 68% average
Intersection-over-Union rate. The entire pipeline processes an image in less
than 0.5 seconds, as opposed to an average of 30 seconds for a human labeler. We
also addressed post-deployment issues related to data drift for the animal
classification system as image features vary with seasonal changes. This system
utilizes an automatic retraining algorithm to detect data drift and update the
system. We introduce a novel technique for detecting drifted images and
triggering the retraining procedure. Two statistical experiments are also
presented to explain the prediction behavior of the animal classification
system. These experiments investigate the cues that steer the system towards a
particular decision. Statistical hypothesis testing demonstrates that the
presence of an animal in the input image significantly contributes to the
system's decisions.
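The abstract describes the drift detector only at a high level. As a minimal sketch of the general idea (compare a summary statistic of incoming images against the training distribution and trigger retraining when a two-sample test finds a large gap), one might write the following; the Kolmogorov-Smirnov statistic, the threshold value, and all names here are our assumptions for illustration, not the authors' actual method:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    values = sorted(set(a) | set(b))

    def ecdf(s, x):
        # Fraction of sample s that is <= x.
        return sum(v <= x for v in s) / len(s)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in values)

def should_retrain(train_features, incoming_features, threshold=0.3):
    """Flag drift (hypothetical threshold) when the per-image feature
    distributions diverge past the threshold, triggering retraining."""
    return ks_statistic(train_features, incoming_features) > threshold
```

In a deployment loop, `train_features` would hold a scalar image statistic (e.g. mean brightness) over the training set, and `incoming_features` the same statistic over a recent window of field images; a True result would queue the window for labeling and retraining.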
Related papers
- Metadata augmented deep neural networks for wild animal classification [4.466592229376465]
This study introduces a novel approach that enhances wild animal classification by combining specific metadata with image data.
Using a dataset focused on the Norwegian climate, our models show an accuracy increase from 98.4% to 98.9% compared to existing methods.
arXiv Detail & Related papers (2024-09-07T13:36:26Z)
- Multimodal Foundation Models for Zero-shot Animal Species Recognition in Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training such techniques requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
arXiv Detail & Related papers (2023-11-02T08:32:00Z)
- TempNet: Temporal Attention Towards the Detection of Animal Behaviour in Videos [63.85815474157357]
We propose an efficient computer vision- and deep learning-based method for the detection of biological behaviours in videos.
TempNet uses an encoder bridge and residual blocks to maintain model performance with a two-staged, spatial, then temporal, encoder.
We demonstrate its application to the detection of sablefish (Anoplopoma fimbria) startle events.
arXiv Detail & Related papers (2022-11-17T23:55:12Z)
- Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z)
- Florida Wildlife Camera Trap Dataset [48.99466876948454]
We introduce a challenging wildlife camera trap classification dataset collected from two different locations in Southwestern Florida.
The dataset consists of 104,495 images featuring visually similar species, varying illumination conditions, skewed class distribution, and including samples of endangered species.
arXiv Detail & Related papers (2021-06-23T18:53:15Z)
- Exploiting Depth Information for Wildlife Monitoring [0.0]
We propose an automated camera trap-based approach to detect and identify animals using depth estimation.
To detect and identify individual animals, we propose a novel method D-Mask R-CNN for the so-called instance segmentation.
An experimental evaluation shows the benefit of the additional depth estimation in terms of improved average precision scores of the animal detection.
arXiv Detail & Related papers (2021-02-10T18:10:34Z)
- WhoAmI: An Automatic Tool for Visual Recognition of Tiger and Leopard Individuals in the Wild [3.1708876837195157]
We develop automatic algorithms that are able to detect animals, identify the species of animals and to recognize individual animals for two species.
We demonstrate the effectiveness of our approach on a data set of camera-trap images recorded in the jungles of Southern India.
arXiv Detail & Related papers (2020-06-17T16:17:46Z)
- Automatic Detection and Recognition of Individuals in Patterned Species [4.163860911052052]
We develop a framework for automatic detection and recognition of individuals in different patterned species.
We use the recently proposed Faster-RCNN object detection framework to efficiently detect animals in images.
We evaluate our recognition system on zebra and jaguar images to show generalization to other patterned species.
arXiv Detail & Related papers (2020-05-06T15:29:21Z)
- Remote Sensing Image Scene Classification Meets Deep Learning: Challenges, Methods, Benchmarks, and Opportunities [81.29441139530844]
This paper provides a systematic survey of deep learning methods for remote sensing image scene classification by covering more than 160 papers.
We discuss the main challenges of remote sensing image scene classification.
We introduce the benchmarks used for remote sensing image scene classification and summarize the performance of more than two dozen representative algorithms.
arXiv Detail & Related papers (2020-05-03T14:18:00Z)
- Sequence Information Channel Concatenation for Improving Camera Trap Image Burst Classification [1.94742788320879]
Camera Traps are extensively used to observe wildlife in their natural habitat without disturbing the ecosystem.
Currently, a massive number of such camera traps have been deployed at various ecological conservation areas around the world, collecting data for decades.
Existing systems perform classification to detect if images contain animals by considering a single image.
We show that concatenating masks containing sequence information and the images from the 3-image-burst across channels improves the ROC AUC by 20% on a test-set from unseen camera-sites.
arXiv Detail & Related papers (2020-04-30T21:47:14Z)
- Automatic image-based identification and biomass estimation of invertebrates [70.08255822611812]
Time-consuming sorting and identification of taxa pose strong limitations on how many insect samples can be processed.
We propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology.
We use state-of-the-art Resnet-50 and InceptionV3 CNNs for the classification task.
arXiv Detail & Related papers (2020-02-05T21:38:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.