Application of deep learning to camera trap data for ecologists in
planning / engineering -- Can captivity imagery train a model which
generalises to the wild?
- URL: http://arxiv.org/abs/2111.12805v1
- Date: Wed, 24 Nov 2021 21:29:14 GMT
- Authors: Ryan Curry and Cameron Trotter and Andrew Stephen McGough
- Abstract summary: Deep learning models can be trained to automatically detect and classify animals within camera trap imagery.
This research proposes using images of rare animals in captivity to generate the training dataset.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding the abundance of a species is the first step towards
understanding both its long-term sustainability and the impact that we may be
having upon it. Ecologists use camera traps to remotely survey for the presence
of specific animal species. Previous studies have shown that deep learning
models can be trained to automatically detect and classify animals within
camera trap imagery with high levels of confidence. However, the ability to
train these models is reliant upon having enough high-quality training data.
What happens when the animal is rare or no datasets exist? This research
proposes using images of rare animals in captivity (focusing on the Scottish
wildcat) to generate the training dataset. We explore
the challenges associated with generalising a model trained on captivity data
when applied to data collected in the wild. The research is contextualised by
the needs of ecologists in planning/engineering. Following precedents from
other research, this project establishes an ensemble of object detection,
image segmentation and image classification models, which are then tested
using different image manipulation and class structuring techniques to
encourage model generalisation. The research concludes, in the context of the
Scottish wildcat, that models trained on captivity imagery cannot be
generalised to wild camera trap imagery using existing techniques. However,
the final two-class model (Wildcat vs Not Wildcat) achieved an overall
accuracy of 81.6% and a Wildcat accuracy of 54.8% on a test set in which only
1% of images contained a wildcat, suggesting that using captivity images is
feasible with further research. This is the first research to attempt to
generate a training set from captivity data, and the first to explore the
development of such models in the context of ecologists in
planning/engineering.
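The reported scores illustrate how overall accuracy and per-class (Wildcat) accuracy can diverge sharply on a heavily imbalanced test set. A minimal sketch of the two metrics, using synthetic labels (not the paper's data) chosen to roughly echo the 1%-wildcat class balance:

```python
# Sketch: overall accuracy vs per-class (Wildcat) accuracy on an imbalanced
# two-class test set, as in the paper's Wildcat vs Not Wildcat evaluation
# where only ~1% of images contain a wildcat.
# The labels below are synthetic illustrative data, not the paper's results.

def accuracy(y_true, y_pred):
    """Fraction of all predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def class_accuracy(y_true, y_pred, cls):
    """Per-class accuracy (recall): correctness over images of one class only."""
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t == cls]
    return sum(t == p for t, p in pairs) / len(pairs)

# Synthetic 1000-image test set: 10 wildcats (1%), 990 non-wildcats.
y_true = ["wildcat"] * 10 + ["not_wildcat"] * 990
# A hypothetical model that finds 5 of the 10 wildcats and mislabels
# 180 of the 990 non-wildcats as wildcats.
y_pred = (["wildcat"] * 5 + ["not_wildcat"] * 5
          + ["wildcat"] * 180 + ["not_wildcat"] * 810)

overall = accuracy(y_true, y_pred)                    # (5 + 810) / 1000 = 0.815
wildcat = class_accuracy(y_true, y_pred, "wildcat")   # 5 / 10 = 0.5
print(f"overall accuracy: {overall:.1%}, wildcat accuracy: {wildcat:.1%}")
```

Because non-wildcat images dominate the test set, a model can score high overall accuracy while detecting only about half of the actual wildcats, which is why the paper reports both figures.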
Related papers
- Metadata augmented deep neural networks for wild animal classification [4.466592229376465]
This study introduces a novel approach that enhances wild animal classification by combining specific metadata with image data.
Using a dataset focused on the Norwegian climate, our models show an accuracy increase from 98.4% to 98.9% compared to existing methods.
arXiv Detail & Related papers (2024-09-07T13:36:26Z)
- Learning the 3D Fauna of the Web [70.01196719128912]
We develop 3D-Fauna, an approach that learns a pan-category deformable 3D animal model for more than 100 animal species jointly.
One crucial bottleneck of modeling animals is the limited availability of training data.
We show that prior category-specific attempts fail to generalize to rare species with limited training images.
arXiv Detail & Related papers (2024-01-04T18:32:48Z)
- Multimodal Foundation Models for Zero-shot Animal Species Recognition in Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training such techniques requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
arXiv Detail & Related papers (2023-11-02T08:32:00Z)
- MagicPony: Learning Articulated 3D Animals in the Wild [81.63322697335228]
We present a new method, dubbed MagicPony, that learns this predictor purely from in-the-wild single-view images of the object category.
At its core is an implicit-explicit representation of articulated shape and appearance, combining the strengths of neural fields and meshes.
arXiv Detail & Related papers (2022-11-22T18:59:31Z)
- Florida Wildlife Camera Trap Dataset [48.99466876948454]
We introduce a challenging wildlife camera trap classification dataset collected from two different locations in Southwestern Florida.
The dataset consists of 104,495 images featuring visually similar species, varying illumination conditions, skewed class distribution, and including samples of endangered species.
arXiv Detail & Related papers (2021-06-23T18:53:15Z)
- How many images do I need? Understanding how sample size per class affects deep learning model performance metrics for balanced designs in autonomous wildlife monitoring [0.0]
We explore in depth the issues of deep learning model performance for progressively increasing per class (species) sample sizes.
We provide ecologists with an approximation formula to estimate a priori how many images per animal species they need for a given accuracy level.
arXiv Detail & Related papers (2020-10-16T06:28:35Z)
- WhoAmI: An Automatic Tool for Visual Recognition of Tiger and Leopard Individuals in the Wild [3.1708876837195157]
We develop automatic algorithms that are able to detect animals, identify the species of animals and to recognize individual animals for two species.
We demonstrate the effectiveness of our approach on a data set of camera-trap images recorded in the jungles of Southern India.
arXiv Detail & Related papers (2020-06-17T16:17:46Z)
- Transferring Dense Pose to Proximal Animal Classes [83.84439508978126]
We show that it is possible to transfer the knowledge existing in dense pose recognition for humans, as well as in more general object detectors and segmenters, to the problem of dense pose recognition in other classes.
We do this by establishing a DensePose model for the new animal which is also geometrically aligned to humans.
We also introduce two benchmark datasets labelled in the manner of DensePose for the class chimpanzee and use them to evaluate our approach.
arXiv Detail & Related papers (2020-02-28T21:43:53Z)
- Deformation-aware Unpaired Image Translation for Pose Estimation on Laboratory Animals [56.65062746564091]
We aim to capture the pose of neuroscience model organisms, without using any manual supervision, to study how neural circuits orchestrate behaviour.
Our key contribution is the explicit and independent modeling of appearance, shape and poses in an unpaired image translation framework.
We demonstrate improved pose estimation accuracy on Drosophila melanogaster (fruit fly), Caenorhabditis elegans (worm) and Danio rerio (zebrafish).
arXiv Detail & Related papers (2020-01-23T15:34:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.