WhoAmI: An Automatic Tool for Visual Recognition of Tiger and Leopard
Individuals in the Wild
- URL: http://arxiv.org/abs/2006.09962v1
- Date: Wed, 17 Jun 2020 16:17:46 GMT
- Title: WhoAmI: An Automatic Tool for Visual Recognition of Tiger and Leopard
Individuals in the Wild
- Authors: Rita Pucci, Jitendra Shankaraiah, Devcharan Jathanna, Ullas Karanth,
and Kartic Subr
- Abstract summary: We develop automatic algorithms that detect animals, identify their species, and recognize individual animals of two species.
We demonstrate the effectiveness of our approach on a data set of camera-trap images recorded in the jungles of Southern India.
- Score: 3.1708876837195157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photographs of wild animals in their natural habitats can be recorded
unobtrusively via cameras that are triggered by motion nearby. The installation
of such camera traps is becoming increasingly common across the world. Although
this is a convenient source of invaluable data for biologists, ecologists and
conservationists, the arduous task of poring through potentially millions of
pictures each season introduces prohibitive costs and frustrating delays. We
develop automatic algorithms that detect animals, identify their species, and
recognize individual animals of two species. We propose the first
fully automatic tool that can recognize specific individuals of leopards and
tigers by their characteristic body markings. We adopt a supervised
machine-learning approach in which a Deep Convolutional Neural Network (DCNN)
is trained on several instances of manually labelled images for each of the
three classification tasks. We
demonstrate the effectiveness of our approach on a data set of camera-trap
images recorded in the jungles of Southern India.
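The abstract describes a cascade of three classification tasks: detect whether an image contains an animal, identify the species, and recognize the individual (for tigers and leopards only). A minimal sketch of such a staged pipeline, with hypothetical stub functions standing in for the trained DCNNs, might look like:

```python
# Hypothetical sketch of the three-stage pipeline the abstract describes.
# The stub classifiers below stand in for trained DCNNs; a real system
# would run network inference on the camera-trap image at each stage.

def detect_animal(image):
    # Stub: stage 1, animal vs. empty frame.
    return image.get("has_animal", False)

def classify_species(image):
    # Stub: stage 2, species identification.
    return image.get("species", "unknown")

def identify_individual(image):
    # Stub: stage 3, individual recognition from body markings.
    return image.get("individual_id", "unidentified")

def whoami_pipeline(image):
    """Run the staged classification; later stages run only if earlier ones succeed."""
    if not detect_animal(image):
        return {"animal": False}
    species = classify_species(image)
    result = {"animal": True, "species": species}
    # Individual recognition is only defined for the two patterned species.
    if species in ("tiger", "leopard"):
        result["individual"] = identify_individual(image)
    return result
```

The staging mirrors the three classification tasks in the abstract; the dictionary-based images and stub logic are illustrative placeholders, not the authors' implementation.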
Related papers
- Learning the 3D Fauna of the Web [70.01196719128912]
We develop 3D-Fauna, an approach that learns a pan-category deformable 3D animal model for more than 100 animal species jointly.
One crucial bottleneck of modeling animals is the limited availability of training data.
We show that prior category-specific attempts fail to generalize to rare species with limited training images.
arXiv Detail & Related papers (2024-01-04T18:32:48Z) - Multimodal Foundation Models for Zero-shot Animal Species Recognition in
Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training such models requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
arXiv Detail & Related papers (2023-11-02T08:32:00Z) - Florida Wildlife Camera Trap Dataset [48.99466876948454]
We introduce a challenging wildlife camera trap classification dataset collected from two different locations in Southwestern Florida.
The dataset consists of 104,495 images featuring visually similar species, varying illumination conditions, skewed class distribution, and including samples of endangered species.
arXiv Detail & Related papers (2021-06-23T18:53:15Z) - The iWildCam 2021 Competition Dataset [5.612688040565423]
Ecologists use camera traps to monitor animal populations all over the world.
To estimate the abundance of a species, ecologists need to know not just which species were seen, but how many individuals of each species were seen.
We have prepared a challenge where the training data and test data are from different cameras spread across the globe.
arXiv Detail & Related papers (2021-05-07T20:27:22Z) - AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs
in the Wild [51.35013619649463]
We present an extensive dataset of free-running cheetahs in the wild, called AcinoSet.
The dataset contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files and 7,588 human-annotated frames.
The resulting 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data are also provided.
arXiv Detail & Related papers (2021-03-24T15:54:11Z) - Exploiting Depth Information for Wildlife Monitoring [0.0]
We propose an automated camera trap-based approach to detect and identify animals using depth estimation.
To detect and identify individual animals, we propose a novel method, D-Mask R-CNN, for instance segmentation.
An experimental evaluation shows the benefit of the additional depth estimation in terms of improved average precision scores for animal detection.
arXiv Detail & Related papers (2021-02-10T18:10:34Z) - Automatic Detection and Recognition of Individuals in Patterned Species [4.163860911052052]
We develop a framework for automatic detection and recognition of individuals in different patterned species.
We use the recently proposed Faster-RCNN object detection framework to efficiently detect animals in images.
We evaluate our recognition system on zebra and jaguar images to show generalization to other patterned species.
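Individual recognition in patterned species is commonly posed as matching a query image's pattern features against a gallery of known individuals. A toy nearest-neighbour sketch of that idea, with hypothetical feature vectors and Euclidean distance (a real system would extract features with a CNN), could look like:

```python
import math

# Toy sketch: individual recognition as nearest-neighbour search over
# pattern-feature vectors. The identities and 3-dimensional vectors below
# are hypothetical placeholders, not data from any of the papers listed.

GALLERY = {
    "zebra_01": [0.9, 0.1, 0.4],
    "zebra_02": [0.2, 0.8, 0.5],
    "jaguar_07": [0.5, 0.5, 0.9],
}

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_individual(query, gallery=GALLERY):
    """Return the gallery identity whose feature vector is closest to the query."""
    return min(gallery, key=lambda name: euclidean(query, gallery[name]))
```

In practice the gallery holds one or more embeddings per known individual, and a distance threshold decides when a query is declared a new, unseen animal.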
arXiv Detail & Related papers (2020-05-06T15:29:21Z) - Sequence Information Channel Concatenation for Improving Camera Trap
Image Burst Classification [1.94742788320879]
Camera Traps are extensively used to observe wildlife in their natural habitat without disturbing the ecosystem.
Currently, a massive number of such camera traps have been deployed at various ecological conservation areas around the world, collecting data for decades.
Existing systems perform classification to detect if images contain animals by considering a single image.
We show that concatenating masks containing sequence information and the images from the 3-image burst across channels improves the ROC AUC by 20% on a test set from unseen camera sites.
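The channel-concatenation idea can be sketched in a few lines: stack a motion mask with the three burst frames along the channel axis, so a downstream classifier sees sequence information alongside appearance. A toy version using plain nested lists (the 2x2 grayscale frames are hypothetical placeholders) might be:

```python
# Toy sketch of sequence-channel concatenation: given a 3-image burst
# (each frame H x W, single channel) and a motion mask of the same size,
# stack them along the channel axis to form a (C, H, W) input with C = 4.

def concat_burst_channels(frames, mask):
    """Return [mask, frame0, frame1, frame2] stacked as channels."""
    if len(frames) != 3:
        raise ValueError("expected a 3-image burst")
    h, w = len(mask), len(mask[0])
    for f in frames:
        if len(f) != h or any(len(row) != w for row in f):
            raise ValueError("all channels must share the same H x W")
    return [mask] + list(frames)

# Hypothetical 2x2 frames and a binary motion mask.
burst = [[[0.1, 0.2], [0.3, 0.4]],
         [[0.2, 0.3], [0.4, 0.5]],
         [[0.3, 0.4], [0.5, 0.6]]]
motion_mask = [[0.0, 1.0], [1.0, 0.0]]
stacked = concat_burst_channels(burst, motion_mask)  # 4 channels
```

The same stacking is one line with an array library (e.g. concatenation along the channel axis); the list version here just makes the shape bookkeeping explicit.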
arXiv Detail & Related papers (2020-04-30T21:47:14Z) - Transferring Dense Pose to Proximal Animal Classes [83.84439508978126]
We show that it is possible to transfer the knowledge existing in dense pose recognition for humans, as well as in more general object detectors and segmenters, to the problem of dense pose recognition in other classes.
We do this by establishing a DensePose model for the new animal which is also geometrically aligned to humans.
We also introduce two benchmark datasets labelled in the manner of DensePose for the class chimpanzee and use them to evaluate our approach.
arXiv Detail & Related papers (2020-02-28T21:43:53Z) - Automatic image-based identification and biomass estimation of
invertebrates [70.08255822611812]
Time-consuming sorting and identification of taxa pose strong limitations on how many insect samples can be processed.
We propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology.
We use state-of-the-art ResNet-50 and InceptionV3 CNNs for the classification task.
arXiv Detail & Related papers (2020-02-05T21:38:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.