The iWildCam 2021 Competition Dataset
- URL: http://arxiv.org/abs/2105.03494v1
- Date: Fri, 7 May 2021 20:27:22 GMT
- Title: The iWildCam 2021 Competition Dataset
- Authors: Sara Beery, Arushi Agarwal, Elijah Cole, Vighnesh Birodkar
- Abstract summary: Ecologists use camera traps to monitor animal populations all over the world.
To estimate the abundance of a species, ecologists need to know not just which species were seen, but how many individuals of each species were seen.
We have prepared a challenge where the training data and test data are from different cameras spread across the globe.
- Score: 5.612688040565423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Camera traps enable the automatic collection of large quantities of image
data. Ecologists use camera traps to monitor animal populations all over the
world. In order to estimate the abundance of a species from camera trap data,
ecologists need to know not just which species were seen, but also how many
individuals of each species were seen. Object detection techniques can be used
to find the number of individuals in each image. However, since camera traps
collect images in motion-triggered bursts, simply adding up the number of
detections over all frames is likely to lead to an incorrect estimate.
Overcoming these obstacles may require incorporating spatio-temporal reasoning
or individual re-identification in addition to traditional species detection
and classification.
We have prepared a challenge where the training data and test data are from
different cameras spread across the globe. The set of species seen in each
camera overlap, but are not identical. The challenge is to classify species and
count individual animals across sequences in the test cameras.
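The counting pitfall described in the abstract can be illustrated with a toy example: because a motion-triggered burst captures the same animals in several consecutive frames, summing per-frame detections over-counts, while a simple per-sequence aggregate such as the maximum per-frame count avoids that particular error. The detection counts below are hypothetical, and the max-over-frames rule is only one naive baseline, not the method proposed in the paper:

```python
# Hypothetical per-frame detection counts for one camera-trap sequence:
# the same two animals appear across a 3-image motion-triggered burst.
sequence_detections = [2, 2, 1]

# Naive estimate: summing over frames counts each animal once per frame.
naive_count = sum(sequence_detections)  # over-counts the sequence

# Simple per-sequence baseline: the most animals visible in any one frame.
max_count = max(sequence_detections)

print(naive_count, max_count)  # 5 2
```

The max-over-frames baseline still fails when not all individuals are visible simultaneously, which is why the abstract points toward spatio-temporal reasoning or individual re-identification.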
Related papers
- Multimodal Foundation Models for Zero-shot Animal Species Recognition in Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training them requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
arXiv Detail & Related papers (2023-11-02T08:32:00Z)
- Persistent Animal Identification Leveraging Non-Visual Markers [71.14999745312626]
We aim to locate and provide a unique identifier for each mouse in a cluttered home-cage environment through time.
This is a very challenging problem due to (i) the lack of distinguishing visual features for each mouse, and (ii) the close confines of the scene with constant occlusion.
Our approach achieves 77% accuracy on this animal identification problem, and is able to reject spurious detections when the animals are hidden.
arXiv Detail & Related papers (2021-12-13T17:11:32Z)
- Cross-Camera Feature Prediction for Intra-Camera Supervised Person Re-identification across Distant Scenes [70.30052164401178]
Person re-identification (Re-ID) aims to match person images across non-overlapping camera views.
ICS-DS Re-ID uses cross-camera unpaired data with intra-camera identity labels for training.
A cross-camera feature prediction method is proposed to mine cross-camera self-supervision information.
Joint learning of global-level and local-level features forms a global-local cross-camera feature prediction scheme.
arXiv Detail & Related papers (2021-07-29T11:27:50Z)
- Florida Wildlife Camera Trap Dataset [48.99466876948454]
We introduce a challenging wildlife camera trap classification dataset collected from two different locations in Southwestern Florida.
The dataset consists of 104,495 images featuring visually similar species, varying illumination conditions, a skewed class distribution, and samples of endangered species.
arXiv Detail & Related papers (2021-06-23T18:53:15Z)
- Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D [100.93808824091258]
We propose a new end-to-end architecture that directly extracts a bird's-eye-view representation of a scene given image data from an arbitrary number of cameras.
Our approach is to "lift" each image individually into a frustum of features for each camera, then "splat" all frustums into a bird's-eye-view grid.
We show that the representations inferred by our model enable interpretable end-to-end motion planning by "shooting" template trajectories into a bird's-eye-view cost map output by our network.
arXiv Detail & Related papers (2020-08-13T06:29:01Z)
- WhoAmI: An Automatic Tool for Visual Recognition of Tiger and Leopard Individuals in the Wild [3.1708876837195157]
We develop automatic algorithms that detect animals, identify their species, and recognize individual animals of two species.
We demonstrate the effectiveness of our approach on a data set of camera-trap images recorded in the jungles of Southern India.
arXiv Detail & Related papers (2020-06-17T16:17:46Z)
- Automatic Detection and Recognition of Individuals in Patterned Species [4.163860911052052]
We develop a framework for automatic detection and recognition of individuals in different patterned species.
We use the recently proposed Faster-RCNN object detection framework to efficiently detect animals in images.
We evaluate our recognition system on zebra and jaguar images to show generalization to other patterned species.
arXiv Detail & Related papers (2020-05-06T15:29:21Z)
- Sequence Information Channel Concatenation for Improving Camera Trap Image Burst Classification [1.94742788320879]
Camera Traps are extensively used to observe wildlife in their natural habitat without disturbing the ecosystem.
Currently, a massive number of such camera traps have been deployed at various ecological conservation areas around the world, collecting data for decades.
Existing systems detect whether images contain animals by classifying each image in isolation.
We show that concatenating masks containing sequence information with the images of the 3-image burst across channels improves the ROC AUC by 20% on a test set from unseen camera sites.
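The channel-concatenation idea can be sketched with NumPy: the three RGB frames of a burst and a single-channel sequence-information mask are stacked along the channel axis, producing a 10-channel input for a classifier. The frame size and the random mask here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

H, W = 64, 64  # illustrative frame size (assumption)

# Three RGB frames from one motion-triggered 3-image burst, values in [0, 1].
burst = [np.random.rand(H, W, 3) for _ in range(3)]

# Hypothetical sequence-information mask, e.g. a motion/foreground mask
# computed across the burst (random here purely for illustration).
mask = np.random.rand(H, W, 1)

# Concatenate across channels: 3 frames x 3 channels + 1 mask = 10 channels.
x = np.concatenate(burst + [mask], axis=-1)
print(x.shape)  # (64, 64, 10)
```

The resulting 10-channel array would then be fed to a convolutional classifier whose first layer accepts 10 input channels instead of the usual 3.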
arXiv Detail & Related papers (2020-04-30T21:47:14Z)
- The iWildCam 2020 Competition Dataset [9.537627294351292]
Camera traps enable the automatic collection of large quantities of image data.
We have recently been making strides towards automatic species classification in camera trap images.
We have prepared a challenge where the training data and test data are from different cameras spread across the globe.
The challenge is to correctly classify species in the test camera traps.
arXiv Detail & Related papers (2020-04-21T23:25:13Z)
- Automatic image-based identification and biomass estimation of invertebrates [70.08255822611812]
Time-consuming sorting and identification of taxa pose strong limitations on how many insect samples can be processed.
We propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology.
We use state-of-the-art Resnet-50 and InceptionV3 CNNs for the classification task.
arXiv Detail & Related papers (2020-02-05T21:38:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.