The iWildCam 2020 Competition Dataset
- URL: http://arxiv.org/abs/2004.10340v1
- Date: Tue, 21 Apr 2020 23:25:13 GMT
- Title: The iWildCam 2020 Competition Dataset
- Authors: Sara Beery, Elijah Cole, Arvi Gjoka
- Abstract summary: Camera traps enable the automatic collection of large quantities of image data.
We have recently been making strides towards automatic species classification in camera trap images.
We have prepared a challenge where the training data and test data are from different cameras spread across the globe.
The challenge is to correctly classify species in the test camera traps.
- Score: 9.537627294351292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Camera traps enable the automatic collection of large quantities of image
data. Biologists all over the world use camera traps to monitor animal
populations. We have recently been making strides towards automatic species
classification in camera trap images. However, as we try to expand the
geographic scope of these models we are faced with an interesting question: how
do we train models that perform well on new (unseen during training) camera
trap locations? Can we leverage data from other modalities, such as citizen
science data and remote sensing data? In order to tackle this problem, we have
prepared a challenge where the training data and test data are from different
cameras spread across the globe. For each camera, we provide a series of remote
sensing imagery that is tied to the location of the camera. We also provide
citizen science imagery from the set of species seen in our data. The challenge
is to correctly classify species in the test camera traps.
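The location-disjoint evaluation described above is the core technical detail of the challenge. As a minimal sketch, assuming the annotations follow the COCO camera-traps style used by the iWildCam releases (an `images` list with a per-image `location` field and an `annotations` list with `category_id` labels; the file name and field names below are assumptions to verify against the actual release), a held-out-camera validation split can be built like this:

```python
import json
import random
from collections import defaultdict

# Minimal sketch: build a location-disjoint train/validation split from a
# COCO-style camera-traps annotation file. The file name and the field names
# ("location", "category_id") follow iWildCam conventions but should be
# checked against the actual release.
with open("iwildcam2020_train_annotations.json") as f:
    data = json.load(f)

labels = {a["image_id"]: a["category_id"] for a in data["annotations"]}
images_by_location = defaultdict(list)
for img in data["images"]:
    images_by_location[img["location"]].append(img["id"])

# Hold out entire cameras, not random images, so local validation mimics the
# competition's unseen-location test conditions.
locations = sorted(images_by_location)
random.seed(0)
random.shuffle(locations)
n_val = max(1, len(locations) // 10)
val_locations, train_locations = locations[:n_val], locations[n_val:]

train_ids = [i for loc in train_locations for i in images_by_location[loc]]
val_ids = [i for loc in val_locations for i in images_by_location[loc]]
val_species = {labels[i] for i in val_ids if i in labels}
print(f"{len(train_ids)} train images from {len(train_locations)} cameras; "
      f"{len(val_ids)} val images ({len(val_species)} species) from {n_val} cameras")
```

Holding out whole cameras rather than random images is what makes a local validation score track performance on the competition's unseen-location test cameras.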
Related papers
- Redundancy-Aware Camera Selection for Indoor Scene Neural Rendering [54.468355408388675]
We build a similarity matrix that incorporates both the spatial diversity of the cameras and the semantic variation of the images.
We apply a diversity-based sampling algorithm to optimize the camera selection.
We also develop a new dataset, IndoorTraj, which includes long and complex camera movements captured by humans in virtual indoor environments.
arXiv Detail & Related papers (2024-09-11T08:36:49Z)
- Low-power, Continuous Remote Behavioral Localization with Event Cameras [9.107129038623242]
Event cameras offer unique advantages for battery-dependent remote monitoring.
We use this sensor to quantify a behavior in Chinstrap penguins called ecstatic display.
Experiments show that the event cameras' natural response to motion is effective for continuous behavior monitoring and detection.
arXiv Detail & Related papers (2023-12-06T14:58:03Z)
- Multimodal Foundation Models for Zero-shot Animal Species Recognition in Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training these techniques requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
arXiv Detail & Related papers (2023-11-02T08:32:00Z)
- Deep Learning for Camera Calibration and Beyond: A Survey [100.75060862015945]
Camera calibration involves estimating camera parameters to infer geometric features from captured sequences.
Recent efforts show that learning-based solutions have the potential to replace the repetitive work of manual calibration.
arXiv Detail & Related papers (2023-03-19T04:00:05Z)
- Cross-Camera Feature Prediction for Intra-Camera Supervised Person Re-identification across Distant Scenes [70.30052164401178]
Person re-identification (Re-ID) aims to match person images across non-overlapping camera views.
ICS-DS Re-ID uses cross-camera unpaired data with intra-camera identity labels for training.
A cross-camera feature prediction method mines cross-camera self-supervision information.
Joint learning of global-level and local-level features forms a global-local cross-camera feature prediction scheme.
arXiv Detail & Related papers (2021-07-29T11:27:50Z)
- Florida Wildlife Camera Trap Dataset [48.99466876948454]
We introduce a challenging wildlife camera trap classification dataset collected from two different locations in Southwestern Florida.
The dataset consists of 104,495 images featuring visually similar species, varying illumination conditions, a skewed class distribution, and samples of endangered species.
arXiv Detail & Related papers (2021-06-23T18:53:15Z)
- The iWildCam 2021 Competition Dataset [5.612688040565423]
Ecologists use camera traps to monitor animal populations all over the world.
To estimate the abundance of a species, ecologists need to know not just which species were seen, but how many individuals of each species were seen.
We have prepared a challenge where the training data and test data are from different cameras spread across the globe.
arXiv Detail & Related papers (2021-05-07T20:27:22Z)
- Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D [100.93808824091258]
We propose a new end-to-end architecture that directly extracts a bird's-eye-view representation of a scene given image data from an arbitrary number of cameras.
Our approach is to "lift" each image individually into a frustum of features for each camera, then "splat" all frustums into a bird's-eye-view grid.
We show that the representations inferred by our model enable interpretable end-to-end motion planning by "shooting" template trajectories into a bird's-eye-view cost map output by our network.
arXiv Detail & Related papers (2020-08-13T06:29:01Z)
- WhoAmI: An Automatic Tool for Visual Recognition of Tiger and Leopard Individuals in the Wild [3.1708876837195157]
We develop automatic algorithms that detect animals, identify their species, and recognize individual animals for two species.
We demonstrate the effectiveness of our approach on a data set of camera-trap images recorded in the jungles of Southern India.
arXiv Detail & Related papers (2020-06-17T16:17:46Z)
- Sequence Information Channel Concatenation for Improving Camera Trap Image Burst Classification [1.94742788320879]
Camera Traps are extensively used to observe wildlife in their natural habitat without disturbing the ecosystem.
Currently, a massive number of such camera traps have been deployed at various ecological conservation areas around the world, collecting data for decades.
Existing systems classify whether images contain animals by considering a single image at a time.
We show that concatenating masks containing sequence information with the images from the 3-image burst across channels improves the ROC AUC by 20% on a test set from unseen camera sites (a minimal sketch of this concatenation follows the list).
arXiv Detail & Related papers (2020-04-30T21:47:14Z)
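The channel-concatenation idea in the last entry can be sketched directly: stack the three burst frames along the channel axis together with a sequence-information mask and feed the result to an ordinary image classifier. The sketch below is a minimal PyTorch illustration that uses a crude mean-frame-difference map as a stand-in for the paper's mask; the mask construction and the toy network are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def build_input(burst: torch.Tensor) -> torch.Tensor:
    """Stack a 3-image burst and a crude sequence mask across channels.

    burst: (B, T, C, H, W) with T=3 RGB frames -> (B, T*C + 1, H, W).
    The mean-frame-difference mask is a stand-in for the paper's mask.
    """
    b, t, c, h, w = burst.shape
    frames = burst.reshape(b, t * c, h, w)                 # frames stacked across channels
    mean_frame = burst.mean(dim=1, keepdim=True)           # (B, 1, C, H, W)
    motion = (burst - mean_frame).abs().mean(dim=(1, 2))   # (B, H, W) crude motion map
    return torch.cat([frames, motion.unsqueeze(1)], dim=1)

# Any image classifier works; only the first convolution needs to accept
# 3 RGB frames + 1 mask = 10 input channels.
classifier = nn.Sequential(
    nn.Conv2d(10, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                                      # animal vs. empty
)

burst = torch.rand(4, 3, 3, 128, 128)                      # toy batch of 3-image bursts
logits = classifier(build_input(burst))
print(logits.shape)                                        # torch.Size([4, 2])
```

The only change a burst-aware input imposes on a standard classifier is the number of input channels of its first convolution.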
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.