Detecting Cattle and Elk in the Wild from Space
- URL: http://arxiv.org/abs/2106.15448v1
- Date: Tue, 29 Jun 2021 14:35:23 GMT
- Title: Detecting Cattle and Elk in the Wild from Space
- Authors: Caleb Robinson, Anthony Ortiz, Lacey Hughey, Jared A. Stabach, Juan M.
Lavista Ferres
- Abstract summary: Localizing and counting large ungulates in satellite imagery is an important task for supporting ecological studies.
We propose a baseline method, CowNet, that simultaneously estimates the number of animals in an image (counts) and predicts their location at the pixel level (localizes).
We specifically test the temporal generalization of the resulting models over a large landscape in Point Reyes Seashore, CA.
- Score: 6.810164473908359
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Localizing and counting large ungulates -- hoofed mammals like cows and elk
-- in very high-resolution satellite imagery is an important task for
supporting ecological studies. Prior work has shown that this is feasible with
deep learning based methods and sub-meter multi-spectral satellite imagery. We
extend this line of work by proposing a baseline method, CowNet, that
simultaneously estimates the number of animals in an image (counts), as well as
predicts their location at a pixel level (localizes). We also propose a
methodology for evaluating such models on counting and localization tasks
across large scenes that takes the uncertainty of noisy labels and the
information needed by stakeholders in ecological monitoring tasks into account.
Finally, we benchmark our baseline method against state-of-the-art vision methods
for counting objects in scenes. We specifically test the temporal
generalization of the resulting models over a large landscape in Point Reyes
Seashore, CA. We find that the LC-FCN model performs the best and achieves an
average precision between 0.56 and 0.61 and an average recall between 0.78 and
0.92 over three held out test scenes.
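Evaluating a counting-and-localization model like this typically means matching predicted points to ground-truth animal locations within some pixel radius and computing precision and recall from the matches. The following is a minimal sketch of such point-based scoring; the matching radius and the nearest-neighbor strategy are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def score_points(pred, gt, radius=5.0):
    """Match predicted points to ground-truth points within `radius` pixels
    and return (precision, recall). `pred` and `gt` are (N, 2) arrays of
    (row, col) coordinates. Each prediction is scored against its nearest
    ground-truth point only; the radius is an illustrative choice."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    if len(pred) == 0 or len(gt) == 0:
        return 0.0, 0.0
    # Pairwise Euclidean distances between predictions and ground truth.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)
    matched_gt, tp = set(), 0
    # Visit predictions from most to least confident match (smallest
    # nearest-neighbor distance first) and claim unmatched GT points.
    for i in np.argsort(d.min(axis=1)):
        j = int(np.argmin(d[i]))
        if d[i, j] <= radius and j not in matched_gt:
            matched_gt.add(j)
            tp += 1
    return tp / len(pred), tp / len(gt)
```

Averaging these per-scene scores over held-out scenes gives the kind of average precision/recall figures reported above.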
Related papers
- PoseBench: Benchmarking the Robustness of Pose Estimation Models under Corruptions [57.871692507044344]
Pose estimation aims to accurately identify anatomical keypoints in humans and animals using monocular images.
Current models are typically trained and tested on clean data, potentially overlooking the corruption during real-world deployment.
We introduce PoseBench, a benchmark designed to evaluate the robustness of pose estimation models against real-world corruption.
arXiv Detail & Related papers (2024-06-20T14:40:17Z)
- Universal Bovine Identification via Depth Data and Deep Metric Learning [1.6605913858547239]
This paper proposes and evaluates, for the first time, a depth-only deep learning system for accurately identifying individual cattle.
An increase in herd size skews the cow-to-human ratio at the farm and makes the manual monitoring of individuals more challenging.
Underpinned by our previous work, this paper introduces a deep-metric learning method for cattle identification using depth data from an off-the-shelf 3D camera.
arXiv Detail & Related papers (2024-03-29T22:03:53Z)
- Spatial Implicit Neural Representations for Global-Scale Species Mapping [72.92028508757281]
Given a set of locations where a species has been observed, the goal is to build a model to predict whether the species is present or absent at any location.
Traditional methods struggle to take advantage of emerging large-scale crowdsourced datasets.
We use Spatial Implicit Neural Representations (SINRs) to jointly estimate the geographical range of 47k species simultaneously.
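An implicit neural representation for species mapping boils down to a small network that maps a geographic coordinate to per-species presence probabilities. The toy forward pass below illustrates the idea; the sinusoidal encoding, layer sizes, and weights are illustrative assumptions, not the SINR authors' architecture.

```python
import numpy as np

def coord_mlp_forward(coords, W1, b1, W2, b2):
    """Toy coordinate-MLP forward pass: encode (lon, lat) degrees with
    sinusoidal features, then predict per-species presence probabilities.
    All layer shapes and the encoding are illustrative assumptions."""
    x = np.radians(np.asarray(coords, float))          # degrees -> radians
    feats = np.concatenate([np.sin(x), np.cos(x)], axis=-1)
    h = np.maximum(feats @ W1 + b1, 0.0)               # ReLU hidden layer
    logits = h @ W2 + b2                               # one logit per species
    return 1.0 / (1.0 + np.exp(-logits))               # presence probabilities
```

Training such a model on crowdsourced presence records is what lets a single network cover tens of thousands of species at once.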
arXiv Detail & Related papers (2023-06-05T03:36:01Z)
- TempNet: Temporal Attention Towards the Detection of Animal Behaviour in Videos [63.85815474157357]
We propose an efficient computer vision- and deep learning-based method for the detection of biological behaviours in videos.
TempNet uses an encoder bridge and residual blocks to maintain model performance with a two-staged, spatial, then temporal, encoder.
We demonstrate its application to the detection of sablefish (Anoplopoma fimbria) startle events.
arXiv Detail & Related papers (2022-11-17T23:55:12Z)
- Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z)
- Point Label Aware Superpixels for Multi-species Segmentation of Underwater Imagery [4.195806160139487]
Monitoring coral reefs using underwater vehicles increases the range of marine surveys and availability of historical ecological data.
We propose a point label aware method for propagating labels within superpixel regions to obtain augmented ground truth for training a semantic segmentation model.
Our method outperforms prior methods on the UCSD Mosaics dataset by 3.62% for pixel accuracy and 8.35% for mean IoU for the label propagation task.
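The pixel accuracy and mean IoU figures quoted here are standard semantic-segmentation metrics. As a reference point, mean IoU can be sketched as below; this is the generic definition, not the paper's own evaluation code.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes.
    `pred` and `gt` are integer label maps of equal shape. Classes absent
    from both maps are skipped so they do not distort the average."""
    ious = []
    for c in range(num_classes):
        p, g = (pred == c), (gt == c)
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```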
arXiv Detail & Related papers (2022-02-27T23:46:43Z)
- Distance Estimation and Animal Tracking for Wildlife Camera Trapping [0.0]
We propose a fully automatic approach to estimate camera-to-animal distances.
We leverage state-of-the-art relative MDE and a novel alignment procedure to estimate metric distances.
We achieve a mean absolute distance estimation error of only 0.9864 meters at a precision of 90.3% and recall of 63.8%.
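Converting relative monocular depth estimates to metric distances usually involves fitting a scale and shift against a handful of known reference distances. A common least-squares recipe is sketched below; the paper's own alignment procedure may differ.

```python
import numpy as np

def align_depth(relative, metric_samples, idx):
    """Fit scale s and shift t so that s * relative[idx] + t approximates
    `metric_samples` in the least-squares sense, then apply the fit to the
    full relative-depth array. A generic alignment sketch, not the paper's
    exact method."""
    relative = np.asarray(relative, float)
    # Design matrix [relative_value, 1] for the linear fit s*x + t.
    A = np.stack([relative[idx], np.ones(len(idx))], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, np.asarray(metric_samples, float),
                                 rcond=None)
    return s * relative + t
```

With the fit in hand, every pixel's relative depth becomes a metric camera-to-animal distance.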
arXiv Detail & Related papers (2022-02-09T18:12:18Z)
- Deep learning with self-supervision and uncertainty regularization to count fish in underwater images [28.261323753321328]
Effective conservation actions require effective population monitoring.
Monitoring populations through image sampling has made data collection cheaper, wide-reaching and less intrusive.
Counting animals from such data is challenging, particularly when densely packed in noisy images.
Deep learning is the state-of-the-art method for many computer vision tasks, but it has yet to be properly explored for counting animals.
arXiv Detail & Related papers (2021-04-30T13:02:19Z)
- Pretrained equivariant features improve unsupervised landmark discovery [69.02115180674885]
We formulate a two-step unsupervised approach that overcomes this challenge by first learning powerful pixel-based features.
Our method produces state-of-the-art results in several challenging landmark detection datasets.
arXiv Detail & Related papers (2021-04-07T05:42:11Z)
- Latent World Models For Intrinsically Motivated Exploration [140.21871701134626]
We present a self-supervised representation learning method for image-based observations.
We consider episodic and life-long uncertainties to guide the exploration of partially observable environments.
arXiv Detail & Related papers (2020-10-05T19:47:04Z)
- A Realistic Fish-Habitat Dataset to Evaluate Algorithms for Underwater Visual Analysis [2.6476746128312194]
We present DeepFish as a benchmark suite with a large-scale dataset to train and test methods for several computer vision tasks.
The dataset consists of approximately 40 thousand images collected underwater from 20 habitats in the marine environments of tropical Australia.
Our experiments provide an in-depth analysis of the dataset characteristics, and the performance evaluation of several state-of-the-art approaches.
arXiv Detail & Related papers (2020-08-28T12:20:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.