SOCRATES: A Stereo Camera Trap for Monitoring of Biodiversity
- URL: http://arxiv.org/abs/2209.09070v1
- Date: Mon, 19 Sep 2022 15:03:35 GMT
- Title: SOCRATES: A Stereo Camera Trap for Monitoring of Biodiversity
- Authors: Timm Haucke, Hjalmar Kühl, Volker Steinhage
- Abstract summary: This study presents a novel approach to 3D camera trapping featuring highly optimized hardware and software.
A comprehensive evaluation of SOCRATES shows not only a $3.23\%$ improvement in animal detection but also its superior applicability for estimating animal abundance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The development and application of modern technology is an essential basis
for the efficient monitoring of species in natural habitats and landscapes to
trace the development of ecosystems, species communities, and populations, and
to analyze the causes of change. For estimating animal abundance using methods
such as camera trap distance sampling, spatial information of natural habitats
in terms of 3D (three-dimensional) measurements is crucial. Additionally, 3D
information improves the accuracy of animal detection using camera trapping.
This study presents a novel approach to 3D camera trapping featuring highly
optimized hardware and software. This approach employs stereo vision to infer
3D information of natural habitats and is designated as StereO CameRA Trap for
monitoring of biodivErSity (SOCRATES). A comprehensive evaluation of SOCRATES
shows not only a $3.23\%$ improvement in animal detection (bounding box
$\text{mAP}_{75}$) but also its superior applicability for estimating animal
abundance using camera trap distance sampling. The software and documentation
of SOCRATES are provided at https://github.com/timmh/socrates
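The linked repository contains the actual SOCRATES implementation; purely as an illustration of the stereo principle the system builds on, the sketch below uses OpenCV's semi-global block matching to turn a rectified stereo pair into per-pixel depth and then reads off a camera-to-animal distance of the kind consumed by camera trap distance sampling. The image paths, focal length, baseline, and bounding box are placeholder assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Placeholder calibration values; a real rig provides these via stereo calibration.
FOCAL_LENGTH_PX = 1000.0  # focal length in pixels (assumed)
BASELINE_M = 0.12         # distance between the two cameras in meters (assumed)

# Load a rectified stereo pair (rows must be epipolar-aligned).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching estimates per-pixel disparity.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be divisible by 16
    blockSize=5,
)
# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth in meters from disparity: Z = f * B / d.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]

# For camera trap distance sampling, the observation of interest is the
# camera-to-animal distance, e.g. the median depth inside a detection box.
x0, y0, x1, y1 = 400, 300, 600, 450  # hypothetical bounding box
box_depth = depth[y0:y1, x0:x1][valid[y0:y1, x0:x1]]
print(f"Estimated distance to animal: {np.median(box_depth):.2f} m")
```

The Z = fB/d relation only holds for rectified image pairs, which is why careful calibration of the stereo rig is a prerequisite for distance observations of usable quality.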
Related papers
- Benchmarking Monocular 3D Dog Pose Estimation Using In-The-Wild Motion Capture Data [17.042955091063444]
We introduce a new benchmark analysis focusing on 3D canine pose estimation from monocular in-the-wild images.
A multi-modal dataset 3DDogs-Lab was captured indoors, featuring various dog breeds trotting on a walkway.
We create 3DDogs-Wild, a naturalised version of the dataset where the optical markers are in-painted and the subjects are placed in diverse environments.
We show that using the 3DDogs-Wild to train the models leads to improved performance when evaluating on in-the-wild data.
arXiv Detail & Related papers (2024-06-20T15:33:39Z) - SDGE: Stereo Guided Depth Estimation for 360$^\circ$ Camera Sets [65.64958606221069]
Multi-camera systems are often used in autonomous driving to achieve a 360$^\circ$ perception.
These 360$^\circ$ camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image.
We propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap.
arXiv Detail & Related papers (2024-02-19T02:41:37Z) - Multimodal Foundation Models for Zero-shot Animal Species Recognition in
Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training these techniques requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
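The summary does not commit to a specific foundation model; as a hedged sketch of the zero-shot recipe such work relies on, the snippet below scores a camera trap image against species prompts with CLIP via Hugging Face transformers. The species list and image path are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Candidate labels; in practice this would be the site's local species list (assumed).
species = ["red deer", "wild boar", "red fox", "European badger", "empty scene"]
prompts = [f"a camera trap photo of a {s}" for s in species]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("trap_image.jpg")  # placeholder path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-to-text similarity scores

probs = logits.softmax(dim=-1).squeeze(0)
for name, p in sorted(zip(species, probs.tolist()), key=lambda x: -x[1]):
    print(f"{name}: {p:.3f}")
```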
arXiv Detail & Related papers (2023-11-02T08:32:00Z) - Collaboration Helps Camera Overtake LiDAR in 3D Detection [49.58433319402405]
Camera-only 3D detection provides a simple solution for localizing objects in 3D space compared to LiDAR-based detection systems.
Our proposed collaborative camera-only 3D detection (CoCa3D) enables agents to share complementary information with each other through communication.
Results show that CoCa3D improves the previous SOTA performance by 44.21% on DAIR-V2X, 30.60% on OPV2V+, and 12.59% on CoPerception-UAVs+ for AP@70.
arXiv Detail & Related papers (2023-03-23T03:50:41Z) - APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking [77.87449881852062]
APT-36K is the first large-scale benchmark for animal pose estimation and tracking.
It consists of 2,400 video clips covering 30 animal species, with 15 frames per clip, for 36,000 frames in total.
We benchmark several representative models on the following three tracks: (1) supervised animal pose estimation on a single frame under intra- and inter-domain transfer learning settings, (2) inter-species domain generalization test for unseen animals, and (3) animal pose estimation with animal tracking.
arXiv Detail & Related papers (2022-06-12T07:18:36Z) - MetaPose: Fast 3D Pose from Multiple Views without 3D Supervision [72.5863451123577]
We show how to train a neural model that can perform accurate 3D pose and camera estimation.
Our method outperforms both classical bundle adjustment and weakly-supervised monocular 3D baselines.
arXiv Detail & Related papers (2021-08-10T18:39:56Z) - AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs
in the Wild [51.35013619649463]
We present an extensive dataset of free-running cheetahs in the wild, called AcinoSet.
The dataset contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files and 7,588 human-annotated frames.
The resulting 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data are also provided.
arXiv Detail & Related papers (2021-03-24T15:54:11Z) - A first step towards automated species recognition from camera trap
images of mammals using AI in a European temperate forest [0.0]
This paper presents the implementation of the YOLOv5 architecture for automated labeling of camera trap images of mammals in the Białowieża Forest (BF), Poland.
The camera trapping data were organized and harmonized using TRAPPER software, an open source application for managing large-scale wildlife monitoring projects.
The proposed image recognition pipeline achieved an average F1-score of 85% in identifying the 12 most commonly occurring medium-sized and large mammal species in BF.
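The paper's full pipeline (TRAPPER integration and training on the local species) is not reproduced here; a minimal sketch of its core inference step, running a YOLOv5 detector over a camera trap image through the official torch.hub entry point, might look as follows. The image path is a placeholder, and a real deployment would load weights fine-tuned on the 12 BF species rather than the generic pretrained checkpoint.

```python
import torch

# Generic pretrained YOLOv5; a production pipeline would load fine-tuned weights.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25  # confidence threshold for reported detections

results = model("trap_image.jpg")      # placeholder image path
detections = results.pandas().xyxy[0]  # one row per detection
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```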
arXiv Detail & Related papers (2021-03-19T22:48:03Z) - Exploiting Depth Information for Wildlife Monitoring [0.0]
We propose an automated camera trap-based approach to detect and identify animals using depth estimation.
To detect and identify individual animals, we propose a novel method, D-Mask R-CNN, for instance segmentation.
An experimental evaluation shows that the additional depth estimation improves the average precision of animal detection.
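D-Mask R-CNN itself is the paper's contribution and is not reconstructed here; as a rough approximation of the underlying idea, feeding a depth channel into an instance segmentation network alongside RGB, one can widen the input stem of a stock torchvision Mask R-CNN to four channels. The depth normalization statistics below are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Randomly initialized two-class model (background + animal).
model = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=2)

# Widen the backbone stem from 3 (RGB) to 4 (RGB-D) input channels.
old_conv = model.backbone.body.conv1
model.backbone.body.conv1 = nn.Conv2d(
    4, old_conv.out_channels,
    kernel_size=old_conv.kernel_size, stride=old_conv.stride,
    padding=old_conv.padding, bias=False,
)

# The internal transform normalizes inputs; extend its statistics to 4 channels.
# The depth mean/std are assumed, not taken from the paper.
model.transform.image_mean = [0.485, 0.456, 0.406, 0.5]
model.transform.image_std = [0.229, 0.224, 0.225, 0.25]

model.eval()
rgbd = torch.rand(4, 480, 640)  # RGB image stacked with a depth channel
with torch.no_grad():
    out = model([rgbd])[0]  # boxes, labels, scores, and per-instance masks
print(out["masks"].shape)
```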
arXiv Detail & Related papers (2021-02-10T18:10:34Z) - Automatic Detection and Recognition of Individuals in Patterned Species [4.163860911052052]
We develop a framework for automatic detection and recognition of individuals in different patterned species.
We use the recently proposed Faster-RCNN object detection framework to efficiently detect animals in images.
We evaluate our recognition system on zebra and jaguar images to show generalization to other patterned species.
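The pattern-based recognition stage is specific to the paper, but the detection stage it builds on is standard; below is a hedged sketch that uses torchvision's pretrained Faster R-CNN to produce the animal crops such a recognition system would consume. The image path and confidence threshold are illustrative assumptions.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# Pretrained generic detector; a real pipeline would fine-tune on zebra/jaguar data.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

image = read_image("zebra.jpg")  # placeholder path; uint8 CHW tensor
with torch.no_grad():
    pred = model([preprocess(image)])[0]

# Crop confident detections; the crops would feed the pattern-matching
# recognition stage described in the paper.
for box, score in zip(pred["boxes"], pred["scores"]):
    if score > 0.8:  # assumed threshold
        x0, y0, x1, y1 = box.int().tolist()
        crop = image[:, y0:y1, x0:x1]
        print("detection crop:", tuple(crop.shape), f"score={score:.2f}")
```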
arXiv Detail & Related papers (2020-05-06T15:29:21Z)