Meerkat Behaviour Recognition Dataset
- URL: http://arxiv.org/abs/2306.11326v1
- Date: Tue, 20 Jun 2023 06:50:50 GMT
- Title: Meerkat Behaviour Recognition Dataset
- Authors: Mitchell Rogers, Gaël Gendron, David Arturo Soriano Valdez, Mihailo
Azhar, Yang Chen, Shahrokh Heidari, Caleb Perelini, Padriac O'Leary, Kobe
Knowles, Izak Tait, Simon Eyre, Michael Witbrock, and Patrice Delmas
- Abstract summary: We introduce a large meerkat behaviour recognition video dataset with diverse annotated behaviours.
This dataset includes videos from two positions within the meerkat enclosure at the Wellington Zoo (Wellington, New Zealand).
- Score: 3.53348643468069
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recording animal behaviour is an important step in evaluating the well-being
of animals and further understanding the natural world. Current methods for
documenting animal behaviour within a zoo setting, such as scan sampling,
require excessive human effort, are unfit for around-the-clock monitoring, and
may produce human-biased results. Several animal datasets already exist that
focus predominantly on wildlife interactions, with some extending to action or
behaviour recognition. However, there is limited data in a zoo setting or data
focusing on the group behaviours of social animals. We introduce a large
meerkat (Suricata suricatta) behaviour recognition video dataset with diverse
annotated behaviours, including group social interactions and tracking of
individuals within the camera view, under a skewed class distribution and varying
illumination conditions. This dataset includes videos from two positions within
the meerkat enclosure at the Wellington Zoo (Wellington, New Zealand), with
848,400 annotated frames across 20 videos and 15 unannotated videos.
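As an illustration of how frame-level behaviour annotations of this kind might be consumed, the sketch below reads a hypothetical per-frame annotation file and summarises its class distribution. The file name and column layout (video_id, frame, track_id, behaviour) are assumptions for illustration, not the dataset's published format.

```python
# A minimal sketch, assuming a hypothetical annotation layout: one CSV row per
# (video, frame, tracked individual) with a behaviour label. The column names
# and file path are illustrative and are not taken from the paper.
import csv
from collections import Counter

def load_annotations(csv_path):
    """Read per-frame behaviour annotations from a hypothetical CSV file."""
    rows = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            rows.append({
                "video_id": row["video_id"],
                "frame": int(row["frame"]),
                "track_id": int(row["track_id"]),  # identity of the tracked meerkat
                "behaviour": row["behaviour"],     # annotated behaviour class
            })
    return rows

annotations = load_annotations("annotations.csv")  # hypothetical path
# The abstract notes a skewed class distribution, so inspect it early.
print(Counter(r["behaviour"] for r in annotations).most_common())
```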
Related papers
- From Forest to Zoo: Great Ape Behavior Recognition with ChimpBehave [0.0]
We introduce ChimpBehave, a novel dataset featuring over 2 hours of video (approximately 193,000 video frames) of zoo-housed chimpanzees.
ChimpBehave is meticulously annotated with bounding boxes and behavior labels for action recognition.
We benchmark our dataset using a state-of-the-art CNN-based action recognition model (a minimal fine-tuning sketch, under assumed inputs, appears after this list).
arXiv Detail & Related papers (2024-05-30T13:11:08Z)
- AnimateZoo: Zero-shot Video Generation of Cross-Species Animation via Subject Alignment [64.02822911038848]
We present AnimateZoo, a zero-shot diffusion-based video generator to produce animal animations.
The key technique in AnimateZoo is subject alignment, which consists of two steps.
Our model is capable of generating videos characterized by accurate movements, consistent appearance, and high-fidelity frames.
arXiv Detail & Related papers (2024-04-07T12:57:41Z)
- Computer Vision for Primate Behavior Analysis in the Wild [61.08941894580172]
Video-based behavioral monitoring has great potential for transforming how we study animal cognition and behavior.
There is still a fairly large gap between the exciting prospects and what can actually be achieved in practice today.
arXiv Detail & Related papers (2024-01-29T18:59:56Z)
- OmniMotionGPT: Animal Motion Generation with Limited Data [70.35662376853163]
We introduce AnimalML3D, the first text-animal motion dataset with 1240 animation sequences spanning 36 different animal identities.
We are able to generate animal motions with high diversity and fidelity, quantitatively and qualitatively outperforming the results of training human motion generation baselines on animal data.
arXiv Detail & Related papers (2023-11-30T07:14:00Z)
- Multimodal Foundation Models for Zero-shot Animal Species Recognition in Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training them requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential for developing large-scale wildlife tracking solutions with markedly less human labor (a minimal zero-shot classification sketch appears after this list).
arXiv Detail & Related papers (2023-11-02T08:32:00Z)
- ChimpACT: A Longitudinal Dataset for Understanding Chimpanzee Behaviors [32.72634137202146]
ChimpACT features videos of a group of over 20 chimpanzees residing at the Leipzig Zoo, Germany.
ChimpACT is both comprehensive and challenging, consisting of 163 videos with a cumulative 160,500 frames.
arXiv Detail & Related papers (2023-10-25T08:11:02Z)
- MammalNet: A Large-scale Video Benchmark for Mammal Recognition and Behavior Understanding [38.3767550066302]
MammalNet is a large-scale animal behavior dataset with taxonomy-guided annotations of mammals and their common behaviors.
It contains over 18K videos totaling 539 hours, which is 10 times larger than the largest existing animal behavior dataset.
We establish three benchmarks on MammalNet: standard animal and behavior recognition, compositional low-shot animal and behavior recognition, and behavior detection.
arXiv Detail & Related papers (2023-06-01T11:45:33Z)
- Multi-view Tracking, Re-ID, and Social Network Analysis of a Flock of Visually Similar Birds in an Outdoor Aviary [32.19504891200443]
We introduce a system for studying the behavioral dynamics of a group of songbirds as they move throughout a 3D aviary.
We study the complexities that arise when tracking a group of closely interacting animals in three dimensions and introduce a novel dataset for evaluating multi-view trackers.
arXiv Detail & Related papers (2022-12-01T04:23:18Z)
- APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking [77.87449881852062]
APT-36K is the first large-scale benchmark for animal pose estimation and tracking.
It consists of 2,400 video clips collected and filtered from 30 animal species with 15 frames for each video, resulting in 36,000 frames in total.
We benchmark several representative models on the following three tracks: (1) supervised animal pose estimation on a single frame under intra- and inter-domain transfer learning settings, (2) inter-species domain generalization test for unseen animals, and (3) animal pose estimation with animal tracking.
arXiv Detail & Related papers (2022-06-12T07:18:36Z)
- Animal Kingdom: A Large and Diverse Dataset for Animal Behavior Understanding [4.606145900630665]
We create a large and diverse dataset, Animal Kingdom, that provides multiple annotated tasks.
Our dataset contains 50 hours of annotated videos to localize relevant animal behavior segments.
We propose a Collaborative Action Recognition (CARe) model that learns general and specific features for action recognition with unseen new animals.
arXiv Detail & Related papers (2022-04-18T02:05:15Z)
- Florida Wildlife Camera Trap Dataset [48.99466876948454]
We introduce a challenging wildlife camera trap classification dataset collected from two different locations in Southwestern Florida.
The dataset consists of 104,495 images featuring visually similar species, varying illumination conditions, a skewed class distribution, and samples of endangered species.
arXiv Detail & Related papers (2021-06-23T18:53:15Z)
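Below is the zero-shot sketch referenced in the Multimodal Foundation Models entry above. It scores a single camera-trap image against free-text species prompts with an off-the-shelf CLIP checkpoint from the Hugging Face transformers library; the prompts and image path are placeholders, and this is a generic illustration of zero-shot recognition rather than that paper's specific method.

```python
# Generic zero-shot species classification with a public CLIP checkpoint.
# The prompt list and image path are placeholder assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a camera trap photo of a deer",
    "a camera trap photo of a wild boar",
    "a camera trap photo of a raccoon",
    "an empty camera trap photo with no animal",
]
image = Image.open("camera_trap_frame.jpg")  # placeholder image path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1).squeeze(0)
for prompt, p in zip(prompts, probs.tolist()):
    print(f"{p:.3f}  {prompt}")
```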
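Several of the works above, like the meerkat dataset itself, treat behaviour recognition as supervised video classification. The sketch below, referenced in the ChimpBehave entry, fine-tunes a Kinetics-pretrained 3D CNN from torchvision on behaviour clips; the number of classes, clip shape, and dummy batch are assumptions for illustration, not details from any of the papers.

```python
# A minimal fine-tuning sketch for behaviour classification with a 3D CNN.
# num_classes, the clip tensor shape, and the dummy batch are illustrative
# assumptions rather than details taken from any of the papers above.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

num_classes = 10  # assumed number of behaviour classes
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classification head

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Dummy batch: 2 clips of 16 RGB frames at 112x112 (batch, channels, time, H, W).
clips = torch.randn(2, 3, 16, 112, 112)
labels = torch.randint(0, num_classes, (2,))

model.train()
optimizer.zero_grad()
loss = criterion(model(clips), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```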