ChimpACT: A Longitudinal Dataset for Understanding Chimpanzee Behaviors
- URL: http://arxiv.org/abs/2310.16447v1
- Date: Wed, 25 Oct 2023 08:11:02 GMT
- Title: ChimpACT: A Longitudinal Dataset for Understanding Chimpanzee Behaviors
- Authors: Xiaoxuan Ma, Stephan P. Kaufhold, Jiajun Su, Wentao Zhu, Jack
Terwilliger, Andres Meza, Yixin Zhu, Federico Rossano, Yizhou Wang
- Abstract summary: ChimpACT features videos of a group of over 20 chimpanzees residing at the Leipzig Zoo, Germany.
ChimpACT is both comprehensive and challenging, consisting of 163 videos with a cumulative 160,500 frames.
- Score: 32.72634137202146
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Understanding the behavior of non-human primates is crucial for improving
animal welfare, modeling social behavior, and gaining insights into
distinctively human and phylogenetically shared behaviors. However, the lack of
datasets on non-human primate behavior hinders in-depth exploration of primate
social interactions, posing challenges to research on our closest living
relatives. To address these limitations, we present ChimpACT, a comprehensive
dataset for quantifying the longitudinal behavior and social relations of
chimpanzees within a social group. Spanning from 2015 to 2018, ChimpACT
features videos of a group of over 20 chimpanzees residing at the Leipzig Zoo,
Germany, with a particular focus on documenting the developmental trajectory of
one young male, Azibo. ChimpACT is both comprehensive and challenging,
consisting of 163 videos with a cumulative 160,500 frames, each richly
annotated with detection, identification, pose estimation, and fine-grained
spatiotemporal behavior labels. We benchmark representative methods of three
tracks on ChimpACT: (i) tracking and identification, (ii) pose estimation, and
(iii) spatiotemporal action detection of the chimpanzees. Our experiments
reveal that ChimpACT offers ample opportunities for both devising new methods
and adapting existing ones to solve fundamental computer vision tasks applied
to chimpanzee groups, such as detection, pose estimation, and behavior
analysis, ultimately deepening our comprehension of communication and sociality
in non-human primates.
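As a concrete illustration of how such annotations might be consumed, the sketch below groups per-frame labels by individual, which is the access pattern shared by all three benchmark tracks. The file name, JSON schema, and every field name are hypothetical; this summary does not describe ChimpACT's actual release format.

```python
import json
from collections import defaultdict

# Hypothetical loader for a ChimpACT-style annotation file. The dataset's real
# release format is not specified in this summary; the COCO-style JSON layout
# and all field names below are assumptions for illustration only.

def load_tracks(path):
    """Group per-frame annotations by individual (track id), in temporal order."""
    with open(path) as f:
        data = json.load(f)
    tracks = defaultdict(list)
    for ann in data["annotations"]:
        tracks[ann["track_id"]].append({
            "frame": ann["frame_id"],            # frame index within the video
            "bbox": ann["bbox"],                 # [x, y, w, h] detection box
            "keypoints": ann.get("keypoints"),   # pose: flattened (x, y, visibility)
            "behavior": ann.get("behavior"),     # fine-grained spatiotemporal label
        })
    for anns in tracks.values():
        anns.sort(key=lambda a: a["frame"])
    return tracks

if __name__ == "__main__":
    tracks = load_tracks("chimpact_annotations.json")  # assumed file name
    for track_id, anns in tracks.items():
        behaviors = sorted({a["behavior"] for a in anns if a["behavior"]})
        print(f"individual {track_id}: {len(anns)} frames, behaviors: {behaviors}")
```

Grouping by track id yields one temporally ordered sequence per chimpanzee, from which detection boxes, identities, poses, and behavior labels can each be read off.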
Related papers
- AlphaChimp: Tracking and Behavior Recognition of Chimpanzees [29.14013458574676]
We develop an end-to-end approach that simultaneously detects chimpanzee positions and estimates behavior categories from videos.
AlphaChimp achieves 10% higher tracking accuracy and a 20% improvement in behavior recognition compared to state-of-the-art methods.
Our approach bridges the gap between computer vision and primatology, enhancing technical capabilities and deepening our understanding of primate communication and sociality.
arXiv Detail & Related papers (2024-10-22T16:08:09Z)
- From Forest to Zoo: Great Ape Behavior Recognition with ChimpBehave [0.0]
We introduce ChimpBehave, a novel dataset featuring over 2 hours of video (approximately 193,000 video frames) of zoo-housed chimpanzees.
ChimpBehave is meticulously annotated with bounding boxes and behavior labels for action recognition.
We benchmark our dataset using a state-of-the-art CNN-based action recognition model.
arXiv Detail & Related papers (2024-05-30T13:11:08Z)
- Behaviour Modelling of Social Animals via Causal Structure Discovery and Graph Neural Networks [15.542220566525021]
We propose a method to build behavioural models using causal structure discovery and graph neural networks for time series.
We apply this method to a mob of meerkats in a zoo environment and study its ability to predict future actions.
arXiv Detail & Related papers (2023-12-21T23:34:08Z)
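The entry above pairs causal structure discovery with graph neural networks, but this summary gives neither component in detail. The sketch below covers only a first stage under a common Granger-style assumption: animal j gets an edge to animal i if j's lagged behaviour improves a linear prediction of i's behaviour beyond i's own past. The lag, threshold, and linear model are illustrative choices, not the paper's.

```python
import numpy as np

def granger_style_adjacency(series, lag=1, rel_gain=0.05):
    """series: (T, N) array, one behavioural time series per animal.
    Returns a directed adjacency matrix A with A[j, i] = 1 when animal j's
    past helps predict animal i beyond i's own past (Granger-style test)."""
    T, N = series.shape
    past, future = series[:-lag], series[lag:]
    ones = np.ones(T - lag)
    A = np.zeros((N, N), dtype=int)
    for i in range(N):
        # baseline: predict i's future from i's own past only
        X_self = np.column_stack([past[:, i], ones])
        beta, *_ = np.linalg.lstsq(X_self, future[:, i], rcond=None)
        var_self = np.var(future[:, i] - X_self @ beta)
        for j in range(N):
            if j == i:
                continue
            # augmented model: add animal j's past as a second regressor
            X_full = np.column_stack([past[:, i], past[:, j], ones])
            beta, *_ = np.linalg.lstsq(X_full, future[:, i], rcond=None)
            var_full = np.var(future[:, i] - X_full @ beta)
            if var_self - var_full > rel_gain * var_self:  # meaningful gain: edge j -> i
                A[j, i] = 1
    return A

# toy check: animal 1 copies animal 0 with a one-step delay
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))
x[1:, 1] = 0.8 * x[:-1, 0] + 0.2 * rng.normal(size=499)
print(granger_style_adjacency(x))  # expect a single edge 0 -> 1
```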
- Learning Human Action Recognition Representations Without Real Humans [66.61527869763819]
We present a benchmark that leverages real-world videos with humans removed and synthetic data containing virtual humans to pre-train a model.
We then evaluate the transferability of the representation learned on this data to a diverse set of downstream action recognition benchmarks.
Our approach outperforms previous baselines by up to 5%.
arXiv Detail & Related papers (2023-11-10T18:38:14Z)
- LISBET: a machine learning model for the automatic segmentation of social behavior motifs [0.0]
We introduce LISBET (LISBET Is a Social BEhavior Transformer), a machine learning model for detecting and segmenting social interactions.
Using self-supervised learning on body tracking data, our model eliminates the need for extensive human annotation.
In vivo electrophysiology revealed distinct neural signatures in the Ventral Tegmental Area corresponding to motifs identified by our model.
arXiv Detail & Related papers (2023-11-07T15:35:17Z)
- Meerkat Behaviour Recognition Dataset [3.53348643468069]
We introduce a large meerkat behaviour recognition video dataset with diverse annotated behaviours.
This dataset includes videos from two positions within the meerkat enclosure at the Wellington Zoo (Wellington, New Zealand).
arXiv Detail & Related papers (2023-06-20T06:50:50Z)
- Persistent Animal Identification Leveraging Non-Visual Markers [71.14999745312626]
We aim to locate and provide a unique identifier for each mouse in a cluttered home-cage environment through time.
This is a very challenging problem due to (i) the lack of distinguishing visual features for each mouse, and (ii) the close confines of the scene with constant occlusion.
Our approach achieves 77% accuracy on this animal identification problem, and is able to reject spurious detections when the animals are hidden.
arXiv Detail & Related papers (2021-12-13T17:11:32Z)
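At its core, persistent identification requires associating each frame's detections with existing tracks. The paper's contribution lies in fusing non-visual markers, which this summary does not detail; the sketch below shows only the generic bipartite-matching step common to tracking-by-detection, with an illustrative distance gate for rejecting spurious detections.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers, det_centers, max_dist=50.0):
    """Match current detections to existing tracks by center distance.
    track_centers: (M, 2), det_centers: (K, 2). Returns (track, detection)
    index pairs; detections left unmatched are treated as spurious or new."""
    cost = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # optimal bipartite matching
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# toy example: two tracks, three detections (the second is far away, i.e. spurious)
tracks = np.array([[10.0, 10.0], [100.0, 100.0]])
dets = np.array([[12.0, 11.0], [300.0, 5.0], [98.0, 103.0]])
print(associate(tracks, dets))  # [(0, 0), (1, 2)]
```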
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
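The swap described above can be stated compactly: within a training batch, exchange the neural recordings of two samples from different animals that share an action label, leaving the behavioural data untouched. The sketch below assumes dict-shaped samples with neural, action, and animal_id fields, and treats "similar actions" as exact label matches; both are simplifications of the paper's setup.

```python
import random
from collections import defaultdict

def swap_across_animals(batch, seed=0):
    """Exchange neural data between same-action samples from different animals.
    Each sample is assumed to be a dict with 'neural', 'action', 'animal_id'."""
    rng = random.Random(seed)
    by_action = defaultdict(list)
    for sample in batch:
        by_action[sample["action"]].append(sample)
    for samples in by_action.values():
        rng.shuffle(samples)
        # pair off consecutive shuffled samples; swap only across different animals
        for a, b in zip(samples[::2], samples[1::2]):
            if a["animal_id"] != b["animal_id"]:
                a["neural"], b["neural"] = b["neural"], a["neural"]
    return batch
```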
- AP-10K: A Benchmark for Animal Pose Estimation in the Wild [83.17759850662826]
We propose AP-10K, the first large-scale benchmark for general animal pose estimation.
AP-10K consists of 10,015 images collected and filtered from 23 animal families and 60 species.
Results provide sound empirical evidence for the superiority of learning from diverse animal species in terms of both accuracy and generalization ability.
arXiv Detail & Related papers (2021-08-28T10:23:34Z)
- Muti-view Mouse Social Behaviour Recognition with Deep Graphical Model [124.26611454540813]
Social behaviour analysis of mice is an invaluable tool for assessing the efficacy of treatments for neurodegenerative diseases.
Because multi-view video recordings can yield rich descriptions of mouse social behaviours, their use for rodent observation is receiving increasing attention.
We propose a novel multi-view latent-attention and dynamic discriminative model that jointly learns view-specific and view-shared sub-structures.
arXiv Detail & Related papers (2020-11-04T18:09:58Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.