Tracking the Flight: Exploring a Computational Framework for Analyzing Escape Responses in Plains Zebra (Equus quagga)
- URL: http://arxiv.org/abs/2505.16882v2
- Date: Fri, 23 May 2025 20:14:12 GMT
- Title: Tracking the Flight: Exploring a Computational Framework for Analyzing Escape Responses in Plains Zebra (Equus quagga)
- Authors: Isla Duporge, Sofia Minano, Nikoloz Sirmpilatze, Igor Tatarnikov, Scott Wolf, Adam L. Tyson, Daniel Rubenstein
- Abstract summary: This study evaluates three approaches to separating animal movement from drone motion. Using the best-performing method, we extract individual trajectories and identify key behavioral patterns. These insights highlight the method's effectiveness and its potential to scale to larger datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ethological research increasingly benefits from the growing affordability and accessibility of drones, which enable the capture of high-resolution footage of animal movement at fine spatial and temporal scales. However, analyzing such footage presents the technical challenge of separating animal movement from drone motion. Though non-trivial, this problem can be addressed with computer vision techniques such as image registration and Structure-from-Motion (SfM). For conservationists, open-source tools that are user-friendly, require minimal setup, and deliver timely results are especially valuable for efficient data interpretation. This study evaluates three approaches: a bioimaging-based registration technique, an SfM pipeline, and a hybrid interpolation method. We apply these to a recorded escape event involving 44 plains zebras, captured in a single drone video. Using the best-performing method, we extract individual trajectories and identify key behavioral patterns: increased alignment (polarization) during escape, a brief widening of spacing just before stopping, and tighter coordination near the group's center. These insights highlight the method's effectiveness and its potential to scale to larger datasets, contributing to broader investigations of collective animal behavior.
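The behavioral measures named in the abstract (polarization and inter-individual spacing) have standard formulations in the collective-behavior literature. The sketch below illustrates those general definitions on per-animal trajectories; it is not the authors' released code, and the function names are placeholders of my own choosing.

```python
import numpy as np

def polarization(headings):
    """Group polarization: magnitude of the mean unit heading vector.

    headings: array of shape (n, 2), one 2-D heading vector per animal.
    Returns a value in [0, 1]; 1 means all animals move in the same direction.
    """
    units = headings / np.linalg.norm(headings, axis=1, keepdims=True)
    return float(np.linalg.norm(units.mean(axis=0)))

def mean_nearest_neighbor_distance(positions):
    """Mean distance from each animal to its closest neighbor.

    positions: array of shape (n, 2) of x/y coordinates in a common
    (drone-motion-corrected) reference frame.
    """
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude self-distances
    return float(dists.min(axis=1).mean())

# Perfectly aligned headings give polarization 1.0
aligned = np.array([[1.0, 0.0], [2.0, 0.0], [0.5, 0.0]])
print(polarization(aligned))  # 1.0

# Two animals moving in opposite directions cancel out
opposed = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(polarization(opposed))  # 0.0
```

Tracking such metrics frame by frame is what would reveal patterns like the increased polarization during escape and the widening of spacing before stopping described above.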
Related papers
- RareSpot: Spotting Small and Rare Wildlife in Aerial Imagery with Multi-Scale Consistency and Context-Aware Augmentation [6.756718879272925]
RareSpot is a robust detection framework integrating multi-scale consistency learning and context-aware augmentation. Our method achieves state-of-the-art performance, improving detection accuracy by over 35% compared to baseline methods.
arXiv Detail & Related papers (2025-06-23T20:03:43Z)
- Improving Small Drone Detection Through Multi-Scale Processing and Data Augmentation [2.522137108227868]
This work introduces a drone detection methodology built upon the medium-sized YOLOv11 object detection model. To enhance its performance on small targets, we implemented a multi-scale approach in which the input image is processed both as a whole and in segmented parts, with subsequent prediction aggregation. The proposed approach attained a top-3 ranking in the 8th WOSDETC Drone-vs-Bird Detection Grand Challenge, held at the 2025 International Joint Conference on Neural Networks.
arXiv Detail & Related papers (2025-04-27T20:06:55Z)
- MMLA: Multi-Environment, Multi-Species, Low-Altitude Aerial Footage Dataset [3.7188931723069443]
Real-time wildlife detection in drone imagery is critical for numerous applications, including animal ecology, conservation, and biodiversity monitoring. We present a novel multi-species, multi-environment, low-altitude aerial footage (MMLA) dataset. Results demonstrate significant performance disparities across locations and species-specific detection variations.
arXiv Detail & Related papers (2025-04-10T13:40:27Z)
- Consistent multi-animal pose estimation in cattle using dynamic Kalman filter based tracking [0.0]
KeySORT uses an adaptive Kalman filter to construct tracklets in a bounding-box-free manner, significantly improving the temporal consistency of detected keypoints. Our test results indicate the algorithm is able to detect up to 80% of the ground-truth keypoints with high accuracy.
arXiv Detail & Related papers (2025-03-13T15:15:54Z)
- A Cross-Scene Benchmark for Open-World Drone Active Tracking [54.235808061746525]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations. We propose a unified cross-scene, cross-domain benchmark for open-world drone active tracking called DAT. We also propose a reinforcement learning-based drone tracking method called R-VAT.
arXiv Detail & Related papers (2024-12-01T09:37:46Z)
- A motion-based compression algorithm for resource-constrained video camera traps [4.349838917565205]
We introduce a new motion analysis-based video compression algorithm specifically designed for camera traps.
The algorithm identifies and stores only image regions depicting motion relevant to pollination monitoring.
Our experiments demonstrate the algorithm's capability to preserve critical information for insect behaviour analysis.
arXiv Detail & Related papers (2024-05-23T10:39:33Z)
- WildGEN: Long-horizon Trajectory Generation for Wildlife [3.8986045286948]
Trajectory generation is an important concern in pedestrian, vehicle, and wildlife movement studies.
We introduce WildGEN: a conceptual framework that addresses this challenge by employing a Variational Autoencoder (VAE)-based method.
A subsequent post-processing step of the generated trajectories is performed based on smoothing filters to reduce excessive wandering.
arXiv Detail & Related papers (2023-12-30T05:08:28Z)
- Multimodal Foundation Models for Zero-shot Animal Species Recognition in Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery; however, training them requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
arXiv Detail & Related papers (2023-11-02T08:32:00Z)
- AntPivot: Livestream Highlight Detection via Hierarchical Attention Mechanism [64.70568612993416]
We formulate a new task, Livestream Highlight Detection, discuss and analyze its difficulties, and propose a novel architecture, AntPivot, to solve this problem.
We construct a fully-annotated dataset AntHighlight to instantiate this task and evaluate the performance of our model.
arXiv Detail & Related papers (2022-06-10T05:58:11Z)
- AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs in the Wild [51.35013619649463]
We present an extensive dataset of free-running cheetahs in the wild, called AcinoSet.
The dataset contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files and 7,588 human-annotated frames.
The resulting 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data are also provided.
arXiv Detail & Related papers (2021-03-24T15:54:11Z)
- Batch Exploration with Examples for Scalable Robotic Reinforcement Learning [63.552788688544254]
Batch Exploration with Examples (BEE) explores relevant regions of the state space, guided by a modest number of human-provided images of important states.
BEE is able to tackle challenging vision-based manipulation tasks both in simulation and on a real Franka robot.
arXiv Detail & Related papers (2020-10-22T17:49:25Z)
- A Flow Base Bi-path Network for Cross-scene Video Crowd Understanding in Aerial View [93.23947591795897]
In this paper, we strive to tackle the challenges and automatically understand the crowd from the visual data collected from drones.
To alleviate the background noise generated in cross-scene testing, a double-stream crowd counting model is proposed.
To tackle the crowd density estimation problem in extremely dark environments, we introduce synthetic data generated with the game Grand Theft Auto V (GTA V).
arXiv Detail & Related papers (2020-09-29T01:48:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.