Wild Motion Unleashed: Markerless 3D Kinematics and Force Estimation in
Cheetahs
- URL: http://arxiv.org/abs/2312.05879v1
- Date: Sun, 10 Dec 2023 13:14:58 GMT
- Title: Wild Motion Unleashed: Markerless 3D Kinematics and Force Estimation in
Cheetahs
- Authors: Zico da Silva, Stacy Shield, Penny E. Hudson, Alan M. Wilson, Fred
Nicolls and Amir Patel
- Abstract summary: We use data obtained from cheetahs in the wild to present a trajectory optimisation approach for estimating the 3D kinematics and joint torques of subjects remotely.
We are able to reconstruct the 3D kinematics with an average reprojection error of 17.69 pixels (62.94% PCK using the nose-to-eye(s) length segment as a threshold).
While the joint torques cannot be directly validated against ground truth data, the estimated torques agree with previous studies of quadrupeds in controlled settings.
- Score: 3.396214578939738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The complex dynamics of animal manoeuvrability in the wild is extremely
challenging to study. The cheetah ($\textit{Acinonyx jubatus}$) is a perfect
example: despite great interest in its unmatched speed and manoeuvrability,
obtaining complete whole-body motion data from these animals remains an
unsolved problem. This is especially difficult in wild cheetahs, where it is
essential that the methods used are remote and do not constrain the animal's
motion. In this work, we use data obtained from cheetahs in the wild to present
a trajectory optimisation approach for estimating the 3D kinematics and joint
torques of subjects remotely. We call this approach kinetic full trajectory
estimation (K-FTE). We validate the method on a dataset comprising synchronised
video and force plate data. We are able to reconstruct the 3D kinematics with
an average reprojection error of 17.69 pixels (62.94 $\%$ PCK using the
nose-to-eye(s) length segment as a threshold), while the estimates produce an
average root-mean-square error of 171.3 N ($\approx$ 17.16 $\%$ of peak force
during stride) for the estimated ground reaction force when compared against
the force plate data. While the joint torques cannot be directly validated
against ground truth data, as no such data is available for cheetahs, the
estimated torques agree with previous studies of quadrupeds in controlled
settings. These results will enable deeper insight into the study of animal
locomotion in a more natural environment for both biologists and roboticists.
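The paper does not include code, but the general shape of a "kinetic full trajectory estimation" problem can be illustrated with a heavily simplified sketch: joint trajectories and torques are decision variables, one residual term penalises the mismatch between model keypoints and observed 2D keypoints, and a dynamics residual couples torques to the motion. Everything below (the toy model with unit inertia per joint, the identity "camera", the synthetic observations, and scipy as the solver) is an assumption for illustration only, not the authors' K-FTE implementation, which formulates a full 3D rigid-body model of the cheetah.

```python
# Illustrative sketch only: a toy trajectory optimisation with torque variables,
# NOT the authors' K-FTE method. Assumes unit inertia per joint and an identity
# "camera" (model keypoints compared directly to observations) to stay short.
import numpy as np
from scipy.optimize import least_squares

T, J, dt = 50, 3, 1.0 / 120.0                      # frames, joints, timestep (assumed)
rng = np.random.default_rng(0)
observed = np.cumsum(rng.normal(0.0, 0.01, size=(T, J)), axis=0)  # fake keypoint tracks

def unpack(x):
    q = x[:T * J].reshape(T, J)                    # joint positions over time
    tau = x[T * J:].reshape(T - 2, J)              # torques at interior frames
    return q, tau

def residuals(x, w_dyn=1.0, w_reg=0.1):
    q, tau = unpack(x)
    r_obs = (q - observed).ravel()                 # "reprojection" term
    q_ddot = (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dt**2
    r_dyn = w_dyn * (q_ddot - tau).ravel()         # unit-inertia dynamics residual
    r_reg = w_reg * tau.ravel()                    # mild torque regularisation
    return np.concatenate([r_obs, r_dyn, r_reg])

x0 = np.concatenate([observed.ravel(), np.zeros((T - 2) * J)])
sol = least_squares(residuals, x0)
q_est, tau_est = unpack(sol.x)
print("joint trajectory:", q_est.shape, "estimated torques:", tau_est.shape)
```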
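The evaluation metrics quoted above (mean reprojection error, PCK with the nose-to-eye length as the per-frame threshold, and ground reaction force RMSE as a percentage of peak force) are standard quantities; the sketch below shows one plausible way to compute them, assuming arrays of predicted and labelled 2D keypoints and force traces. All variable names are hypothetical and not taken from the authors' code.

```python
import numpy as np

def reprojection_error_and_pck(pred_2d, gt_2d, nose_xy, eye_xy):
    """pred_2d, gt_2d: (N_frames, N_keypoints, 2) pixel coordinates.
    nose_xy, eye_xy: (N_frames, 2) points used to form the per-frame PCK threshold."""
    err = np.linalg.norm(pred_2d - gt_2d, axis=-1)        # (N, K) pixel errors
    thresh = np.linalg.norm(nose_xy - eye_xy, axis=-1)    # (N,) nose-to-eye length
    pck = 100.0 * np.mean(err <= thresh[:, None])         # % keypoints within threshold
    return err.mean(), pck

def grf_rmse_percent_of_peak(pred_force, plate_force):
    """pred_force, plate_force: (N_samples,) ground reaction force in newtons."""
    rmse = np.sqrt(np.mean((pred_force - plate_force) ** 2))
    return rmse, 100.0 * rmse / np.max(plate_force)
```

Using a per-frame threshold rather than a fixed pixel count is presumably the point of the nose-to-eye scale: the apparent segment length changes with the subject's distance from the camera, so the tolerance scales accordingly.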
Related papers
- SINETRA: a Versatile Framework for Evaluating Single Neuron Tracking in Behaving Animals [7.039426581802364]
SINETRA is a versatile simulator that generates synthetic tracking data for particles on a deformable background.
This simulator produces annotated 2D and 3D videos that reflect the intricate movements seen in behaving animals like Hydra vulgaris.
arXiv Detail & Related papers (2024-11-14T14:12:16Z)
- BaboonLand Dataset: Tracking Primates in the Wild and Automating Behaviour Recognition from Drone Videos [0.8074955699721389]
This study presents a novel dataset from drone videos for baboon detection, tracking, and behavior recognition.
The baboon detection dataset was created by manually annotating all baboons in drone videos with bounding boxes.
The behavior recognition dataset was generated by converting tracks into mini-scenes, a video subregion centered on each animal.
arXiv Detail & Related papers (2024-05-27T23:09:37Z)
- WildGEN: Long-horizon Trajectory Generation for Wildlife [3.8986045286948]
Trajectory generation is an important concern in pedestrian, vehicle, and wildlife movement studies.
We introduce WildGEN, a conceptual framework that addresses this challenge with a Variational Autoencoder (VAE)-based method.
The generated trajectories are then post-processed with smoothing filters to reduce excessive wandering (see the smoothing sketch after this list).
arXiv Detail & Related papers (2023-12-30T05:08:28Z)
- OmniMotionGPT: Animal Motion Generation with Limited Data [70.35662376853163]
We introduce AnimalML3D, the first text-animal motion dataset with 1240 animation sequences spanning 36 different animal identities.
We are able to generate animal motions with high diversity and fidelity, quantitatively and qualitatively outperforming the results of training human motion generation baselines on animal data.
arXiv Detail & Related papers (2023-11-30T07:14:00Z)
- Occluded Human Body Capture with Self-Supervised Spatial-Temporal Motion Prior [7.157324258813676]
We build the first 3D occluded motion dataset (OcMotion), which can be used for both training and testing.
A spatial-temporal layer is then designed to learn joint-level correlations.
Experimental results show that our method can generate accurate and coherent human motions from occluded videos with good generalization ability and runtime efficiency.
arXiv Detail & Related papers (2022-07-12T08:15:11Z)
- APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking [77.87449881852062]
APT-36K is the first large-scale benchmark for animal pose estimation and tracking.
It consists of 2,400 video clips collected and filtered from 30 animal species with 15 frames for each video, resulting in 36,000 frames in total.
We benchmark several representative models on the following three tracks: (1) supervised animal pose estimation on a single frame under intra- and inter-domain transfer learning settings, (2) inter-species domain generalization test for unseen animals, and (3) animal pose estimation with animal tracking.
arXiv Detail & Related papers (2022-06-12T07:18:36Z)
- One Million Scenes for Autonomous Driving: ONCE Dataset [91.94189514073354]
We introduce the ONCE dataset for 3D object detection in the autonomous driving scenario.
The data is selected from 144 driving hours, which is 20x longer than the largest 3D autonomous driving dataset available.
We reproduce and evaluate a variety of self-supervised and semi-supervised methods on the ONCE dataset.
arXiv Detail & Related papers (2021-06-21T12:28:08Z)
- SGCN: Sparse Graph Convolution Network for Pedestrian Trajectory Prediction [64.16212996247943]
We present a Sparse Graph Convolution Network (SGCN) for pedestrian trajectory prediction.
Specifically, the SGCN explicitly models sparse directed interactions with a sparse directed spatial graph to capture adaptive interactions among pedestrians.
Visualizations indicate that our method can capture adaptive interactions between pedestrians and their effective motion tendencies.
arXiv Detail & Related papers (2021-04-04T03:17:42Z)
- AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs in the Wild [51.35013619649463]
We present an extensive dataset of free-running cheetahs in the wild, called AcinoSet.
The dataset contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files and 7,588 human-annotated frames.
The resulting 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data are also provided.
arXiv Detail & Related papers (2021-03-24T15:54:11Z) - Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion
Forecasting with a Single Convolutional Net [93.51773847125014]
We propose a novel deep neural network that is able to jointly reason about 3D detection, tracking and motion forecasting given data captured by a 3D sensor.
Our approach performs 3D convolutions across space and time over a bird's eye view representation of the 3D world.
arXiv Detail & Related papers (2020-12-22T22:43:35Z)
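As a small illustration of the smoothing-based post-processing mentioned in the WildGEN summary above, one common choice is a Savitzky-Golay filter applied independently to each coordinate of a generated trajectory. The window length, polynomial order, and synthetic data below are assumptions chosen for the sketch, not the authors' settings.

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical generated trajectory: (T, 2) positions (e.g. x/y or lat/lon).
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0.0, 1.0, size=(200, 2)), axis=0)

# Smooth each coordinate to suppress high-frequency "wandering"
# (window length and polynomial order are illustrative choices).
smoothed = savgol_filter(traj, window_length=15, polyorder=3, axis=0)
```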
This list is automatically generated from the titles and abstracts of the papers in this site.