SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation
- URL: http://arxiv.org/abs/2108.00249v1
- Date: Sat, 31 Jul 2021 14:34:40 GMT
- Title: SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation
- Authors: Moira Shooter, Charles Malleson, Adrian Hilton (University of Surrey)
- Abstract summary: SyDog is a synthetic dataset of dogs containing ground truth pose and bounding box coordinates.
We demonstrate that pose estimation models trained on SyDog achieve better performance than models trained purely on real data.
- Score: 3.411873646414169
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating the pose of animals can facilitate the understanding of animal
motion which is fundamental in disciplines such as biomechanics, neuroscience,
ethology, robotics and the entertainment industry. Human pose estimation models
have achieved high performance due to the huge amount of training data
available. Achieving the same results for animal pose estimation is challenging
due to the lack of animal pose datasets. To address this problem we introduce
SyDog: a synthetic dataset of dogs containing ground truth pose and bounding
box coordinates which was generated using the game engine, Unity. We
demonstrate that pose estimation models trained on SyDog achieve better
performance than models trained purely on real data and significantly reduce
the need for the labour intensive labelling of images. We release the SyDog
dataset as a training and evaluation benchmark for research in animal motion.
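
Below is a minimal sketch of how SyDog-style annotations might be consumed, assuming a COCO-style keypoint layout and hypothetical file names (the abstract does not specify the release format). It loads keypoint and bounding-box annotations and pools synthetic samples with a small fraction of real images, mirroring the claim that synthetic data reduces the need for labour-intensive labelling.

```python
# Minimal sketch, NOT the released SyDog format: assumes COCO-style keypoint
# annotations and hypothetical file names purely for illustration.
import json
import random

# Hypothetical annotation files; the actual SyDog layout may differ.
SYNTHETIC_ANN = "sydog_synthetic_train.json"   # assumption
REAL_ANN = "real_dogs_train.json"              # assumption

def load_samples(path):
    """Load (image, keypoints, bbox) records from a COCO-style annotation file."""
    with open(path) as f:
        coco = json.load(f)
    images = {img["id"]: img["file_name"] for img in coco["images"]}
    samples = []
    for ann in coco["annotations"]:
        samples.append({
            "image": images[ann["image_id"]],
            "keypoints": ann["keypoints"],   # [x1, y1, v1, x2, y2, v2, ...]
            "bbox": ann["bbox"],             # [x, y, width, height]
        })
    return samples

def mixed_training_pool(synthetic, real, real_fraction=0.2, seed=0):
    """Pool all synthetic samples with a small subset of real samples,
    reflecting the paper's claim that synthetic data reduces labelling effort."""
    rng = random.Random(seed)
    kept_real = rng.sample(real, max(1, int(len(real) * real_fraction)))
    pool = synthetic + kept_real
    rng.shuffle(pool)
    return pool

if __name__ == "__main__":
    synthetic = load_samples(SYNTHETIC_ANN)
    real = load_samples(REAL_ANN)
    train_set = mixed_training_pool(synthetic, real)
    print(f"{len(train_set)} training samples ({len(synthetic)} synthetic)")
```
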
Related papers
- ZebraPose: Zebra Detection and Pose Estimation using only Synthetic Data [0.2302001830524133]
We use synthetic data generated with a 3D simulator to obtain the first synthetic dataset that can be used for both detection and 2D pose estimation of zebras.
We extensively train and benchmark our detection and 2D pose estimation models on multiple real-world and synthetic datasets.
These experiments show how the models trained from scratch and only with synthetic data can consistently generalize to real-world images of zebras.
arXiv Detail & Related papers (2024-08-20T13:28:37Z) - PoseBench: Benchmarking the Robustness of Pose Estimation Models under Corruptions [57.871692507044344]
Pose estimation aims to accurately identify anatomical keypoints in humans and animals using monocular images.
Current models are typically trained and tested on clean data, potentially overlooking the corruption during real-world deployment.
We introduce PoseBench, a benchmark designed to evaluate the robustness of pose estimation models against real-world corruption.
arXiv Detail & Related papers (2024-06-20T14:40:17Z) - OmniMotionGPT: Animal Motion Generation with Limited Data [70.35662376853163]
We introduce AnimalML3D, the first text-animal motion dataset with 1240 animation sequences spanning 36 different animal identities.
We are able to generate animal motions with high diversity and fidelity, quantitatively and qualitatively outperforming the results of training human motion generation baselines on animal data.
arXiv Detail & Related papers (2023-11-30T07:14:00Z) - Animal3D: A Comprehensive Dataset of 3D Animal Pose and Shape [32.11280929126699]
We propose Animal3D, the first comprehensive dataset for 3D pose and shape estimation of mammals.
Animal3D consists of 3379 images collected from 40 mammal species, high-quality annotations of 26 keypoints, and importantly the pose and shape parameters of the SMAL model.
Based on the Animal3D dataset, we benchmark representative shape and pose estimation models at: (1) supervised learning from only the Animal3D data, (2) synthetic to real transfer from synthetically generated images, and (3) fine-tuning human pose and shape estimation models.
arXiv Detail & Related papers (2023-08-22T18:57:07Z) - Prior-Aware Synthetic Data to the Rescue: Animal Pose Estimation with
Very Limited Real Data [18.06492246414256]
We present a data efficient strategy for pose estimation in quadrupeds that requires only a small amount of real images from the target animal.
It is confirmed that fine-tuning a backbone network with pretrained weights on generic image datasets such as ImageNet can mitigate the high demand for target animal pose data.
We introduce a prior-aware synthetic animal data generation pipeline called PASyn to augment the animal pose data essential for robust pose estimation (see the training sketch after this list).
arXiv Detail & Related papers (2022-08-30T01:17:50Z) - StyleGAN-Human: A Data-Centric Odyssey of Human Generation [96.7080874757475]
This work takes a data-centric perspective and investigates multiple critical aspects of "data engineering".
We collect and annotate a large-scale human image dataset with over 230K samples capturing diverse poses and textures.
We rigorously investigate three essential factors in data engineering for StyleGAN-based human generation, namely data size, data distribution, and data alignment.
arXiv Detail & Related papers (2022-04-25T17:55:08Z) - DynaDog+T: A Parametric Animal Model for Synthetic Canine Image
Generation [23.725295519857976]
We introduce a parametric canine model, DynaDog+T, for generating synthetic canine images and data.
We use this data for a common computer vision task, binary segmentation, which would otherwise be difficult due to the lack of available data.
arXiv Detail & Related papers (2021-07-15T13:53:10Z) - Unsupervised Shape and Pose Disentanglement for 3D Meshes [49.431680543840706]
We present a simple yet effective approach to learn disentangled shape and pose representations in an unsupervised setting.
We use a combination of self-consistency and cross-consistency constraints to learn pose and shape space from registered meshes.
We demonstrate the usefulness of learned representations through a number of tasks including pose transfer and shape retrieval.
arXiv Detail & Related papers (2020-07-22T11:00:27Z) - Cascaded deep monocular 3D human pose estimation with evolutionary
training data [76.3478675752847]
Deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation.
This paper proposes a novel data augmentation method that is scalable for massive amount of training data.
Our method synthesizes unseen 3D human skeletons based on a hierarchical human representation and heuristics inspired by prior knowledge.
arXiv Detail & Related papers (2020-06-14T03:09:52Z) - Deformation-aware Unpaired Image Translation for Pose Estimation on
Laboratory Animals [56.65062746564091]
We aim to capture the pose of neuroscience model organisms, without using any manual supervision, to study how neural circuits orchestrate behaviour.
Our key contribution is the explicit and independent modeling of appearance, shape and poses in an unpaired image translation framework.
We demonstrate improved pose estimation accuracy on Drosophila melanogaster (fruit fly), Caenorhabditis elegans (worm) and Danio rerio (zebrafish).
arXiv Detail & Related papers (2020-01-23T15:34:11Z)