Evaluation of deep lift pose models for 3D rodent pose estimation based
on geometrically triangulated data
- URL: http://arxiv.org/abs/2106.12993v1
- Date: Thu, 24 Jun 2021 13:08:33 GMT
- Title: Evaluation of deep lift pose models for 3D rodent pose estimation based
on geometrically triangulated data
- Authors: Indrani Sarkar, Indranil Maji, Charitha Omprakash, Sebastian Stober,
Sanja Mikulovic, Pavol Bauer
- Abstract summary: Behavior is typically studied in terms of pose changes, which are ideally captured in three dimensions.
This requires triangulation over a multi-camera system that views the animal from different angles.
Here we propose the use of lift-pose models that allow for robust 3D pose estimation of freely moving rodents from a single camera view.
- Score: 1.84316002191515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The assessment of laboratory animal behavior is of central interest in modern
neuroscience research. Behavior is typically studied in terms of pose changes,
which are ideally captured in three dimensions. This requires triangulation
over a multi-camera system that views the animal from different angles.
However, this is challenging in realistic laboratory setups due to occlusions
and other technical constraints. Here we propose the use of lift-pose models
that allow for robust 3D pose estimation of freely moving rodents from a single
camera view. To obtain high-quality training data for the pose lifting, we
first perform geometric calibration in a camera setup involving bottom as well
as side views of the behaving animal. We then evaluate the performance of two
previously proposed model architectures under given inference perspectives and
conclude that reliable 3D pose inference can be obtained using temporal
convolutions. With this work we would like to contribute to more robust and
diverse behavior tracking of freely moving rodents for a wide range of
experiments and setups in the neuroscience community.
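
The abstract describes obtaining 3D training data by geometric calibration of bottom and side views followed by triangulation. As a minimal illustrative sketch only (the camera matrices, keypoint values, and helper names below are invented placeholders, not the authors' actual pipeline), linear DLT triangulation of a single keypoint from two or more calibrated views can be written in Python as:

import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D keypoint from >= 2 calibrated views.

    proj_mats : list of 3x4 camera projection matrices (K [R|t]).
    points_2d : list of (x, y) pixel detections, one per camera.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical example: two cameras (e.g. bottom and side view) observing one keypoint.
P_bottom = np.hstack([np.eye(3), np.zeros((3, 1))])                # reference camera
P_side = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # translated camera
keypoint_3d = np.array([0.1, 0.05, 1.0])
detections = [project(P_bottom, keypoint_3d), project(P_side, keypoint_3d)]
print(triangulate_point([P_bottom, P_side], detections))           # ~ [0.1, 0.05, 1.0]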
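
The abstract further reports that reliable 3D pose inference was obtained using temporal convolutions. Lift-pose models of this kind typically map a short window of 2D keypoint detections to a 3D pose with dilated 1D convolutions over time (in the spirit of temporal-convolution lifters such as VideoPose3D). The joint count, window length, and channel widths below are illustrative assumptions, not the architecture evaluated in the paper:

import torch
import torch.nn as nn

class TemporalLiftPose(nn.Module):
    """Minimal temporal-convolution lifter: a window of 2D keypoints -> one 3D pose."""

    def __init__(self, n_joints=12, channels=128):
        super().__init__()
        self.n_joints = n_joints
        self.net = nn.Sequential(
            # Input: (batch, 2 * n_joints, frames) -- per-joint x/y coordinates over time.
            nn.Conv1d(2 * n_joints, channels, kernel_size=3, dilation=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, dilation=3),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, 3 * n_joints, kernel_size=3),
        )

    def forward(self, keypoints_2d):
        # keypoints_2d: (batch, frames, n_joints, 2)
        b, f, j, _ = keypoints_2d.shape
        x = keypoints_2d.reshape(b, f, 2 * j).transpose(1, 2)  # (batch, 2j, frames)
        out = self.net(x)                                      # (batch, 3j, frames')
        centre = out[:, :, out.shape[-1] // 2]                 # prediction for the centre frame
        return centre.reshape(b, j, 3)

# Hypothetical usage: batches of 13-frame windows with 12 tracked keypoints.
model = TemporalLiftPose()
window = torch.randn(4, 13, 12, 2)
pose_3d = model(window)   # shape (4, 12, 3)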
Related papers
- 3D-Aware Hypothesis & Verification for Generalizable Relative Object
Pose Estimation [69.73691477825079]
We present a new hypothesis-and-verification framework to tackle the problem of generalizable object pose estimation.
To measure reliability, we introduce a 3D-aware verification that explicitly applies 3D transformations to the 3D object representations learned from the two input images.
arXiv Detail & Related papers (2023-10-05T13:34:07Z)
- Predictive Modeling of Equine Activity Budgets Using a 3D Skeleton Reconstructed from Surveillance Recordings [0.8602553195689513]
We present a pipeline to reconstruct the 3D pose of a horse from 4 simultaneous surveillance camera recordings.
Our environment poses interesting challenges, such as the limited field of view of the cameras and a relatively closed and small environment.
arXiv Detail & Related papers (2023-06-08T16:00:04Z)
- Few-View Object Reconstruction with Unknown Categories and Camera Poses [80.0820650171476]
This work explores reconstructing general real-world objects from a few images without known camera poses or object categories.
The crux of our work is solving two fundamental 3D vision problems -- shape reconstruction and pose estimation.
Our method FORGE predicts 3D features from each view and leverages them in conjunction with the input images to establish cross-view correspondence.
arXiv Detail & Related papers (2022-12-08T18:59:02Z)
- State of the Art in Dense Monocular Non-Rigid 3D Reconstruction [100.9586977875698]
3D reconstruction of deformable (or non-rigid) scenes from a set of monocular 2D image observations is a long-standing and actively researched area of computer vision and graphics.
This survey focuses on state-of-the-art methods for dense non-rigid 3D reconstruction of various deformable objects and composite scenes from monocular videos or sets of monocular views.
arXiv Detail & Related papers (2022-10-27T17:59:53Z)
- MetaPose: Fast 3D Pose from Multiple Views without 3D Supervision [72.5863451123577]
We show how to train a neural model that can perform accurate 3D pose and camera estimation.
Our method outperforms both classical bundle adjustment and weakly-supervised monocular 3D baselines.
arXiv Detail & Related papers (2021-08-10T18:39:56Z)
- Kinematic-Structure-Preserved Representation for Unsupervised 3D Human Pose Estimation [58.72192168935338]
Generalizability of human pose estimation models developed using supervision on large-scale in-studio datasets remains questionable.
We propose a novel kinematic-structure-preserved unsupervised 3D pose estimation framework, which does not rely on any paired or unpaired weak supervision.
Our proposed model employs three consecutive differentiable transformations, named forward kinematics, camera projection, and spatial-map transformation (a rough sketch of such a projection chain appears after this list).
arXiv Detail & Related papers (2020-06-24T23:56:33Z)
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)
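
As referenced in the kinematic-structure-preserved entry above, such frameworks chain differentiable transformations, e.g. forward kinematics followed by camera projection. The sketch below is purely illustrative; the joint hierarchy, bone lengths, and camera intrinsics are invented placeholders rather than the cited paper's design:

import torch

def forward_kinematics(root, bone_dirs, bone_lengths, parents):
    """Place each joint by walking the kinematic tree from the root joint.

    root         : (3,) root joint position
    bone_dirs    : (J, 3) unit direction of each bone (e.g. predicted by a network)
    bone_lengths : (J,) fixed bone lengths
    parents      : parent index of each joint (-1 for the root)
    """
    joints = [root]
    for j, p in enumerate(parents[1:], start=1):
        joints.append(joints[p] + bone_lengths[j] * bone_dirs[j])
    return torch.stack(joints)          # (J, 3)

def camera_projection(joints_3d, focal=500.0, center=(320.0, 240.0)):
    """Differentiable pinhole projection of 3D joints to 2D pixel coordinates."""
    x = focal * joints_3d[:, 0] / joints_3d[:, 2] + center[0]
    y = focal * joints_3d[:, 1] / joints_3d[:, 2] + center[1]
    return torch.stack([x, y], dim=-1)  # (J, 2)

# Hypothetical 4-joint chain: root -> spine -> head, plus root -> tail.
parents = [-1, 0, 1, 0]
root = torch.tensor([0.0, 0.0, 2.0])
directions = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
lengths = torch.tensor([0.0, 0.10, 0.05, 0.08])
pose_2d = camera_projection(forward_kinematics(root, directions, lengths, parents))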
This list is automatically generated from the titles and abstracts of the papers on this site.