Direct and Sparse Deformable Tracking
- URL: http://arxiv.org/abs/2109.07370v1
- Date: Wed, 15 Sep 2021 15:28:10 GMT
- Title: Direct and Sparse Deformable Tracking
- Authors: Jose Lamarca, Juan J. Gomez Rodriguez, Juan D. Tardos and J.M.M. Montiel
- Abstract summary: We introduce a novel deformable camera tracking method with a local deformation model for each point.
Thanks to a direct photometric error cost function, we can track the position and orientation of the surfel without an explicit global deformation model.
- Score: 4.874780144224057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deformable Monocular SLAM algorithms recover the localization of a camera in
an unknown deformable environment. Current approaches use a template-based
deformable tracking to recover the camera pose and the deformation of the map.
These template-based methods use an underlying global deformation model. In
this paper, we introduce a novel deformable camera tracking method with a local
deformation model for each point. Each map point is defined as a single
textured surfel that moves independently of the other map points. Thanks to a
direct photometric error cost function, we can track the position and
orientation of the surfel without an explicit global deformation model. In our
experiments, we validate the proposed system and observe that our local
deformation model estimates the targeted deformations of the map more
accurately and robustly, both in laboratory-controlled experiments and in
in-body scenarios undergoing non-isometric deformations with changing topology
or discontinuities.
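As a rough illustration of the direct photometric idea (a minimal sketch, not the authors' implementation): each surfel stores a small patch of template intensities, and a candidate pose is scored by re-projecting the patch into the current image and summing squared intensity differences. The pinhole projection, the nearest-neighbor sampling, and all names below are simplifying assumptions.

```python
import numpy as np

def project(K, X):
    """Pinhole projection of 3D points X (N,3) with intrinsics K (3,3)."""
    x = (K @ X.T).T
    return x[:, :2] / x[:, 2:3]

def photometric_error(img, template, surfel_pts, R, t, K):
    """Sum of squared intensity differences between a surfel's stored
    template intensities and the current image, after moving the surfel
    by rotation R and translation t (its local pose/deformation)."""
    X = (R @ surfel_pts.T).T + t                 # rigidly move the surfel patch
    uv = project(K, X)                           # project into the image
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, img.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, img.shape[0] - 1)
    residual = img[v, u] - template              # per-pixel photometric residual
    return float(np.sum(residual ** 2))
```

Minimizing this cost over (R, t) with a nonlinear least-squares solver gives a per-surfel pose without any global deformation model, which is the core of the local tracking idea.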
Related papers
- The Drunkard's Odometry: Estimating Camera Motion in Deforming Scenes [79.00228778543553]
This dataset is the first large set of exploratory camera trajectories with ground truth inside 3D scenes.
Simulations in realistic 3D buildings let us obtain a vast amount of data and ground truth labels.
We present a novel deformable odometry method, dubbed the Drunkard's Odometry, which decomposes optical flow estimates into rigid-body camera motion.
arXiv Detail & Related papers (2023-06-29T13:09:31Z)
- Neural Shape Deformation Priors [14.14047635248036]
We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
arXiv Detail & Related papers (2022-10-11T17:03:25Z)
- Tracking monocular camera pose and deformation for SLAM inside the human body [2.094821665776961]
We propose a novel method to simultaneously track the camera pose and the 3D scene deformation.
The method uses an illumination-invariant photometric method to track image features and estimates camera motion and deformation.
Our results in simulated colonoscopies show the method's accuracy and robustness in complex scenes under increasing levels of deformation.
arXiv Detail & Related papers (2022-04-18T13:25:23Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
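The pose-driven deformation field above builds on the classic linear blend skinning (LBS) algorithm. A minimal NumPy sketch of plain LBS follows; the neural components of the paper are omitted, and the array shapes are assumptions.

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """Deform rest-pose vertices with linear blend skinning.

    verts:      (N, 3) rest-pose vertex positions
    weights:    (N, J) per-vertex skinning weights, rows sum to 1
    transforms: (J, 4, 4) per-joint rigid transforms
    """
    N = verts.shape[0]
    homo = np.hstack([verts, np.ones((N, 1))])               # homogeneous coords
    # Blend the joint transforms per vertex, then apply the blend once.
    blended = np.einsum('nj,jab->nab', weights, transforms)  # (N, 4, 4)
    out = np.einsum('nab,nb->na', blended, homo)
    return out[:, :3]
```

Blending transforms (rather than blending the transformed points) is the standard LBS formulation; its well-known artifacts at joints are what learned deformation fields aim to correct.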
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
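A common way such a residual covariance is used in Bundle Adjustment (a generic technique, not necessarily this paper's exact formulation) is to whiten each residual by its covariance, so the optimizer effectively minimizes a Mahalanobis norm:

```python
import numpy as np

def whitened_residual(r, Sigma):
    """Whiten a visual residual r by its covariance Sigma, so ordinary
    least squares on the whitened residual minimizes the Mahalanobis
    norm r^T Sigma^{-1} r (uncertain residuals are down-weighted)."""
    L = np.linalg.cholesky(Sigma)   # Sigma = L @ L.T
    return np.linalg.solve(L, r)    # L^{-1} r
```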
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Extracting Deformation-Aware Local Features by Learning to Deform [3.364554138758565]
We present a new approach to compute features from still images that are robust to non-rigid deformations.
We train the model architecture end-to-end by applying non-rigid deformations to objects in a simulated environment.
Experiments show that our method outperforms state-of-the-art handcrafted, learning-based image, and RGB-D descriptors in different datasets.
arXiv Detail & Related papers (2021-11-20T15:46:33Z)
- Identity-Disentangled Neural Deformation Model for Dynamic Meshes [8.826835863410109]
We learn a neural deformation model that disentangles identity-induced shape variations from pose-dependent deformations using implicit neural functions.
We propose two methods to integrate global pose alignment with our neural deformation model.
Our method also outperforms traditional skeleton-driven models in reconstructing surface details such as palm prints or tendons without limitations from a fixed template.
arXiv Detail & Related papers (2021-09-30T17:43:06Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
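Root finding for forward skinning can be sketched generically: given a deformed query point, search for the canonical point that the forward skinning map sends to it. The Newton iteration and finite-difference Jacobian below are illustrative assumptions, not SNARF's exact implementation.

```python
import numpy as np

def find_canonical(x_d, forward_skin, x_init, iters=20, eps=1e-6):
    """Solve forward_skin(x_c) = x_d for the canonical point x_c with
    Newton iterations on the residual, using a finite-difference Jacobian.
    forward_skin: callable mapping a canonical 3D point to deformed space."""
    x = x_init.astype(float).copy()
    for _ in range(iters):
        r = forward_skin(x) - x_d                 # residual in deformed space
        if np.linalg.norm(r) < eps:
            break
        # Central-difference Jacobian of the forward map (3x3)
        J = np.zeros((3, 3))
        h = 1e-5
        for k in range(3):
            dx = np.zeros(3); dx[k] = h
            J[:, k] = (forward_skin(x + dx) - forward_skin(x - dx)) / (2 * h)
        x -= np.linalg.solve(J, r)                # Newton update
    return x
```

In practice multiple initializations are needed because a deformed point can have several canonical correspondences (e.g. near self-contact), which is why the paper searches for all of them.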
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Dense Non-Rigid Structure from Motion: A Manifold Viewpoint [162.88686222340962]
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.