Dense Non-Rigid Structure from Motion: A Manifold Viewpoint
- URL: http://arxiv.org/abs/2006.09197v1
- Date: Mon, 15 Jun 2020 09:15:54 GMT
- Title: Dense Non-Rigid Structure from Motion: A Manifold Viewpoint
- Authors: Suryansh Kumar, Luc Van Gool, Carlos E. P. de Oliveira, Anoop Cherian,
Yuchao Dai, Hongdong Li
- Abstract summary: The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
- Score: 162.88686222340962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover
the 3D geometry of a deforming object from its 2D feature correspondences
across multiple
frames. Classical approaches to this problem assume a small number of feature
points, ignore the local non-linearities of the shape deformation, and
therefore struggle to reliably model non-linear deformations. Furthermore,
available dense NRSfM algorithms are often hampered by poor scalability, heavy
computation, and noisy measurements, and are restricted to modeling only
global deformation. In this paper, we propose algorithms that overcome these
limitations of previous methods and, at the same time, recover a reliable
dense 3D structure of a non-rigid object with higher accuracy.
Assuming that a deforming shape is composed of a union of local linear
subspaces and spans a global low-rank space over multiple frames enables us to
efficiently model complex non-rigid deformations. To that end, each local
linear subspace is represented using Grassmannians, and the global 3D shape
across multiple frames is represented using a low-rank representation. We show
that our approach significantly improves accuracy, scalability, and robustness
against noise. Also, our representation naturally allows for a simultaneous
reconstruction and clustering framework, which is generally better suited to
NRSfM problems. Our method currently achieves leading performance on the
standard benchmark datasets.
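The two modeling ingredients named in the abstract — representing a local patch by the linear subspace it spans (a point on a Grassmann manifold, obtained via SVD) and constraining the stacked shapes across frames to a low-rank space — can be sketched in plain NumPy. This is a minimal illustration of the general idea, not the authors' implementation; the function names, matrix shapes, and toy data are assumptions for demonstration only.

```python
import numpy as np

def grassmann_point(patch, k):
    """Represent a local patch by the k-dimensional subspace its columns
    span: an orthonormal basis, i.e. a point on the Grassmann manifold
    Gr(k, n). Obtained from the left singular vectors of the patch."""
    u, _, _ = np.linalg.svd(patch, full_matrices=False)
    return u[:, :k]

def low_rank_approx(shape_matrix, r):
    """Project stacked per-frame shapes onto the closest rank-r matrix
    (truncated SVD), enforcing a global low-rank space over frames."""
    u, s, vt = np.linalg.svd(shape_matrix, full_matrices=False)
    return u[:, :r] @ np.diag(s[:r]) @ vt[:r, :]

# Toy data: a matrix with exact rank-2 structure plus small noise.
rng = np.random.default_rng(0)
base = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
noisy = base + 1e-3 * rng.standard_normal((6, 5))

basis = grassmann_point(noisy, 2)        # orthonormal: basis.T @ basis = I
approx = low_rank_approx(noisy, 2)       # rank-2 denoised reconstruction
print(np.allclose(basis.T @ basis, np.eye(2)))
print(np.allclose(approx, base, atol=1e-2))
```

The orthonormality of `basis` is what makes it a well-defined Grassmann point: any two bases spanning the same subspace are identified, so optimization can respect the manifold geometry rather than the particular coordinates.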
Related papers
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- A Scalable Combinatorial Solver for Elastic Geometrically Consistent 3D Shape Matching [69.14632473279651]
We present a scalable algorithm for globally optimizing over the space of geometrically consistent mappings between 3D shapes.
We propose a novel primal problem coupled with a Lagrange dual problem that is several orders of magnitude faster than previous solvers.
arXiv Detail & Related papers (2022-04-27T09:47:47Z)
- Disentangling Geometric Deformation Spaces in Generative Latent Shape Models [5.582957809895198]
A complete representation of 3D objects requires characterizing the space of deformations in an interpretable manner.
We improve on a prior generative model of disentanglement for 3D shapes, wherein the space of object geometry is factorized into rigid orientation, non-rigid pose, and intrinsic shape.
The resulting model can be trained from raw 3D shapes, without correspondences, labels, or even rigid alignment.
arXiv Detail & Related papers (2021-02-27T06:54:31Z)
- ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) based on an Euler discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
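The correspondence described above — each residual block acting as one explicit Euler step of the flow ODE dx/dt = v(x) — can be sketched as follows. This is an illustrative toy, not the ResNet-LDDMM code; the velocity field and step parameters are assumptions for demonstration.

```python
import numpy as np

def euler_flow(x, velocity, n_steps=10, h=0.1):
    """Integrate dx/dt = velocity(x) with explicit Euler steps.
    Each update x <- x + h * velocity(x) mirrors one residual block
    (identity plus a learned residual)."""
    for _ in range(n_steps):
        x = x + h * velocity(x)
    return x

# Toy velocity field: a uniform translation. Ten steps of size 0.1
# move every point by exactly 1.0 along each axis.
v = lambda x: np.ones_like(x)
points = np.zeros((4, 3))           # four 3D points at the origin
moved = euler_flow(points, v)
print(np.allclose(moved, 1.0))      # True
```

For a non-constant (learned) velocity field, the same loop applies; the step size h and step count trade integration accuracy against network depth.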
arXiv Detail & Related papers (2021-02-16T04:07:13Z)
- DEF: Deep Estimation of Sharp Geometric Features in 3D Shapes [43.853000396885626]
We propose a learning-based framework for predicting sharp geometric features in sampled 3D shapes.
By fusing the results of individual patches, we can process large 3D models that existing data-driven methods cannot handle.
arXiv Detail & Related papers (2020-11-30T18:21:00Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Robust Isometric Non-Rigid Structure-from-Motion [29.229898443263238]
Non-Rigid Structure-from-Motion (NRSfM) reconstructs a deformable 3D object from the correspondences established between monocular 2D images.
Current NRSfM methods lack statistical robustness, which is the ability to cope with correspondence errors.
We propose a three-step automatic pipeline to solve NRSfM robustly by exploiting isometry.
arXiv Detail & Related papers (2020-10-09T17:25:00Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
- Segmentation and Recovery of Superquadric Models using Convolutional Neural Networks [2.454342521577328]
We present a two-stage approach built around convolutional neural networks (CNNs).
In the first stage, our approach uses a Mask RCNN model to identify superquadric-like structures in depth scenes.
We are able to describe complex structures with a small number of interpretable parameters.
arXiv Detail & Related papers (2020-01-28T18:17:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.