LASR: Learning Articulated Shape Reconstruction from a Monocular Video
- URL: http://arxiv.org/abs/2105.02976v1
- Date: Thu, 6 May 2021 21:41:11 GMT
- Title: LASR: Learning Articulated Shape Reconstruction from a Monocular Video
- Authors: Gengshan Yang, Deqing Sun, Varun Jampani, Daniel Vlasic, Forrester
Cole, Huiwen Chang, Deva Ramanan, William T. Freeman, Ce Liu
- Abstract summary: We introduce a template-free approach to learn 3D shapes from a single video.
Our method faithfully reconstructs nonrigid 3D structures from videos of humans, animals, and objects of unknown classes.
- Score: 97.92849567637819
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Remarkable progress has been made in 3D reconstruction of rigid structures
from a video or a collection of images. However, it remains challenging to
reconstruct nonrigid structures from RGB inputs, due to the under-constrained
nature of the problem. While template-based approaches, such as parametric shape
models, have achieved great success in modeling the "closed world" of known
object categories, they cannot handle the "open world" of novel object
categories or outlier shapes well. In this work, we introduce a template-free approach to learn
3D shapes from a single video. It adopts an analysis-by-synthesis strategy that
forward-renders object silhouette, optical flow, and pixel values to compare
with video observations, which generates gradients to adjust the camera, shape
and motion parameters. Without using a category-specific shape template, our
method faithfully reconstructs nonrigid 3D structures from videos of humans,
animals, and objects of unknown classes. Code will be available at
lasr-google.github.io.
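The analysis-by-synthesis loop described above boils down to: render from the current shape, camera, and motion estimates, compare the rendering with the observed frame, and backpropagate the error. Below is a minimal PyTorch sketch of the silhouette term only, using a toy Gaussian-splatting renderer as a stand-in for a real differentiable rasterizer; it is illustrative and not the authors' implementation (the flow and photometric terms follow the same render-and-compare pattern).

```python
# Minimal analysis-by-synthesis sketch (illustrative only, not the LASR code).
# A toy differentiable "renderer" splats projected vertices into a soft
# silhouette; a real pipeline would use a differentiable rasterizer instead.
import torch

H = W = 32                                                 # render resolution
verts = torch.randn(100, 3, requires_grad=True)            # shape parameters (hypothetical)
cam = torch.tensor([1.0, 0.0, 0.0], requires_grad=True)    # scale + 2D translation

ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
grid = torch.stack([xs, ys], -1).reshape(-1, 2)            # (H*W, 2) pixel coordinates

def render_silhouette(verts, cam, sigma=0.05):
    # Weak-perspective projection followed by Gaussian splatting (soft union).
    proj = verts[:, :2] * cam[0] + cam[1:]                        # (N, 2)
    d2 = ((grid[:, None, :] - proj[None, :, :]) ** 2).sum(-1)     # (H*W, N)
    return 1.0 - torch.prod(1.0 - torch.exp(-d2 / sigma), dim=1)  # (H*W,)

target_sil = (grid.pow(2).sum(-1) < 0.25).float()   # stand-in for the observed mask

opt = torch.optim.Adam([verts, cam], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render_silhouette(verts, cam), target_sil)
    loss.backward()            # gradients flow back to shape and camera parameters
    opt.step()
```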
Related papers
- CAMM: Building Category-Agnostic and Animatable 3D Models from Monocular
Videos [3.356334042188362]
We propose a novel reconstruction method that learns an animatable kinematic chain for any articulated object.
Our approach is on par with state-of-the-art 3D surface reconstruction methods on various articulated object categories.
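For context, driving a shape with a kinematic chain is commonly implemented with linear blend skinning; the sketch below is a generic two-bone example, not CAMM's actual formulation (bone count, skinning weights, and rotations are made up for illustration).

```python
# Generic linear blend skinning over a two-joint kinematic chain
# (illustrative only; not the CAMM model).
import torch

def rot_z(theta):
    # 3x3 rotation about the z-axis.
    c, s = torch.cos(theta), torch.sin(theta)
    z, o = torch.zeros_like(c), torch.ones_like(c)
    return torch.stack([torch.stack([c, -s, z]),
                        torch.stack([s,  c, z]),
                        torch.stack([z,  z, o])])

verts = torch.rand(500, 3)                       # canonical point cloud (hypothetical)
weights = torch.softmax(torch.rand(500, 2), -1)  # per-point skinning weights, 2 bones

def pose(verts, weights, angles):
    # The child bone inherits the parent transform along the chain.
    R_parent = rot_z(angles[0])
    R_child = R_parent @ rot_z(angles[1])
    posed = torch.stack([verts @ R_parent.T, verts @ R_child.T], 1)  # (N, 2, 3)
    return (weights[..., None] * posed).sum(1)   # blend by skinning weights

deformed = pose(verts, weights, torch.tensor([0.3, -0.5]))
print(deformed.shape)  # torch.Size([500, 3])
```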
arXiv Detail & Related papers (2023-04-14T06:07:54Z)
- 3D Surface Reconstruction in the Wild by Deforming Shape Priors from Synthetic Data [24.97027425606138]
Reconstructing the underlying 3D surface of an object from a single image is a challenging problem.
We present a new method for joint category-specific 3D reconstruction and object pose estimation from a single image.
Our approach achieves state-of-the-art reconstruction performance across several real-world datasets.
arXiv Detail & Related papers (2023-02-24T20:37:27Z)
- Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
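As a rough illustration of what a Laplacian over a point cloud looks like, the snippet below builds a k-nearest-neighbor graph Laplacian; the paper instead learns to predict the shape Laplacian, so this is only a conceptual stand-in with an assumed neighborhood size.

```python
# Graph Laplacian of a point cloud via k-nearest neighbors
# (a common stand-in for a shape Laplacian; not the paper's learned predictor).
import torch

def knn_graph_laplacian(points, k=8):
    # points: (N, 3). Returns the (N, N) combinatorial graph Laplacian L = D - A.
    d2 = torch.cdist(points, points)                     # pairwise distances
    knn = d2.topk(k + 1, largest=False).indices[:, 1:]   # drop the self-neighbor
    n = points.shape[0]
    A = torch.zeros(n, n)
    A[torch.arange(n)[:, None], knn] = 1.0
    A = torch.maximum(A, A.T)                            # symmetrize the adjacency
    return torch.diag(A.sum(1)) - A

pts = torch.rand(200, 3)
L = knn_graph_laplacian(pts)
print(L.shape, torch.allclose(L.sum(1), torch.zeros(200)))  # rows sum to zero
```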
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- DOVE: Learning Deformable 3D Objects by Watching Videos [89.43105063468077]
We present DOVE, which learns to predict 3D canonical shape, deformation, viewpoint and texture from a single 2D image of a bird.
Our method reconstructs temporally consistent 3D shape and deformation, which allows us to animate and re-render the bird from arbitrary viewpoints.
arXiv Detail & Related papers (2021-07-22T17:58:10Z)
- Learning monocular 3D reconstruction of articulated categories from motion [39.811816510186475]
Video self-supervision forces the consistency of consecutive 3D reconstructions by a motion-based cycle loss.
We introduce an interpretable model of 3D template deformations that controls a 3D surface through the displacement of a small number of local, learnable handles.
We obtain state-of-the-art reconstructions with diverse shapes, viewpoints and textures for multiple articulated object categories.
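Handle-based surface control of this kind can be sketched as each vertex moving by a weighted sum of a few handle displacements; the handle count and weighting below are illustrative assumptions, not the paper's learned values.

```python
# Handle-based surface control: each vertex moves as a weighted combination of a
# few handle displacements (illustrative sketch, not the paper's exact model).
import torch

verts = torch.rand(1000, 3)     # template surface points (hypothetical)
handles = torch.rand(8, 3)      # a small number of local handles

# Soft assignment of vertices to handles by distance (learnable in practice).
w = torch.softmax(-torch.cdist(verts, handles), dim=1)   # (1000, 8)

def deform(verts, w, handle_disp):
    # handle_disp: (8, 3) predicted per-handle displacement.
    return verts + w @ handle_disp

moved = deform(verts, w, torch.zeros(8, 3).normal_(0, 0.05))
print(moved.shape)  # torch.Size([1000, 3])
```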
arXiv Detail & Related papers (2021-03-30T13:50:27Z)
- Online Adaptation for Consistent Mesh Reconstruction in the Wild [147.22708151409765]
We pose video-based reconstruction as a self-supervised online adaptation problem applied to any incoming test video.
We demonstrate that our algorithm recovers temporally consistent and reliable 3D structures from videos of non-rigid objects including those of animals captured in the wild.
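A generic sketch of such test-time (online) adaptation is given below, with a placeholder model and loss standing in for the paper's mesh predictor and self-supervised objectives.

```python
# Test-time (online) adaptation sketch: fine-tune on each incoming video with a
# self-supervised loss (illustrative; the paper's model and losses differ).
import torch

model = torch.nn.Linear(16, 3)                  # stand-in for a mesh predictor
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def self_supervised_loss(pred):
    # Placeholder consistency term; real systems compare renderings to the frame.
    return pred.pow(2).mean()

test_video = [torch.rand(1, 16) for _ in range(30)]   # fake per-frame features
for frame in test_video:                       # adapt online, frame by frame
    opt.zero_grad()
    self_supervised_loss(model(frame)).backward()
    opt.step()
```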
arXiv Detail & Related papers (2020-12-06T07:22:27Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
- Single-View 3D Object Reconstruction from Shape Priors in Memory [15.641803721287628]
A single image alone often does not provide enough information for existing single-view 3D object reconstruction methods to recover high-quality 3D shapes.
We propose a novel method, named Mem3D, that explicitly constructs shape priors to supplement the missing information in the image.
We also propose a voxel triplet loss function that helps to retrieve the precise 3D shapes that are highly related to the input image from shape priors.
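The snippet below shows a generic triplet margin loss applied to flattened voxel grids, in the spirit of the voxel triplet loss mentioned here; the paper's exact formulation may differ, and the grid sizes are assumed for illustration.

```python
# A generic triplet margin loss over voxel grids (conceptual stand-in for the
# "voxel triplet loss"; not necessarily the paper's formulation).
import torch
import torch.nn.functional as F

def voxel_triplet_loss(anchor, positive, negative, margin=1.0):
    # anchor/positive/negative: (B, D, D, D) occupancy grids, flattened to vectors.
    a, p, n = (x.flatten(1) for x in (anchor, positive, negative))
    return F.triplet_margin_loss(a, p, n, margin=margin)

B, D = 4, 32
anchor = torch.rand(B, D, D, D)
positive = anchor + 0.01 * torch.randn_like(anchor)    # shape similar to the anchor
negative = torch.rand(B, D, D, D)                      # unrelated shape
print(voxel_triplet_loss(anchor, positive, negative))
```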
arXiv Detail & Related papers (2020-03-08T03:51:07Z)