DOVE: Learning Deformable 3D Objects by Watching Videos
- URL: http://arxiv.org/abs/2107.10844v1
- Date: Thu, 22 Jul 2021 17:58:10 GMT
- Title: DOVE: Learning Deformable 3D Objects by Watching Videos
- Authors: Shangzhe Wu, Tomas Jakab, Christian Rupprecht, Andrea Vedaldi
- Abstract summary: We present DOVE, which learns to predict 3D canonical shape, deformation, viewpoint and texture from a single 2D image of a bird.
Our method reconstructs temporally consistent 3D shape and deformation, which allows us to animate and re-render the bird from arbitrary viewpoints.
- Score: 89.43105063468077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning deformable 3D objects from 2D images is an extremely ill-posed
problem. Existing methods rely on explicit supervision to establish multi-view
correspondences, such as template shape models and keypoint annotations, which
restricts their applicability on objects "in the wild". In this paper, we
propose to use monocular videos, which naturally provide correspondences across
time, allowing us to learn 3D shapes of deformable object categories without
explicit keypoints or template shapes. Specifically, we present DOVE, which
learns to predict 3D canonical shape, deformation, viewpoint and texture from a
single 2D image of a bird, given a bird video collection as well as
automatically obtained silhouettes and optical flows as training data. Our
method reconstructs temporally consistent 3D shape and deformation, which
allows us to animate and re-render the bird from arbitrary viewpoints given
only a single image.
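The factored prediction the abstract describes (one image mapped to a canonical shape, a per-frame deformation, a viewpoint, and a texture, supervised only by silhouettes and optical flow) can be sketched at a toy scale. This is a hypothetical illustration, not the authors' code: `predict_factors` stands in for the learned encoder, and a point-splat projection stands in for the differentiable renderer.

```python
import numpy as np

def predict_factors(image, n_verts=16):
    """Stand-in for the learned encoder: maps one image to the four factors.
    (Illustrative only: a real model would be a trained neural network.)"""
    seed = int(image.sum()) % (2**32)          # deterministic toy "features"
    rng = np.random.default_rng(seed)
    return {
        "canonical_shape": rng.uniform(-1, 1, (n_verts, 3)),      # category rest shape
        "deformation":     0.1 * rng.standard_normal((n_verts, 3)),  # per-frame offsets
        "viewpoint":       rng.uniform(0, 2 * np.pi, 3),          # azimuth/elev/roll
        "texture":         rng.uniform(0, 1, (8, 8, 3)),          # coarse texture map
    }

def project_silhouette(verts, viewpoint, size=32):
    """Orthographic point-splat, standing in for a differentiable renderer."""
    az = viewpoint[0]
    rot = np.array([[np.cos(az), -np.sin(az), 0],
                    [np.sin(az),  np.cos(az), 0],
                    [0,           0,          1]])
    xy = (verts @ rot.T)[:, :2]                # rotate about z, drop depth
    px = ((xy + 1) / 2 * (size - 1)).astype(int).clip(0, size - 1)
    sil = np.zeros((size, size))
    sil[px[:, 1], px[:, 0]] = 1.0
    return sil

def silhouette_loss(pred_sil, target_sil):
    """Mask reconstruction loss: the signal that replaces keypoint labels."""
    return float(np.mean((pred_sil - target_sil) ** 2))

# Toy usage: factor one frame, pose the deformed shape, compare silhouettes.
frame = np.full((64, 64, 3), 0.5)
f = predict_factors(frame)
posed = f["canonical_shape"] + f["deformation"]
pred_sil = project_silhouette(posed, f["viewpoint"])
loss = silhouette_loss(pred_sil, np.ones((32, 32)))
```

Because the same canonical shape is shared across all frames of a video while deformation and viewpoint vary per frame, temporal correspondence supervises the 3D factors without explicit keypoints.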
Related papers
- Ponymation: Learning Articulated 3D Animal Motions from Unlabeled Online Videos [47.97168047776216]
We introduce a new method for learning a generative model of articulated 3D animal motions from raw, unlabeled online videos.
Our model learns purely from a collection of unlabeled web video clips, leveraging semantic correspondences distilled from self-supervised image features.
arXiv Detail & Related papers (2023-12-21T06:44:18Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
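The "shape Laplacian" this summary refers to can be illustrated as a graph Laplacian built over a point cloud; a constant deformation field then has zero Laplacian energy, which is why the operator works as a deformation regularizer. A minimal sketch, not the paper's actual learned formulation:

```python
import numpy as np

def knn_laplacian(points, k=3):
    """Combinatorial graph Laplacian L = D - W over a k-NN graph.
    (Illustrative stand-in for a learned/mesh-based shape Laplacian.)"""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]   # k nearest, skipping the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                 # symmetrize the adjacency
    return np.diag(W.sum(1)) - W           # L = D - W, rows sum to zero

pts = np.random.default_rng(1).standard_normal((10, 3))
L = knn_laplacian(pts)

# Smoothness energy of a deformation field u is trace(u^T L u) >= 0;
# a constant (rigid-translation) field costs nothing.
u = 0.05 * np.ones((10, 3))
energy = np.trace(u.T @ L @ u)
```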
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- Multi-Category Mesh Reconstruction From Image Collections [90.24365811344987]
We present an alternative approach that infers the textured mesh of objects by combining a series of deformable 3D models with a set of instance-specific deformations, poses, and textures.
Our method is trained with images of multiple object categories using only foreground masks and rough camera poses as supervision.
Experiments show that the proposed framework can distinguish between different object categories and learn category-specific shape priors in an unsupervised manner.
arXiv Detail & Related papers (2021-10-21T16:32:31Z)
- Learning Canonical 3D Object Representation for Fine-Grained Recognition [77.33501114409036]
We propose a novel framework for fine-grained object recognition that learns to recover object variation in 3D space from a single image.
We represent an object as a composition of 3D shape and its appearance, while eliminating the effect of camera viewpoint.
By incorporating 3D shape and appearance jointly in a deep representation, our method learns the discriminative representation of the object.
arXiv Detail & Related papers (2021-08-10T12:19:34Z)
- Do 2D GANs Know 3D Shape? Unsupervised 3D shape reconstruction from 2D Image GANs [156.1209884183522]
State-of-the-art 2D generative models like GANs show unprecedented quality in modeling the natural image manifold.
We present the first attempt to directly mine 3D geometric cues from an off-the-shelf 2D GAN that is trained on RGB images only.
arXiv Detail & Related papers (2020-11-02T09:38:43Z)
- Unsupervised object-centric video generation and decomposition in 3D [36.08064849807464]
We propose to model a video as the view seen while moving through a scene with multiple 3D objects and a 3D background.
Our model is trained from monocular videos without any supervision, yet learns to generate coherent 3D scenes containing several moving objects.
arXiv Detail & Related papers (2020-07-07T18:01:29Z)