Self-supervised Neural Articulated Shape and Appearance Models
- URL: http://arxiv.org/abs/2205.08525v1
- Date: Tue, 17 May 2022 17:50:47 GMT
- Title: Self-supervised Neural Articulated Shape and Appearance Models
- Authors: Fangyin Wei, Rohan Chabra, Lingni Ma, Christoph Lassner, Michael
Zollhöfer, Szymon Rusinkiewicz, Chris Sweeney, Richard Newcombe, Mira
Slavcheva
- Abstract summary: We propose a novel approach for learning a representation of the geometry, appearance, and motion of a class of articulated objects.
Our representation learns shape, appearance, and articulation codes that enable independent control of these semantic dimensions.
- Score: 18.99030452836038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning geometry, motion, and appearance priors of object classes is
important for the solution of a large variety of computer vision problems.
While the majority of approaches have focused on static objects, dynamic
objects, especially with controllable articulation, are less explored. We
propose a novel approach for learning a representation of the geometry,
appearance, and motion of a class of articulated objects given only a set of
color images as input. In a self-supervised manner, our novel representation
learns shape, appearance, and articulation codes that enable independent
control of these semantic dimensions. Our model is trained end-to-end without
requiring any articulation annotations. Experiments show that our approach
performs well for different joint types, such as revolute and prismatic joints,
as well as different combinations of these joints. Compared to the state of the
art, which uses direct 3D supervision and does not output appearance, we recover
more faithful geometry and appearance from 2D observations alone. In addition, our
representation enables a large variety of applications, such as few-shot
reconstruction, the generation of novel articulations, and novel-view synthesis.
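As a rough illustration (a hypothetical sketch, not the authors' implementation), such a representation can be thought of as a conditional implicit field that maps a 3D point plus three independent latent codes (shape, appearance, articulation) to a density and a color, so that each semantic dimension can be varied on its own. All names and sizes below are assumptions for the sketch:

```python
import numpy as np

# Hypothetical sketch: a conditional implicit field
#   f(x, z_shape, z_app, z_artic) -> (density, rgb)
# with independent latent codes for each semantic dimension.

rng = np.random.default_rng(0)

D_CODE = 8   # assumed latent-code size
D_HID = 16   # assumed hidden width

W1 = rng.standard_normal((3 + 3 * D_CODE, D_HID)) * 0.1
W2 = rng.standard_normal((D_HID, 4)) * 0.1  # outputs: 1 density + 3 rgb

def implicit_field(x, z_shape, z_app, z_artic):
    """Evaluate the field at 3D point x under the three latent codes."""
    h = np.tanh(np.concatenate([x, z_shape, z_app, z_artic]) @ W1)
    out = h @ W2
    density = np.logaddexp(0.0, out[0])    # softplus, so density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))   # sigmoid, so colors in [0, 1]
    return density, rgb

x = np.array([0.1, 0.2, 0.3])
z_shape = rng.standard_normal(D_CODE)
z_app = rng.standard_normal(D_CODE)
z_artic = rng.standard_normal(D_CODE)
density, rgb = implicit_field(x, z_shape, z_app, z_artic)
```

Holding two codes fixed while varying the third is what enables independent control of shape, appearance, or articulation in such a model.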
Related papers
- REACTO: Reconstructing Articulated Objects from a Single Video [64.89760223391573]
We propose a novel deformation model that enhances the rigidity of each part while maintaining flexible deformation of the joints.
Our method outperforms previous works in producing higher-fidelity 3D reconstructions of general articulated objects.
arXiv Detail & Related papers (2024-04-17T08:01:55Z)
- Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations [8.130629735939895]
We introduce a novel framework that disentangles the part motion of articulated objects by predicting the transformation matrix of points on the part surface.
Our framework generalizes to different kinds of joint motions, since a transformation matrix can model diverse joint motions in space.
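To see why a single matrix-valued output covers multiple joint types, note that a 4x4 homogeneous transform can encode both a revolute joint (rotation about an axis, via Rodrigues' formula) and a prismatic joint (translation along an axis). The helpers below are an illustrative sketch, not code from the paper:

```python
import numpy as np

# Hypothetical illustration: one 4x4 transform type covers both
# revolute and prismatic joint motions.

def revolute(axis, angle):
    """Rotation about a unit axis through the origin (Rodrigues' formula)."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    return T

def prismatic(axis, displacement):
    """Translation by `displacement` along a unit axis."""
    axis = axis / np.linalg.norm(axis)
    T = np.eye(4)
    T[:3, 3] = displacement * axis
    return T

p = np.array([1.0, 0.0, 0.0, 1.0])                  # homogeneous point
p_rot = revolute(np.array([0, 0, 1.0]), np.pi / 2) @ p   # rotate 90° about z
p_slide = prismatic(np.array([0, 0, 1.0]), 0.5) @ p      # slide 0.5 along z
```

Predicting such a transform per surface point is what lets one output format describe revolute, prismatic, and combined joint motions.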
arXiv Detail & Related papers (2023-11-21T07:54:40Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework Generalizable Model-based Neural Radiance Fields to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work uses a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show that our method is more robust than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- AutoRF: Learning 3D Object Radiance Fields from Single View Observations [17.289819674602295]
AutoRF is a new approach for learning neural 3D object representations where each object in the training set is observed by only a single view.
We show that our method generalizes well to unseen objects, even across different datasets of challenging real-world street scenes.
arXiv Detail & Related papers (2022-04-07T17:13:39Z)
- Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects [73.23249640099516]
We learn both the appearance and the structure of previously unseen articulated objects by observing them move from multiple views.
Our insight is that adjacent parts that move relative to each other must be connected by a joint.
We show that our method works for different structures, from quadrupeds, to single-arm robots, to humans.
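The stated insight can be sketched as a simple test (hypothetical code, not the paper's implementation): if the relative transform between two adjacent parts is constant over time, they move as one rigid body; if it changes, a joint must connect them:

```python
import numpy as np

# Hypothetical sketch: detect a joint between two parts from their
# per-frame 4x4 poses by checking whether their relative pose changes.

def relative_motion(T_a0, T_b0, T_a1, T_b1):
    """How much part b's pose, expressed in part a's frame, changed."""
    rel0 = np.linalg.inv(T_a0) @ T_b0
    rel1 = np.linalg.inv(T_a1) @ T_b1
    return np.linalg.norm(rel1 - rel0)

def has_joint(T_a0, T_b0, T_a1, T_b1, tol=1e-6):
    """Parts that move relative to each other must share a joint."""
    return relative_motion(T_a0, T_b0, T_a1, T_b1) > tol

def rot_z(theta):
    """Rotation about the z axis as a 4x4 homogeneous transform."""
    T = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

I = np.eye(4)
jointed = has_joint(I, I, I, rot_z(0.3))            # b rotated relative to a
rigid = has_joint(I, I, rot_z(0.3), rot_z(0.3))     # both moved together
```

Here `jointed` is True (the relative pose changed) while `rigid` is False (the two parts moved as a single rigid body).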
arXiv Detail & Related papers (2021-12-21T16:37:48Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Hierarchical Relational Inference [80.00374471991246]
We propose a novel approach to physical reasoning that models objects as hierarchies of parts that may locally behave separately, but also act more globally as a single whole.
Unlike prior approaches, our method learns in an unsupervised fashion directly from raw visual images.
It explicitly distinguishes multiple levels of abstraction and improves over a strong baseline at modeling synthetic and real-world videos.
arXiv Detail & Related papers (2020-10-07T20:19:10Z)
- Bowtie Networks: Generative Modeling for Joint Few-Shot Recognition and Novel-View Synthesis [39.53519330457627]
We propose a novel task of joint few-shot recognition and novel-view synthesis.
We aim to simultaneously learn an object classifier and generate images of that type of object from new viewpoints.
We focus on the interaction and cooperation between a generative model and a discriminative model.
arXiv Detail & Related papers (2020-08-16T19:40:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.