AutoRF: Learning 3D Object Radiance Fields from Single View Observations
- URL: http://arxiv.org/abs/2204.03593v1
- Date: Thu, 7 Apr 2022 17:13:39 GMT
- Title: AutoRF: Learning 3D Object Radiance Fields from Single View Observations
- Authors: Norman Müller, Andrea Simonelli, Lorenzo Porzi, Samuel Rota Bulò, Matthias Nießner, Peter Kontschieder
- Abstract summary: AutoRF is a new approach for learning neural 3D object representations where each object in the training set is observed by only a single view.
We show that our method generalizes well to unseen objects, even across different datasets of challenging real-world street scenes.
- Score: 17.289819674602295
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce AutoRF - a new approach for learning neural 3D object
representations where each object in the training set is observed by only a
single view. This setting is in stark contrast to the majority of existing
works that leverage multiple views of the same object, employ explicit priors
during training, or require pixel-perfect annotations. To address this
challenging setting, we propose to learn a normalized, object-centric
representation whose embedding describes and disentangles shape, appearance,
and pose. Each encoding provides well-generalizable, compact information about
the object of interest, which is decoded in a single shot into a new target
view, thus enabling novel view synthesis. We further improve the reconstruction
quality by optimizing shape and appearance codes at test time by fitting the
representation tightly to the input image. In a series of experiments, we show
that our method generalizes well to unseen objects, even across different
datasets of challenging real-world street scenes such as nuScenes, KITTI, and
Mapillary Metropolis.
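The pipeline described in the abstract lends itself to a compact sketch: an encoder maps an object crop to separate shape and appearance codes, a NeRF-style MLP decoder conditioned on those codes predicts density and color at 3D points in the normalized object-centric frame, and the codes are refined at test time against the input image. The code below is an illustrative assumption, not the authors' implementation; the module names, layer sizes, and the `render_fn` hook standing in for volume rendering along camera rays are all hypothetical.

```python
# Minimal sketch (not the authors' code) of the AutoRF-style pipeline described
# in the abstract: encode an object crop into shape/appearance codes, decode them
# with a conditioned radiance-field MLP, and refine the codes at test time.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ObjectEncoder(nn.Module):
    """Maps an object image crop to disentangled shape and appearance codes."""

    def __init__(self, code_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_shape = nn.Linear(64, code_dim)       # shape code head
        self.to_appearance = nn.Linear(64, code_dim)  # appearance code head

    def forward(self, img):
        feat = self.backbone(img)
        return self.to_shape(feat), self.to_appearance(feat)


class ConditionedRadianceField(nn.Module):
    """MLP predicting density and color at points in the normalized
    object-centric frame, conditioned on shape/appearance codes."""

    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.density_mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        self.color_mlp = nn.Sequential(
            nn.Linear(hidden + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, pts, shape_code, appearance_code):
        n = pts.shape[0]
        h = self.density_mlp(torch.cat([pts, shape_code.expand(n, -1)], dim=-1))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.color_mlp(torch.cat([h, appearance_code.expand(n, -1)], dim=-1))
        return sigma, rgb


def refine_codes(decoder, render_fn, shape_code, appearance_code,
                 target_rgb, steps=100, lr=1e-2):
    """Test-time optimization: keep the decoder frozen and fit only the codes
    to the observed pixels. `render_fn` is a placeholder for volume rendering
    the conditioned field into the input camera."""
    shape_code = shape_code.detach().clone().requires_grad_(True)
    appearance_code = appearance_code.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([shape_code, appearance_code], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred_rgb = render_fn(decoder, shape_code, appearance_code)
        loss = F.mse_loss(pred_rgb, target_rgb)  # photometric fit to the input view
        loss.backward()
        opt.step()
    return shape_code.detach(), appearance_code.detach()
```

In this sketch the decoder stays frozen during refinement and only the compact codes are updated, mirroring the abstract's description of improving reconstruction by fitting the representation tightly to the single input image.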
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- UpFusion: Novel View Diffusion from Unposed Sparse View Observations [66.36092764694502]
UpFusion can perform novel view synthesis and infer 3D representations for an object given a sparse set of reference images.
We show that this mechanism allows generating high-fidelity novel views while improving the synthesis quality given additional (unposed) images.
arXiv Detail & Related papers (2023-12-11T18:59:55Z)
- Variational Inference for Scalable 3D Object-centric Learning [19.445804699433353]
We tackle the task of scalable unsupervised object-centric representation learning on 3D scenes.
Existing approaches to object-centric representation learning show limitations in generalizing to larger scenes.
We propose to learn view-invariant 3D object representations in localized object coordinate systems.
arXiv Detail & Related papers (2023-09-25T10:23:40Z)
- MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare [84.80956484848505]
MegaPose is a method to estimate the 6D pose of novel objects, that is, objects unseen during training.
First, we present a 6D pose refiner based on a render-and-compare strategy that can be applied to novel objects.
Second, we introduce a coarse pose estimation approach that leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
arXiv Detail & Related papers (2022-12-13T19:30:03Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- Object Scene Representation Transformer [56.40544849442227]
We introduce Object Scene Representation Transformer (OSRT), a 3D-centric model in which individual object representations naturally emerge through novel view synthesis.
OSRT scales to significantly more complex scenes with larger diversity of objects and backgrounds than existing methods.
It is multiple orders of magnitude faster at compositional rendering thanks to its light field parametrization and the novel Slot Mixer decoder.
arXiv Detail & Related papers (2022-06-14T15:40:47Z)
- LOLNeRF: Learn from One Look [22.771493686755544]
We present a method for learning a generative 3D model based on neural radiance fields.
We show that, unlike existing methods, one does not need multi-view data to achieve this goal.
arXiv Detail & Related papers (2021-11-19T01:20:01Z)
- Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views [9.556376932449187]
Multi-View and Multi-Object Network (MulMON) is a method for learning accurate, object-centric representations of multi-object scenes by leveraging multiple views.
We show that MulMON resolves spatial ambiguities better than single-view methods.
arXiv Detail & Related papers (2021-11-13T13:54:28Z)
- Weakly Supervised Learning of Multi-Object 3D Scene Decompositions Using Deep Shape Priors [69.02332607843569]
PriSMONet is a novel approach for learning Multi-Object 3D scene decomposition and representations from single images.
A recurrent encoder regresses a latent representation of 3D shape, pose and texture of each object from an input RGB image.
We evaluate the accuracy of our model in inferring 3D scene layout, demonstrate its generative capabilities, assess its generalization to real images, and point out benefits of the learned representation.
arXiv Detail & Related papers (2020-10-08T14:49:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.