SymmNeRF: Learning to Explore Symmetry Prior for Single-View View Synthesis
- URL: http://arxiv.org/abs/2209.14819v1
- Date: Thu, 29 Sep 2022 14:35:07 GMT
- Title: SymmNeRF: Learning to Explore Symmetry Prior for Single-View View Synthesis
- Authors: Xingyi Li, Chaoyi Hong, Yiran Wang, Zhiguo Cao, Ke Xian, Guosheng Lin
- Abstract summary: We study the problem of novel view synthesis of objects from a single image.
Existing methods have shown promise in single-view view synthesis.
We propose SymmNeRF, a neural radiance field (NeRF) based framework.
- Score: 66.38443539420138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of novel view synthesis of objects from a single image.
Existing methods have shown promise in single-view view synthesis.
However, they still fail to recover the fine appearance details, especially in
self-occluded areas. This is because a single view only provides limited
information. We observe that man-made objects usually exhibit symmetric
appearances, which provide additional prior knowledge. Motivated by this, we
investigate the potential performance gains of explicitly embedding symmetry
into the scene representation. In this paper, we propose SymmNeRF, a neural
radiance field (NeRF) based framework that combines local and global
conditioning under the introduction of symmetry priors. In particular, SymmNeRF
takes the pixel-aligned image features and the corresponding symmetric features
as extra inputs to the NeRF, whose parameters are generated by a hypernetwork.
As the parameters are conditioned on the image-encoded latent codes, SymmNeRF
is thus scene-independent and can generalize to new scenes. Experiments on
synthetic and real-world datasets show that SymmNeRF synthesizes novel views
with more details regardless of the pose transformation, and demonstrates good
generalization when applied to unseen objects. Code is available at:
https://github.com/xingyi-li/SymmNeRF.
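To make the mechanism concrete, here is a minimal PyTorch sketch of the idea described in the abstract: a query point is reflected across an assumed symmetry plane (x = 0 in the canonical object frame), both points are projected into the input view to sample pixel-aligned features, and the concatenated features form the extra NeRF input. The helper names, the fixed symmetry plane, and the plain feature concatenation are illustrative assumptions; the paper generates the NeRF's weights with a hypernetwork rather than feeding a fixed MLP.

```python
import torch
import torch.nn.functional as F

def reflect_across_plane(points, normal):
    # Reflect points across a plane through the origin with the given normal.
    # SymmNeRF assumes object-level reflective symmetry; x = 0 is assumed here.
    normal = F.normalize(normal, dim=-1)
    return points - 2.0 * (points * normal).sum(-1, keepdim=True) * normal

def pixel_aligned_features(points, feat_map, K, cam2world):
    # Project 3D world points into the input view and bilinearly sample
    # features from the image encoder's feature map (1, C, H, W).
    world2cam = torch.inverse(cam2world)
    pts_h = torch.cat([points, torch.ones_like(points[..., :1])], dim=-1)
    pts_cam = (pts_h @ world2cam.T)[..., :3]
    uv = pts_cam @ K.T
    uv = uv[..., :2] / uv[..., 2:3].clamp(min=1e-6)
    H, W = feat_map.shape[-2:]
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    feats = F.grid_sample(feat_map, grid.view(1, -1, 1, 2), align_corners=True)
    return feats.view(feat_map.shape[1], -1).T  # (N, C)

# Each query point contributes its own feature and its mirror point's feature.
points = torch.rand(1024, 3) * 2 - 1
points[:, 2] = points[:, 2].abs() + 1.0          # keep points in front of the camera
normal = torch.tensor([1.0, 0.0, 0.0])           # assumed symmetry plane x = 0
mirrored = reflect_across_plane(points, normal)
feat_map = torch.randn(1, 64, 128, 128)          # stand-in for encoder output
K = torch.tensor([[128.0, 0.0, 64.0], [0.0, 128.0, 64.0], [0.0, 0.0, 1.0]])
cam2world = torch.eye(4)
f = pixel_aligned_features(points, feat_map, K, cam2world)
f_sym = pixel_aligned_features(mirrored, feat_map, K, cam2world)
nerf_input = torch.cat([points, f, f_sym], dim=-1)  # (N, 3 + 2C) extra NeRF input
```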
Related papers
- TrackNeRF: Bundle Adjusting NeRF from Sparse and Noisy Views via Feature Tracks [57.73997367306271]
TrackNeRF sets a new benchmark in noisy and sparse view reconstruction.
TrackNeRF shows significant improvements over the state-of-the-art methods BARF and SPARF.
arXiv Detail & Related papers (2024-08-20T11:14:23Z)
- CombiNeRF: A Combination of Regularization Techniques for Few-Shot Neural Radiance Field View Synthesis [1.374796982212312]
Neural Radiance Fields (NeRFs) have shown impressive results for novel view synthesis when a sufficiently large number of views is available.
We propose CombiNeRF, a framework that synergistically combines several regularization techniques, some of them novel, in order to unify the benefits of each.
We show that CombiNeRF outperforms state-of-the-art methods in few-shot settings on several publicly available datasets.
arXiv Detail & Related papers (2024-03-21T13:59:00Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
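For reference, the volume rendering step mentioned above is the standard NeRF quadrature: per-sample densities and colors along a ray are alpha-composited into a pixel color. A minimal generic sketch follows; it is not the paper's transformer-conditioned model.

```python
import torch

def volume_render(rgb, sigma, z_vals):
    # rgb: (R, S, 3), sigma: (R, S), z_vals: (R, S) sample depths per ray.
    deltas = z_vals[:, 1:] - z_vals[:, :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[:, :1])], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * deltas)             # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)   # transmittance to each sample
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alpha * trans                              # compositing weights (R, S)
    color = (weights[..., None] * rgb).sum(dim=1)        # (R, 3) rendered pixels
    return color, weights

rgb = torch.rand(4, 64, 3)                               # MLP color output per sample
sigma = torch.rand(4, 64)                                # MLP density output per sample
z_vals = torch.linspace(2.0, 6.0, 64).expand(4, 64)
color, weights = volume_render(rgb, sigma, z_vals)
```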
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image [85.43496313628943]
We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
arXiv Detail & Related papers (2022-04-02T19:32:42Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the potential reconstruction inconsistency that arises from insufficient viewpoints.
It consistently outperforms existing neural view synthesis methods by large margins on multiple standard benchmarks.
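One plausible reading of the regularizer, sketched below: treat the compositing weights along each ray as a probability distribution over depth and minimize its entropy, so that density concentrates at few samples. The paper's exact formulation (ray masking and an additional smoothness term) may differ.

```python
import torch

def ray_entropy_loss(weights, eps=1e-10):
    # weights: (R, S) volume rendering weights along each ray.
    # Normalizing them per ray gives a distribution over depth; low entropy
    # means the density is concentrated, which the regularizer encourages.
    p = weights / (weights.sum(dim=-1, keepdim=True) + eps)
    entropy = -(p * torch.log(p + eps)).sum(dim=-1)  # (R,)
    return entropy.mean()

weights = torch.rand(8, 64)   # e.g. the weights returned by volume rendering
loss = ray_entropy_loss(weights)
```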
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
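The conditioning pattern can be sketched as a NeRF-style MLP that takes per-point image features alongside the query point and, when several input views are available, pools per-view activations. The layer sizes and mean pooling below are illustrative, not pixelNeRF's actual architecture.

```python
import torch
import torch.nn as nn

class ConditionedNeRF(nn.Module):
    # Toy MLP conditioned on per-point image features, one feature slice per
    # input view; per-view activations are average-pooled before the head.
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 4)  # RGB + density

    def forward(self, points, view_feats):
        # points: (N, 3); view_feats: (V, N, feat_dim) from V input views.
        x = torch.cat(
            [points.unsqueeze(0).expand(view_feats.shape[0], -1, -1), view_feats],
            dim=-1,
        )
        h = self.trunk(x).mean(dim=0)  # pool across views
        return self.head(h)

model = ConditionedNeRF()
out = model(torch.rand(1024, 3), torch.randn(2, 1024, 64))  # two input views
```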
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
- Free View Synthesis [100.86844680362196]
We present a method for novel view synthesis from input images that are freely distributed around a scene.
Our method does not rely on a regular arrangement of input views, can synthesize images for free camera movement through the scene, and works for general scenes with unconstrained geometric layouts.
arXiv Detail & Related papers (2020-08-12T18:16:08Z)