DINER: Depth-aware Image-based NEural Radiance fields
- URL: http://arxiv.org/abs/2211.16630v2
- Date: Thu, 30 Mar 2023 22:14:20 GMT
- Title: DINER: Depth-aware Image-based NEural Radiance fields
- Authors: Malte Prinzler, Otmar Hilliges, Justus Thies
- Abstract summary: We present Depth-aware Image-based NEural Radiance fields (DINER).
Given a sparse set of RGB input views, we predict depth and feature maps to guide the reconstruction of a scene representation.
We propose novel techniques to incorporate depth information into feature fusion and efficient scene sampling.
- Score: 45.63488428831042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Depth-aware Image-based NEural Radiance fields (DINER). Given a
sparse set of RGB input views, we predict depth and feature maps to guide the
reconstruction of a volumetric scene representation that allows us to render 3D
objects under novel views. Specifically, we propose novel techniques to
incorporate depth information into feature fusion and efficient scene sampling.
In comparison to the previous state of the art, DINER achieves higher synthesis
quality and can process input views with greater disparity. This allows us to
capture scenes more completely without changing capturing hardware requirements
and ultimately enables larger viewpoint changes during novel view synthesis. We
evaluate our method by synthesizing novel views, both for human heads and for
general objects, and observe significantly improved qualitative results and
increased perceptual metrics compared to the previous state of the art. The
code is publicly available for research purposes.
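The abstract does not spell out how the predicted depth steers the sampling, so the following is only a minimal sketch of the general idea of depth-guided ray sampling, with a hypothetical function name, heuristic, and parameters that are not taken from DINER: instead of spacing samples uniformly between the near and far planes, most samples are concentrated around the depth predicted for the ray's pixel.

    # Hypothetical sketch of depth-guided ray sampling (not DINER's actual code).
    import numpy as np

    def depth_guided_samples(pred_depth, near, far,
                             n_coarse=16, n_fine=48, depth_std=0.05, rng=None):
        """Return sorted sample distances along one ray.

        pred_depth : depth predicted for this ray's pixel (assumed reliable).
        depth_std  : assumed standard deviation of the depth prediction error.
        """
        rng = np.random.default_rng() if rng is None else rng
        # A few coarse samples over the full ray keep coverage of free space.
        coarse = rng.uniform(near, far, size=n_coarse)
        # Most samples are placed near the predicted surface, where density matters.
        fine = rng.normal(loc=pred_depth, scale=depth_std, size=n_fine)
        t = np.clip(np.concatenate([coarse, fine]), near, far)
        return np.sort(t)

    # Example: a ray whose pixel has a predicted depth of 1.3 units.
    samples = depth_guided_samples(pred_depth=1.3, near=0.5, far=3.0)

Spending most of a fixed sample budget near the predicted surface is one way to make sparse-view volume rendering more efficient, which is consistent with the efficiency claim in the abstract.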
Related papers
- Efficient Depth-Guided Urban View Synthesis [52.841803876653465]
We introduce Efficient Depth-Guided Urban View Synthesis (EDUS) for fast feed-forward inference and efficient per-scene fine-tuning.
EDUS exploits noisy predicted geometric priors as guidance to enable generalizable urban view synthesis from sparse input images.
Our results indicate that EDUS achieves state-of-the-art performance in sparse view settings when combined with fast test-time optimization.
arXiv Detail & Related papers (2024-07-17T08:16:25Z)
- SceneRF: Self-Supervised Monocular 3D Scene Reconstruction with Radiance Fields [19.740018132105757]
SceneRF is a self-supervised monocular scene reconstruction method using only posed image sequences for training.
At inference, a single input image suffices to hallucinate novel depth views which are fused together to obtain 3D scene reconstruction.
arXiv Detail & Related papers (2022-12-05T18:59:57Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Remote Sensing Novel View Synthesis with Implicit Multiplane Representations [26.33490094119609]
We propose a novel remote sensing view synthesis method by leveraging the recent advances in implicit neural representations.
Considering the overhead and far depth imaging of remote sensing images, we represent the 3D space by combining implicit multiplane images (MPI) representation and deep neural networks.
Images from any novel views can be freely rendered on the basis of the reconstructed model.
arXiv Detail & Related papers (2022-05-18T13:03:55Z)
- Dense Depth Priors for Neural Radiance Fields from Sparse Input Views [37.92064060160628]
We propose a method to synthesize novel views of whole rooms from an order of magnitude fewer images.
Our method enables data-efficient novel view synthesis on challenging indoor scenes, using as few as 18 images for an entire scene.
arXiv Detail & Related papers (2021-12-06T19:00:02Z)
- NeMI: Unifying Neural Radiance Fields with Multiplane Images for Novel View Synthesis [69.19261797333635]
We propose an approach to perform novel view synthesis and depth estimation via dense 3D reconstruction from a single image.
Our NeMI unifies Neural Radiance Fields (NeRF) with Multiplane Images (MPI).
We also achieve competitive results in depth estimation on iBims-1 and NYU-v2 without annotated depth supervision.
arXiv Detail & Related papers (2021-03-27T13:41:00Z)
- Semantic View Synthesis [56.47999473206778]
We tackle a new problem of semantic view synthesis -- generating free-viewpoint rendering of a synthesized scene using a semantic label map as input.
First, we focus on synthesizing the color and depth of the visible surface of the 3D scene.
We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process.
arXiv Detail & Related papers (2020-08-24T17:59:46Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses (see the compositing sketch after this list).
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
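Several of the listed papers build on NeRF-style volume rendering, so a generic sketch of the standard differentiable compositing step is given below for context; it is not code from any of the papers above, and the function name and array shapes are assumptions.

    # Generic sketch of NeRF-style volume rendering along one ray
    # (standard quadrature; not code from any listed paper).
    import numpy as np

    def composite(rgb, sigma, t):
        """rgb: (N,3) sample colors, sigma: (N,) densities, t: (N,) sorted distances."""
        delta = np.diff(t, append=t[-1] + 1e10)       # spacing between samples
        alpha = 1.0 - np.exp(-sigma * delta)          # per-segment opacity
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1] + 1e-10)))  # transmittance
        weights = alpha * trans                       # contribution of each sample
        color = (weights[:, None] * rgb).sum(axis=0)  # expected ray color
        depth = (weights * t).sum()                   # expected termination depth
        return color, depth

    # Example with random stand-ins for a network's color and density outputs.
    t = np.linspace(0.5, 3.0, 64)
    color, depth = composite(np.random.rand(64, 3), np.random.rand(64) * 5.0, t)

Because every step above is differentiable, gradients from an image reconstruction loss flow back to whatever network produced the colors and densities, which is the property the NeRF summary refers to.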
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.