Outdoor inverse rendering from a single image using multiview self-supervision
- URL: http://arxiv.org/abs/2102.06591v1
- Date: Fri, 12 Feb 2021 16:01:18 GMT
- Title: Outdoor inverse rendering from a single image using multiview self-supervision
- Authors: Ye Yu and William A. P. Smith
- Abstract summary: We show how to perform scene-level inverse rendering to recover shape, reflectance and lighting from a single, uncontrolled image.
The network takes an RGB image as input and regresses albedo, shadow and normal maps, from which we infer least-squares optimal spherical harmonic lighting coefficients.
We believe this is the first attempt to use MVS supervision for learning inverse rendering.
- Score: 36.065349509851245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we show how to perform scene-level inverse rendering to recover
shape, reflectance and lighting from a single, uncontrolled image using a fully
convolutional neural network. The network takes an RGB image as input,
regresses albedo, shadow and normal maps from which we infer least squares
optimal spherical harmonic lighting coefficients. Our network is trained using
large uncontrolled multiview and timelapse image collections without ground
truth. By incorporating a differentiable renderer, our network can learn from
self-supervision. Since the problem is ill-posed we introduce additional
supervision. Our key insight is to perform offline multiview stereo (MVS) on
images containing rich illumination variation. From the MVS pose and depth
maps, we can cross project between overlapping views such that Siamese training
can be used to ensure consistent estimation of photometric invariants. MVS
depth also provides direct coarse supervision for normal map estimation. We
believe this is the first attempt to use MVS supervision for learning inverse
rendering. In addition, we learn a statistical natural illumination prior. We
evaluate performance on inverse rendering, normal map estimation and intrinsic
image decomposition benchmarks.
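The least-squares lighting step described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed conventions, not the paper's implementation: it uses an unnormalised second-order SH basis, per-pixel grayscale shading, and hypothetical function names.

```python
import numpy as np

def sh_basis(normals):
    """Second-order spherical harmonic basis evaluated at unit normals.

    normals: (N, 3) array of unit surface normals.
    Returns (N, 9); the usual SH normalisation constants are omitted here,
    which only rescales the recovered coefficients.
    """
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.ones_like(x),          # l = 0
        x, y, z,                  # l = 1
        x * y, x * z, y * z,      # l = 2
        x**2 - y**2,
        3 * z**2 - 1,
    ], axis=1)

def solve_sh_lighting(image, albedo, shadow, normals):
    """Least-squares optimal SH lighting given the other intrinsic factors.

    Divides albedo and shadow out of the image to obtain observed shading,
    then solves shading ~= B @ coeffs for the 9 lighting coefficients.
    """
    shading = image / np.clip(albedo * shadow, 1e-6, None)
    B = sh_basis(normals)                              # (N, 9)
    coeffs, *_ = np.linalg.lstsq(B, shading, rcond=None)
    return coeffs                                      # (9,)
```

Because the lighting enters the image formation model linearly through the SH basis, this inner solve has a closed form, which is what lets the network regress only albedo, shadow and normals and still be trained end-to-end against a reconstruction loss.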
Related papers
- PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo [22.42916940712357]
We present a neural inverse rendering method for MVPS based on implicit representation.
Our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods.
arXiv Detail & Related papers (2022-07-23T03:55:18Z)
- RIAV-MVS: Recurrent-Indexing an Asymmetric Volume for Multi-View Stereo [20.470182157606818]
"Learning-to-optimize" paradigm iteratively indexes a plane-sweeping cost volume and regresses the depth map via a convolutional Gated Recurrent Unit (GRU)
We conduct extensive experiments on real-world MVS datasets and show that our method achieves state-of-the-art performance in terms of both within-dataset evaluation and cross-dataset generalization.
arXiv Detail & Related papers (2022-05-28T03:32:56Z)
- Free-viewpoint Indoor Neural Relighting from Multi-view Stereo [5.306819482496464]
We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation.
Our method allows illumination to be changed synthetically, while coherently rendering cast shadows and complex glossy materials.
arXiv Detail & Related papers (2021-06-24T20:09:40Z)
- Stereo Matching by Self-supervision of Multiscopic Vision [65.38359887232025]
We propose a new self-supervised framework for stereo matching utilizing multiple images captured at aligned camera positions.
A cross photometric loss, an uncertainty-aware mutual-supervision loss, and a new smoothness loss are introduced to optimize the network.
Our model obtains better disparity maps than previous unsupervised methods on the KITTI dataset.
arXiv Detail & Related papers (2021-04-09T02:58:59Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for image relighting of single view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
- Two-Stage Single Image Reflection Removal with Reflection-Aware Guidance [78.34235841168031]
We present a novel two-stage network with reflection-aware guidance (RAGNet) for single image reflection removal (SIRR).
RAG can be used (i) to mitigate the effect of reflection from the observation, and (ii) to generate a mask for partial convolution that mitigates the effect of deviating from the linear combination hypothesis.
Experiments on five commonly used datasets demonstrate the quantitative and qualitative superiority of our RAGNet in comparison to the state-of-the-art SIRR methods.
arXiv Detail & Related papers (2020-12-02T03:14:57Z)
- A Lightweight Neural Network for Monocular View Generation with Occlusion Handling [46.74874316127603]
We present a very lightweight neural network architecture, trained on stereo data pairs, which performs view synthesis from one single image.
The work outperforms visually and metric-wise state-of-the-art approaches on the challenging KITTI dataset.
arXiv Detail & Related papers (2020-07-24T15:29:01Z)
- Single-View View Synthesis with Multiplane Images [64.46556656209769]
We apply deep learning to generate multiplane images given two or more input images at known viewpoints.
Our method learns to predict a multiplane image directly from a single image input.
It additionally generates reasonable depth maps and fills in content behind the edges of foreground objects in background layers.
arXiv Detail & Related papers (2020-04-23T17:59:19Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
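The depth-based view alignment used above, like the cross-projection supervision in the main paper, rests on the same geometric primitive: back-project pixels of one view with its depth map, then re-project the 3D points into an overlapping view. A minimal NumPy sketch, with illustrative variable names and a pinhole-camera convention assumed here (occlusion handling omitted):

```python
import numpy as np

def cross_project(depth, K, R, t):
    """Map pixels of view A into view B via A's depth map.

    depth: (H, W) depth of view A; K: (3, 3) shared intrinsics;
    (R, t): rotation and translation taking A's camera frame to B's.
    Returns (H, W, 2) pixel coordinates in view B for every pixel of A.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, N)
    pts = np.linalg.inv(K) @ pix * depth.reshape(-1)   # back-project to 3D in A
    proj = K @ (R @ pts + t[:, None])                  # transform and project into B
    return (proj[:2] / proj[2]).T.reshape(h, w, 2)
```

Sampling photometric invariants such as albedo at the cross-projected coordinates in both views is what makes a Siamese consistency loss possible: the same scene point should receive the same albedo estimate regardless of which image it was predicted from.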
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.