Neural Projection Mapping Using Reflectance Fields
- URL: http://arxiv.org/abs/2306.06595v1
- Date: Sun, 11 Jun 2023 05:33:10 GMT
- Title: Neural Projection Mapping Using Reflectance Fields
- Authors: Yotam Erel, Daisuke Iwai and Amit H. Bermano
- Abstract summary: We introduce a projector into a neural reflectance field, enabling both projector calibration and photorealistic light editing.
Our neural field consists of three neural networks, estimating geometry, material, and transmittance.
We believe that neural projection mapping opens up the door to novel and exciting downstream tasks, through the joint optimization of the scene and projection images.
- Score: 11.74757574153076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a high-resolution, spatially adaptive light source, or a
projector, into a neural reflectance field, enabling both projector calibration
and photorealistic light editing. The projected texture is fully
differentiable with respect to all scene parameters, and can be optimized to
yield a desired appearance suitable for applications in augmented reality and
projection mapping. Our neural field consists of three neural networks,
estimating geometry, material, and transmittance. Using an analytical BRDF
model and carefully selected projection patterns, our acquisition process is
simple and intuitive, featuring a fixed uncalibrated projector and a handheld
camera with a co-located light source. As we demonstrate, the virtual projector
incorporated into the pipeline improves scene understanding and enables various
projection mapping applications, alleviating the need for time-consuming
calibration steps performed in a traditional setting per view or projector
location. In addition to enabling novel viewpoint synthesis, we demonstrate
state-of-the-art projector compensation performance for novel viewpoints,
improvement over the baselines in material and scene reconstruction, and three
simply implemented scenarios where projection image optimization is performed,
including the use of a 2D generative model to consistently dictate scene
appearance from multiple viewpoints. We believe that neural projection mapping
opens up the door to novel and exciting downstream tasks, through the joint
optimization of the scene and projection images.
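The abstract describes the field as three networks (geometry, material, transmittance) shaded by a projector acting as a high-resolution, spatially adaptive light source, with the projected texture kept differentiable with respect to all scene parameters. The following is only a minimal sketch of that structure; the module layout, layer sizes, and the Lambertian-only shading term are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a neural reflectance field split into
# geometry, material, and transmittance MLPs, shaded by a differentiable
# projector texture. Names, sizes, and the Lambertian-only shading are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=128, depth=4):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class NeuralReflectanceField(nn.Module):
    def __init__(self):
        super().__init__()
        self.geometry = mlp(3, 1 + 3)    # density + surface normal per 3D point
        self.material = mlp(3, 3)        # simplified BRDF parameter (albedo only here)
        self.transmittance = mlp(6, 1)   # visibility toward the projector direction

    def forward(self, points, proj_dirs, proj_texture_rgb):
        """points: (N,3), proj_dirs: (N,3) unit vectors toward the projector,
        proj_texture_rgb: (N,3) projected color sampled at each point (differentiable)."""
        g = self.geometry(points)
        density = torch.relu(g[:, :1])
        normals = F.normalize(g[:, 1:], dim=-1)
        albedo = torch.sigmoid(self.material(points))
        vis = torch.sigmoid(self.transmittance(torch.cat([points, proj_dirs], dim=-1)))
        # Lambertian shading by the projector only, as a stand-in for the full BRDF model.
        cos = torch.clamp((normals * proj_dirs).sum(-1, keepdim=True), min=0.0)
        radiance = albedo * proj_texture_rgb * vis * cos
        return density, radiance
```

Because the projection image enters the shading only through differentiable operations, gradients can flow back to it, which is the property that lets the projection be optimized toward a desired appearance.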
Related papers
- Incorporating dense metric depth into neural 3D representations for view synthesis and relighting [25.028859317188395]
In robotic applications, dense metric depth can often be measured directly using stereo, and illumination can be controlled.
In this work we demonstrate a method to incorporate dense metric depth into the training of neural 3D representations.
We also discuss a multi-flash stereo camera system developed to capture the necessary data for our pipeline and show results on relighting and view synthesis.
arXiv Detail & Related papers (2024-09-04T20:21:13Z)
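For the metric-depth entry above, a common way to use directly measured depth is as an extra supervision term on the depth rendered from the neural representation. The sketch below shows that generic pattern under assumed names (per-ray sample weights, a validity mask, and a loss weight); it is not the paper's exact formulation.

```python
# Generic sketch of dense metric depth supervision for a neural 3D representation.
# `weights` are per-sample volume-rendering weights along each ray; the loss weight
# and masking convention are assumptions, not taken from the paper above.
import torch

def depth_loss(weights, sample_depths, measured_depth, valid_mask, lam=0.1):
    """weights: (R,S), sample_depths: (R,S), measured_depth: (R,), valid_mask: (R,) bool."""
    rendered_depth = (weights * sample_depths).sum(dim=-1)   # expected ray termination depth
    err = (rendered_depth - measured_depth) ** 2
    return lam * err[valid_mask].mean()
```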
- Bilateral Guided Radiance Field Processing [4.816861458037213]
Neural Radiance Fields (NeRF) achieves unprecedented performance in novel view synthesis.
Image signal processing (ISP) in modern cameras independently enhances each captured image, leading to "floaters" in the reconstructed radiance fields.
We propose to disentangle the enhancement by ISP at the NeRF training stage and re-apply user-desired enhancements to the reconstructed radiance fields.
We demonstrate our approach can boost the visual quality of novel view synthesis by effectively removing floaters and performing enhancements from user retouching.
arXiv Detail & Related papers (2024-06-01T14:10:45Z)
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
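The GS-Phong summary above decomposes the scene into ambient, diffuse, and specular components in the spirit of the Blinn-Phong model. The sketch below spells out the textbook Blinn-Phong shading those three terms come from; it is not the per-Gaussian formulation of the paper, and the shininess default is an arbitrary choice.

```python
# Textbook Blinn-Phong shading: ambient + diffuse + specular.
# This only illustrates the decomposition mentioned above; it is not GS-Phong's
# per-Gaussian formulation.
import torch
import torch.nn.functional as F

def blinn_phong(normal, light_dir, view_dir, ambient, diffuse, specular, shininess=32.0):
    """Direction arguments are (N,3) unit vectors; color arguments are (N,3)."""
    n = F.normalize(normal, dim=-1)
    l = F.normalize(light_dir, dim=-1)
    v = F.normalize(view_dir, dim=-1)
    h = F.normalize(l + v, dim=-1)                             # half vector
    diff = torch.clamp((n * l).sum(-1, keepdim=True), min=0.0)
    spec = torch.clamp((n * h).sum(-1, keepdim=True), min=0.0) ** shininess
    return ambient + diffuse * diff + specular * spec
```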
- NeLF-Pro: Neural Light Field Probes for Multi-Scale Novel View Synthesis [27.362216326282145]
NeLF-Pro is a novel representation to model and reconstruct light fields in diverse natural scenes.
Our central idea is to bake the scene's light field into spatially varying learnable representations.
arXiv Detail & Related papers (2023-12-20T17:18:44Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Shape and Reflectance Reconstruction in Uncontrolled Environments by Differentiable Rendering [27.41344744849205]
We propose an efficient method to reconstruct the scene's 3D geometry and reflectance from multi-view photography using conventional hand-held cameras.
Our method also shows superior performance compared to state-of-the-art alternatives in novel view synthesis, both visually and quantitatively.
arXiv Detail & Related papers (2021-10-25T14:09:10Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
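The last entry above pairs a neural reflectance field with physically-based differentiable ray marching. As a reminder of what such a renderer accumulates along a ray, here is a standard volume-rendering sketch; the shading term is left abstract and all variable names are assumptions rather than the paper's interface.

```python
# Standard differentiable ray-marching accumulation along a batch of rays.
# `density` and `shaded_rgb` would come from a neural reflectance field evaluated
# at the sample points under a chosen viewpoint and light; names are assumptions.
import torch

def ray_march(density, shaded_rgb, deltas):
    """density: (R,S), shaded_rgb: (R,S,3), deltas: (R,S) distances between samples."""
    alpha = 1.0 - torch.exp(-density * deltas)                   # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)  # shift to get T_i
    weights = alpha * trans                                      # contribution of each sample
    return (weights.unsqueeze(-1) * shaded_rgb).sum(dim=-2)      # (R,3) pixel colors
```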