PX-NET: Simple and Efficient Pixel-Wise Training of Photometric Stereo
Networks
- URL: http://arxiv.org/abs/2008.04933v3
- Date: Tue, 12 Oct 2021 12:55:11 GMT
- Title: PX-NET: Simple and Efficient Pixel-Wise Training of Photometric Stereo
Networks
- Authors: Fotios Logothetis, Ignas Budvytis, Roberto Mecca, Roberto Cipolla
- Abstract summary: Retrieving accurate 3D reconstructions of objects from the way they reflect light is a very challenging task in computer vision.
We propose a novel pixel-wise training procedure for normal prediction by replacing the training data (observation maps) of globally rendered images with independent per-pixel generated data.
Our network, PX-NET, achieves state-of-the-art performance among pixel-wise methods on synthetic datasets.
- Score: 26.958763133729846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retrieving accurate 3D reconstructions of objects from the way they reflect
light is a very challenging task in computer vision. More than four decades
after the Photometric Stereo problem was first defined, most of the literature
has had limited success when global illumination effects such as cast shadows,
self-reflections and ambient light come into play, especially for specular
surfaces.
Recent approaches have leveraged the power of deep learning in conjunction
with computer graphics to cope with the need for vast amounts of training data
to invert the image irradiance equation and retrieve the geometry of the
object. However, rendering global illumination effects is a slow process,
which limits the amount of training data that can be generated.
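For intuition, the simplest (Lambertian) form of the image irradiance equation relates a pixel's intensity to the dot product of the surface normal and the light direction. The sketch below is a minimal Python/NumPy illustration of this forward model; the function name and the added ambient and noise terms are illustrative assumptions, not the paper's actual rendering pipeline.

```python
import numpy as np

def render_pixel(normal, light_dir, albedo=0.8, ambient=0.05, noise_std=0.01):
    """Lambertian image irradiance for one pixel (illustrative sketch).

    I = albedo * max(0, n . l) + ambient + noise
    Real training data must also account for specular lobes, cast shadows
    and self-reflections, the slow-to-render global effects discussed above.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    shading = max(0.0, float(n @ l))  # the clamp models attached shadows
    return albedo * shading + ambient + np.random.normal(0.0, noise_std)

# Example: a frontal surface lit from up and to the right.
print(render_pixel(np.array([0.0, 0.0, 1.0]), np.array([0.5, 0.5, 0.707])))
```

Inverting this equation means recovering the normal n from many such intensity observations under known lights, which is exactly what the networks discussed here are trained to do.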
In this work we propose a novel pixel-wise training procedure for normal
prediction by replacing the training data (observation maps) of globally
rendered images with independently generated per-pixel data. We show that
global physical effects can be approximated in the observation map domain,
which simplifies and speeds up the data creation procedure.
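As a rough sketch of the data structure involved: in pixel-wise photometric stereo (following the CNN-PS style of observation maps that this work builds on), each pixel's intensities are scattered onto a 2D grid indexed by the x and y components of the corresponding light directions. The grid size and normalization below are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def observation_map(intensities, light_dirs, size=32):
    """Assemble a per-pixel observation map (illustrative sketch).

    intensities: (N,) intensities of one pixel under N lights
    light_dirs:  (N, 3) unit light directions
    Each intensity is written into a size x size grid at the cell
    indexed by the (lx, ly) components of its light direction.
    """
    obs = np.zeros((size, size), dtype=np.float32)
    scale = float(intensities.max()) + 1e-8   # normalize out albedo/light power
    for intensity, l in zip(intensities, light_dirs):
        u = int((l[0] + 1.0) / 2.0 * (size - 1))  # lx in [-1,1] -> column
        v = int((l[1] + 1.0) / 2.0 * (size - 1))  # ly in [-1,1] -> row
        obs[v, u] = intensity / scale
    return obs
```

The paper's key point is that such maps, together with their ground-truth normals, can be generated directly per pixel, with global effects such as shadows and ambient light approximated on the map itself, rather than extracted from slow, globally rendered images.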
Our network, PX-NET, achieves state-of-the-art performance among pixel-wise
methods on synthetic datasets, as well as on the DiLiGenT real dataset in both
dense and sparse light settings.
Related papers
- A CNN Based Approach for the Point-Light Photometric Stereo Problem [26.958763133729846]
We propose a CNN-based approach capable of handling realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo.
Our approach outperforms the state-of-the-art on the DiLiGenT real world dataset.
In order to measure the performance of our approach on near-field point-light source PS data, we introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo'.
arXiv Detail & Related papers (2022-10-10T12:57:12Z) - Learning to Relight Portrait Images via a Virtual Light Stage and
Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z) - Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z) - LUCES: A Dataset for Near-Field Point Light Source Photometric Stereo [30.31403197697561]
We introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo', comprising 14 objects of varying materials.
A device with 52 LEDs was designed to light each object, positioned 10 to 30 centimeters away from the camera.
We evaluate the performance of the latest near-field Photometric Stereo algorithms on the proposed dataset; the near-field light model is sketched after this list.
arXiv Detail & Related papers (2021-04-27T12:30:42Z) - Spatially-Varying Outdoor Lighting Estimation from Intrinsics [66.04683041837784]
We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation.
We generate spatially-varying local lighting environment maps by combining a global sky environment map with warped image information.
Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.
arXiv Detail & Related papers (2021-04-09T02:28:54Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image
Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
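In the near-field setting targeted by LUCES, the usual distant-light assumption breaks down: the light direction and brightness vary from surface point to surface point. Below is a minimal sketch of the standard near-field point-light model (per-point direction, inverse-square falloff, and an LED angular dissipation term); the exponent mu and the example numbers are assumptions, not values from the dataset.

```python
import numpy as np

def point_light_at(surface_pt, light_pos, light_axis, mu=1.0):
    """Near-field point-light direction and attenuation at one surface point.

    Unlike far-field photometric stereo, the light direction depends on
    the point itself, and brightness falls off with the squared distance.
    The cos^mu term models an LED's angular dissipation (assumed value).
    """
    to_light = light_pos - surface_pt
    dist = np.linalg.norm(to_light)
    l = to_light / dist                              # per-point light direction
    axis = light_axis / np.linalg.norm(light_axis)
    anisotropy = max(0.0, float(axis @ (-l))) ** mu  # LED pointing term
    return l, anisotropy / dist**2                   # direction, attenuation

# Example: an LED 20 cm above the origin, pointing straight down.
l, a = point_light_at(np.zeros(3), np.array([0.0, 0.0, 0.2]),
                      np.array([0.0, 0.0, -1.0]))
print(l, a)
```

This per-point variation is what makes near-field evaluation (and datasets such as LUCES, with objects only 10 to 30 centimeters from the camera) substantially harder than the far-field case.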