DeepPS2: Revisiting Photometric Stereo Using Two Differently Illuminated
Images
- URL: http://arxiv.org/abs/2207.02025v1
- Date: Tue, 5 Jul 2022 13:14:10 GMT
- Title: DeepPS2: Revisiting Photometric Stereo Using Two Differently Illuminated
Images
- Authors: Ashish Tiwari and Shanmuganathan Raman
- Abstract summary: Photometric stereo is a problem of recovering 3D surface normals using images of an object captured under different lightings.
We propose an inverse rendering-based deep learning framework, called DeepPS2, that jointly performs surface normal, albedo, lighting estimation, and image relighting.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Photometric stereo, a problem of recovering 3D surface normals using images
of an object captured under different lightings, has been of great interest and
importance in computer vision research. Despite the success of existing
traditional and deep learning-based methods, it is still challenging due to:
(i) the requirement of three or more differently illuminated images, (ii) the
inability to model unknown general reflectance, and (iii) the requirement of
accurate 3D ground truth surface normals and known lighting information for
training. In this work, we attempt to address an under-explored problem of
photometric stereo using just two differently illuminated images, referred to
as the PS2 problem. It is an intermediate case between a single image-based
reconstruction method like Shape from Shading (SfS) and the traditional
Photometric Stereo (PS), which requires three or more images. We propose an
inverse rendering-based deep learning framework, called DeepPS2, that jointly
performs surface normal, albedo, lighting estimation, and image relighting in a
completely self-supervised manner with no requirement of ground truth data. We
demonstrate how image relighting in conjunction with image reconstruction
enhances the lighting estimation in a self-supervised setting.
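The traditional calibrated PS setting that the abstract contrasts with can be sketched as a toy Lambertian example (all values here are illustrative, not from the paper): with three or more known light directions, per-pixel albedo and normal follow from a linear least-squares solve, and it is exactly this system that becomes underdetermined in the two-image PS2 case.

```python
import numpy as np

# Hypothetical setup: three known unit light directions (rows of L) and the
# intensities a single Lambertian pixel records under each light.
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
albedo_true = 0.7
n_true = np.array([0.0, 0.0, 1.0])
i = albedo_true * L @ n_true  # Lambertian image formation: i = rho * L @ n

# Least-squares solve for g = rho * n; with 3+ non-coplanar lights this is
# well posed, with only 2 lights (PS2) it is underdetermined.
g, *_ = np.linalg.lstsq(L, i, rcond=None)
albedo = np.linalg.norm(g)  # recovered albedo rho
normal = g / albedo         # recovered unit surface normal
```

Dropping any row of `L` leaves a 2x3 system with a one-dimensional null space, which is why PS2 needs additional priors such as the learned ones DeepPS2 provides.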
Related papers
- Lite2Relight: 3D-aware Single Image Portrait Relighting [87.62069509622226]
Lite2Relight is a novel technique that predicts 3D-consistent head poses of portraits.
By utilizing a pre-trained geometry-aware encoder and a feature alignment module, we map input images into a relightable 3D space.
This includes producing 3D-consistent results of the full head, including hair, eyes, and expressions.
arXiv Detail & Related papers (2024-07-15T07:16:11Z)
- DiFaReli: Diffusion Face Relighting [13.000032155650835]
We present a novel approach to single-view face relighting in the wild.
Handling non-diffuse effects, such as global illumination or cast shadows, has long been a challenge in face relighting.
We achieve state-of-the-art performance on standard benchmark Multi-PIE and can photorealistically relight in-the-wild images.
arXiv Detail & Related papers (2023-04-19T08:03:20Z)
- Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that learns a significantly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that models 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
- Deep Portrait Lighting Enhancement with 3D Guidance [24.01582513386902]
We present a novel deep learning framework for portrait lighting enhancement based on 3D facial guidance.
Experimental results on the FFHQ dataset and in-the-wild images show that the proposed method outperforms state-of-the-art methods in terms of both quantitative metrics and visual quality.
arXiv Detail & Related papers (2021-08-04T15:49:09Z)
- Shape, Illumination, and Reflectance from Shading [86.71603503678216]
A fundamental problem in computer vision is that of inferring the intrinsic, 3D structure of the world from flat, 2D images.
We find that certain explanations are more likely than others: surfaces tend to be smooth, paint tends to be uniform, and illumination tends to be natural.
Our technique can be viewed as a superset of several classic computer vision problems.
arXiv Detail & Related papers (2020-10-07T18:14:41Z)
- A CNN Based Approach for the Near-Field Photometric Stereo Problem [26.958763133729846]
We propose the first CNN-based approach capable of handling realistic assumptions in Photometric Stereo.
We leverage recent improvements of deep neural networks for far-field Photometric Stereo and adapt them to the near-field setup.
Our method outperforms competing state-of-the-art near-field Photometric Stereo approaches on both synthetic and real experiments.
arXiv Detail & Related papers (2020-09-12T13:28:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.