Universal Photometric Stereo Network using Global Lighting Contexts
- URL: http://arxiv.org/abs/2206.02452v1
- Date: Mon, 6 Jun 2022 09:32:06 GMT
- Title: Universal Photometric Stereo Network using Global Lighting Contexts
- Authors: Satoshi Ikehata
- Abstract summary: This paper tackles a new photometric stereo task, named universal photometric stereo.
It is supposed to work for objects with diverse shapes and materials under arbitrary lighting variations without assuming any specific models.
- Score: 4.822598110892846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper tackles a new photometric stereo task, named universal
photometric stereo. Unlike existing tasks, which assume specific physical
lighting models and are therefore drastically limited in usability, a solution
to this task is supposed to work for objects with diverse shapes and materials
under arbitrary lighting variations without assuming any specific model. To
solve this extremely challenging task, we present a purely data-driven method,
which eliminates the prior assumption on lighting by replacing the recovery of
physical lighting parameters with the extraction of a generic lighting
representation, named global lighting contexts. We use these contexts like
lighting parameters in a calibrated photometric stereo network to recover
surface normal vectors pixelwise. To adapt our network to a wide variety of
shapes, materials, and lighting conditions, it is trained on a new synthetic
dataset that simulates the appearance of objects in the wild. Our method is
compared with other state-of-the-art uncalibrated photometric stereo methods on
our test data to demonstrate its significance.
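The paper's network is not reproduced here, but the calibrated photometric stereo setting it builds on can be sketched: under a Lambertian reflectance assumption, known light directions allow per-pixel normals to be recovered by least squares. The function name and array shapes below are illustrative, not from the paper.

```python
import numpy as np

def lambertian_normals(intensities, light_dirs):
    """Recover per-pixel surface normals under the Lambertian model.

    intensities: (K, P) array, K images of P pixels each.
    light_dirs:  (K, 3) array of unit lighting directions.

    Under Lambert's law, I = L @ (rho * n), so the albedo-scaled
    normal b = rho * n is the least-squares solution of L b = I.
    """
    b, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, P)
    albedo = np.linalg.norm(b, axis=0)                            # rho per pixel
    normals = b / np.clip(albedo, 1e-8, None)                     # unit normals
    return normals.T, albedo  # (P, 3) normals, (P,) albedos
```

With three or more non-coplanar lights this system is well-posed; universal photometric stereo removes exactly this requirement of known, modeled lighting.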
Related papers
- LIPIDS: Learning-based Illumination Planning In Discretized (Light) Space for Photometric Stereo [19.021200954913475]
Photometric stereo is a powerful method for obtaining per-pixel surface normals from differently illuminated images of an object.
Finding an optimal illumination configuration is challenging due to the vast number of possible lighting directions.
We introduce LIPIDS - Learning-based Illumination Planning In Discretized light Space.
arXiv Detail & Related papers (2024-09-01T09:54:16Z) - MERLiN: Single-Shot Material Estimation and Relighting for Photometric Stereo [26.032964551717548]
Photometric stereo typically demands intricate data acquisition setups involving multiple light sources to recover surface normals accurately.
We propose MERLiN, an attention-based hourglass network that integrates single image-based inverse rendering and relighting within a single unified framework.
arXiv Detail & Related papers (2024-09-01T09:32:03Z) - Deep Learning Methods for Calibrated Photometric Stereo and Beyond [86.57469194387264]
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues.
Deep learning methods have shown a powerful ability in the context of photometric stereo against non-Lambertian surfaces.
arXiv Detail & Related papers (2022-12-16T11:27:44Z) - A CNN Based Approach for the Point-Light Photometric Stereo Problem [26.958763133729846]
We propose a CNN-based approach capable of handling realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo.
Our approach outperforms the state-of-the-art on the DiLiGenT real world dataset.
In order to measure the performance of our approach on near-field point-light source PS data, we introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo'.
arXiv Detail & Related papers (2022-10-10T12:57:12Z) - Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z) - LUCES: A Dataset for Near-Field Point Light Source Photometric Stereo [30.31403197697561]
We introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo', comprising 14 objects of varying materials.
A device with 52 LEDs was designed to illuminate each object, positioned 10 to 30 centimeters away from the camera.
We evaluate the performance of the latest near-field Photometric Stereo algorithms on the proposed dataset.
arXiv Detail & Related papers (2021-04-27T12:30:42Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - GMLight: Lighting Estimation via Geometric Distribution Approximation [86.95367898017358]
This paper presents a lighting estimation framework that employs a regression network and a generative projector for effective illumination estimation.
We parameterize illumination scenes in terms of the geometric light distribution, light intensity, ambient term, and auxiliary depth, and estimate them as a pure regression task.
With the estimated lighting parameters, the generative projector synthesizes panoramic illumination maps with realistic appearance and frequency.
arXiv Detail & Related papers (2021-02-20T03:31:52Z) - Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces [103.08512487830669]
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem.
Existing neural network-based methods either require exact light directions or ground-truth surface normals of the object or both.
We propose an uncalibrated neural inverse rendering approach to this problem.
arXiv Detail & Related papers (2020-12-12T10:33:08Z) - Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.