Light Direction and Color Estimation from Single Image with Deep
Regression
- URL: http://arxiv.org/abs/2009.08941v1
- Date: Fri, 18 Sep 2020 17:33:49 GMT
- Authors: Hassan A. Sial, Ramon Baldrich, Maria Vanrell, Dimitris Samaras
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method to estimate the direction and color of the scene
light source from a single image. Our method is based on two main ideas: (a) we
use a new synthetic dataset with strong shadow effects, built under constraints
similar to those of the SID dataset; (b) we define a deep architecture, trained
on this dataset, that estimates the direction and color of the scene light
source. Apart from showing good performance on synthetic images, we propose a
preliminary procedure to obtain light positions for the Multi-Illumination
dataset, and in this way we also show that our trained model performs well when
applied to real scenes.
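The paper does not reproduce its architecture here, but the two regression targets it describes (a light direction and a light color) can be illustrated with a small sketch. The function names, the 6-D output layout, and the loss weighting below are hypothetical, not taken from the paper: they show one common way to decode a network's raw regression output into a unit direction vector and an RGB color, and to score both against ground truth.

```python
import numpy as np

def decode_light_params(raw):
    """Split a hypothetical 6-D regression output into a unit light
    direction (first 3 values) and an RGB color in [0, 1] (last 3)."""
    raw = np.asarray(raw, dtype=float)
    direction = raw[:3] / np.linalg.norm(raw[:3])  # unit-normalize direction
    color = 1.0 / (1.0 + np.exp(-raw[3:]))         # sigmoid maps color to [0, 1]
    return direction, color

def light_loss(pred_dir, true_dir, pred_col, true_col, w=1.0):
    """Angular error (radians) on direction plus weighted squared error
    on color; w is an assumed balancing weight."""
    cos = np.clip(np.dot(pred_dir, true_dir), -1.0, 1.0)
    angular = np.arccos(cos)
    color_l2 = np.sum((pred_col - true_col) ** 2)
    return angular + w * color_l2
```

A perfect prediction yields zero loss, since the angular term vanishes when the unit vectors coincide and the color term vanishes when the colors match.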
Related papers
- Factored-NeuS: Reconstructing Surfaces, Illumination, and Materials of
Possibly Glossy Objects [46.04357263321969]
We develop a method that recovers the surface, materials, and illumination of a scene from its posed multi-view images.
It does not require any additional data and can handle glossy objects or bright lighting.
arXiv Detail & Related papers (2023-05-29T07:44:19Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields, building on TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Physics-based Indirect Illumination for Inverse Rendering [70.27534648770057]
We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images.
As a side product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting.
arXiv Detail & Related papers (2022-12-09T07:33:49Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions [1.933681537640272]
We propose a novel method, StyLitGAN, for relighting and resurfacing generated images in the absence of labeled data.
Our approach generates images with realistic lighting effects, including cast shadows, soft shadows, inter-reflections, and glossy effects, without the need for paired or CGI data.
arXiv Detail & Related papers (2022-05-20T17:59:40Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIB-R++, a hybrid differentiable renderer that supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant thanks to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Towards Geometry Guided Neural Relighting with Flash Photography [26.511476565209026]
We propose a framework for image relighting from a single flash photograph with its corresponding depth map using deep learning.
We experimentally validate the advantage of our geometry guided approach over state-of-the-art image-based approaches in intrinsic image decomposition and image relighting.
arXiv Detail & Related papers (2020-08-12T08:03:28Z)
- Scene relighting with illumination estimation in the latent space on an encoder-decoder scheme [68.8204255655161]
In this report we present the methods we explored to achieve scene relighting.
Our models are trained on a rendered dataset of artificial locations with varied scene content, light source location and color temperature.
With this dataset, we used a network with an illumination estimation component that aims to infer and replace light conditions in the latent-space representation of the scenes concerned.
arXiv Detail & Related papers (2020-06-03T15:25:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.