Light Stage Super-Resolution: Continuous High-Frequency Relighting
- URL: http://arxiv.org/abs/2010.08888v1
- Date: Sat, 17 Oct 2020 23:40:43 GMT
- Title: Light Stage Super-Resolution: Continuous High-Frequency Relighting
- Authors: Tiancheng Sun, Zexiang Xu, Xiuming Zhang, Sean Fanello, Christoph
Rhemann, Paul Debevec, Yun-Ta Tsai, Jonathan T. Barron, Ravi Ramamoorthi
- Abstract summary: We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
- Score: 58.09243542908402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The light stage has been widely used in computer graphics for the past two
decades, primarily to enable the relighting of human faces. By capturing the
appearance of the human subject under different light sources, one obtains the
light transport matrix of that subject, which enables image-based relighting in
novel environments. However, due to the finite number of lights in the stage, the
light transport matrix only represents a sparse sampling of the entire sphere of
light directions. As a consequence, relighting the subject with a point light or a
directional source that does not coincide exactly with one of the lights in the
stage requires interpolating and resampling the images corresponding to nearby
lights, and this leads to ghosting shadows, aliased specularities, and other
artifacts. To ameliorate these artifacts and produce better results under
arbitrary high-frequency lighting, this paper proposes a learning-based
solution for the "super-resolution" of scans of human faces taken from a light
stage. Given an arbitrary "query" light direction, our method aggregates the
captured images corresponding to neighboring lights in the stage, and uses a
neural network to synthesize a rendering of the face that appears to be
illuminated by a "virtual" light source at the query location. This neural
network must circumvent the inherent aliasing and regularity of the light stage
data that was used for training, which we accomplish through the use of
regularized traditional interpolation methods within our network. Our learned
model is able to produce renderings for arbitrary light directions that exhibit
realistic shadows and specular highlights, and is able to generalize across a
wide variety of subjects.
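As a rough illustration of the pipeline described in the abstract, the sketch below implements the traditional baseline that the learned model improves on: image-based relighting from one-light-at-a-time (OLAT) light stage captures, where an arbitrary query light is rendered by blending the images of the nearest stage lights, and a novel environment is rendered as a weighted sum over all lights (the light transport matrix applied to per-light intensities). The light count, array shapes, softmax weighting, and all function names are illustrative assumptions, not the authors' implementation; in the paper, a neural network replaces the naive blend with a synthesized rendering for the virtual light.

# Minimal sketch (not the authors' code) of naive light stage relighting.
import numpy as np

def nearest_light_weights(light_dirs, query_dir, k=8, sharpness=32.0):
    # Softmax weights over the k stage lights closest to the query direction.
    # light_dirs: (N, 3) unit vectors of the stage's physical lights.
    # query_dir:  (3,) unit vector of the desired virtual light.
    cosines = light_dirs @ query_dir          # angular similarity to the query
    idx = np.argsort(-cosines)[:k]            # indices of the k nearest lights
    logits = sharpness * cosines[idx]
    weights = np.exp(logits - logits.max())   # numerically stable softmax
    return idx, weights / weights.sum()

def blend_relight(images, light_dirs, query_dir, k=8):
    # Naive rendering for a virtual light: weighted average of the OLAT images
    # of neighboring stage lights. Ghosted shadows and aliased specular
    # highlights from this kind of blend are the artifacts the learned
    # super-resolution model is meant to remove.
    idx, weights = nearest_light_weights(light_dirs, query_dir, k)
    return np.tensordot(weights, images[idx], axes=1)

def relight_in_environment(images, env_intensities):
    # Image-based relighting in a novel environment: the OLAT captures act as
    # the columns of the light transport matrix, so the relit image is a
    # weighted sum of them with per-light environment intensities.
    return np.tensordot(env_intensities, images, axes=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_lights = 300                            # illustrative; real stages have a few hundred lights
    dirs = rng.normal(size=(n_lights, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    olat = rng.random((n_lights, 16, 16, 3)).astype(np.float32)   # toy OLAT scan
    query = np.array([0.0, 0.0, 1.0])         # virtual light straight ahead
    print(blend_relight(olat, dirs, query).shape)                 # (16, 16, 3)
    print(relight_in_environment(olat, rng.random(n_lights)).shape)

In this sketch, raising the sharpness parameter narrows the blend toward a nearest-neighbor lookup, a crude knob that trades ghosted shadows for abrupt popping between lights; that is the kind of artifact the learned super-resolution model is intended to avoid.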
Related papers
- All-frequency Full-body Human Image Relighting [1.529342790344802]
Relighting of human images enables post-photography editing of lighting effects in portraits.
The current mainstream approach uses neural networks to approximate lighting effects without explicitly accounting for the principle of physical shading.
We propose a two-stage relighting method that can reproduce physically-based shadows and shading from low to high frequencies.
arXiv Detail & Related papers (2024-11-01T04:45:48Z)
- Relightful Harmonization: Lighting-aware Portrait Background Replacement [23.19641174787912]
We introduce Relightful Harmonization, a lighting-aware diffusion model designed to seamlessly harmonize sophisticated lighting effects for the foreground portrait using any background image.
Our approach unfolds in three stages. First, we introduce a lighting representation module that allows our diffusion model to encode lighting information from the target image background.
Second, we introduce an alignment network that aligns lighting features learned from the image background with lighting features learned from panorama environment maps.
arXiv Detail & Related papers (2023-12-11T23:20:31Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Lens flare artifacts can affect image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light sources recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Universal Photometric Stereo Network using Global Lighting Contexts [4.822598110892846]
This paper tackles a new photometric stereo task, named universal photometric stereo.
It is supposed to work for objects with diverse shapes and materials under arbitrary lighting variations without assuming any specific models.
arXiv Detail & Related papers (2022-06-06T09:32:06Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Towards Geometry Guided Neural Relighting with Flash Photography [26.511476565209026]
We propose a framework for image relighting from a single flash photograph with its corresponding depth map using deep learning.
We experimentally validate the advantage of our geometry guided approach over state-of-the-art image-based approaches in intrinsic image decomposition and image relighting.
arXiv Detail & Related papers (2020-08-12T08:03:28Z)
- Neural Light Transport for Relighting and View Synthesis [70.39907425114302]
Light transport (LT) of a scene describes how it appears under different lighting and viewing directions.
We propose a semi-parametric approach to learn a neural representation of LT embedded in a texture atlas of known geometric properties.
We show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition.
arXiv Detail & Related papers (2020-08-09T20:13:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.