SILT: Self-supervised Lighting Transfer Using Implicit Image
Decomposition
- URL: http://arxiv.org/abs/2110.12914v1
- Date: Mon, 25 Oct 2021 12:52:53 GMT
- Title: SILT: Self-supervised Lighting Transfer Using Implicit Image
Decomposition
- Authors: Nikolina Kubiak, Armin Mustafa, Graeme Phillipson, Stephen Jolly,
Simon Hadfield
- Abstract summary: The solution operates as a two-branch network that first aims to map input images of any arbitrary lighting style to a unified domain.
We then remap this unified input domain using a discriminator that is presented with the generated outputs and the style reference.
Our method is shown to outperform supervised relighting solutions across two different datasets without requiring lighting supervision.
- Score: 27.72518108918135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present SILT, a Self-supervised Implicit Lighting Transfer method. Unlike
previous research on scene relighting, we do not seek to apply arbitrary new
lighting configurations to a given scene. Instead, we wish to transfer the
lighting style from a database of other scenes, to provide a uniform lighting
style regardless of the input. The solution operates as a two-branch network
that first aims to map input images of any arbitrary lighting style to a
unified domain, with extra guidance achieved through implicit image
decomposition. We then remap this unified input domain using a discriminator
that is presented with the generated outputs and the style reference, i.e.
images of the desired illumination conditions. Our method is shown to
outperform supervised relighting solutions across two different datasets
without requiring lighting supervision.
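To make the pipeline above concrete, here is a minimal PyTorch sketch of the kind of two-branch setup the abstract describes: an encoder that maps arbitrarily lit inputs into a unified domain, a decoder whose two heads stand in for the implicit image decomposition, and a discriminator that compares generated outputs against style references. All module names, shapes, and loss weights are illustrative assumptions, not the authors' released implementation.

# A minimal sketch of the two-branch setup described in the abstract.
# Module names, shapes and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image of arbitrary lighting style into a unified domain."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Decodes unified features; the two heads are a stand-in for the
    implicit decomposition guidance (reflectance-like and shading-like)."""
    def __init__(self, ch=32):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head_a = nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1)  # reflectance-like
        self.head_b = nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1)  # shading-like
    def forward(self, z):
        h = self.up(z)
        a, b = torch.sigmoid(self.head_a(h)), torch.sigmoid(self.head_b(h))
        return a * b, a, b  # recomposed image plus its two implicit factors

class Discriminator(nn.Module):
    """Judges whether an image matches the reference lighting style."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

enc, dec, disc = Encoder(), Decoder(), Discriminator()
opt_g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

x = torch.rand(4, 3, 64, 64)      # inputs with arbitrary lighting
style = torch.rand(4, 3, 64, 64)  # references in the desired lighting style

# Generator step: reconstruct via the implicit decomposition and fool the critic.
out, refl, shade = dec(enc(x))
pred = disc(out)
loss_g = bce(pred, torch.ones_like(pred)) + (out - x).abs().mean()
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Discriminator step: real = style references, fake = generated outputs.
real, fake = disc(style), disc(out.detach())
loss_d = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

Note the key property this sketch tries to capture: the discriminator never sees paired before/after images, only generated outputs and unpaired style references, which is what removes the need for lighting supervision.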
Related papers
- LightIt: Illumination Modeling and Control for Diffusion Models [61.80461416451116]
We introduce LightIt, a method for explicit illumination control for image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
arXiv Detail & Related papers (2024-03-15T18:26:33Z)
- DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation [16.080481761005203]
We present a novel method for exerting fine-grained lighting control during text-driven image generation.
Our key observation is that we only need to guide the diffusion process, hence exact radiance hints are not necessary.
We demonstrate and validate our lighting controlled diffusion model on a variety of text prompts and lighting conditions.
arXiv Detail & Related papers (2024-02-19T08:17:21Z)
- Local Relighting of Real Scenes [31.305393724281604]
We introduce the task of local relighting, which changes a photograph of a scene by switching on and off the light sources that are visible within the image.
This new task differs from the traditional image relighting problem, as it introduces the challenge of detecting light sources and inferring the pattern of light that emanates from them.
We propose an approach for local relighting that trains a model without supervision on any novel image dataset, using synthetically generated image pairs from another model.
arXiv Detail & Related papers (2022-07-06T16:08:20Z)
- StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions [1.933681537640272]
We propose a novel method, StyLitGAN, for relighting and resurfacing generated images in the absence of labeled data.
Our approach generates images with realistic lighting effects, including cast shadows, soft shadows, inter-reflections, and glossy effects, without the need for paired or CGI data.
arXiv Detail & Related papers (2022-05-20T17:59:40Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Self-supervised Outdoor Scene Relighting [92.20785788740407]
We propose a self-supervised approach for relighting.
Our approach is trained only on corpora of images collected from the internet without any user supervision.
Results show the ability of our technique to produce photo-realistic and physically plausible results that generalize to unseen scenes.
arXiv Detail & Related papers (2021-07-07T09:46:19Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for image relighting of single view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Scene relighting with illumination estimation in the latent space on an encoder-decoder scheme [68.8204255655161]
In this report we present the methods we explored to achieve scene relighting.
Our models are trained on a rendered dataset of artificial locations with varied scene content, light source location and color temperature.
With this dataset, we used a network with an illumination estimation component that infers and replaces the light conditions in the latent-space representation of the scenes (a minimal sketch of this latent-swap idea follows this list).
arXiv Detail & Related papers (2020-06-03T15:25:11Z)
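The last entry's encoder-decoder approach can be illustrated with a short, self-contained sketch: encode both the scene and a reference image, swap the portion of the latent code assumed to carry illumination, and decode. The network sizes and the choice of which latent dimensions encode lighting are hypothetical, for illustration only.

# A minimal sketch of latent-space illumination swapping in an
# encoder-decoder. All names and the latent split are assumptions.
import torch
import torch.nn as nn

LATENT, LIGHT = 128, 16  # assumed sizes: total latent dims, lighting dims

class LatentRelighter(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, LATENT),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, scene, light_ref):
        z_scene = self.encoder(scene)    # content + lighting of the input
        z_ref = self.encoder(light_ref)  # only its lighting slice is used
        # Swap the (assumed) lighting dimensions of the latent code.
        z = torch.cat([z_scene[:, :-LIGHT], z_ref[:, -LIGHT:]], dim=1)
        return self.decoder(z)

model = LatentRelighter()
scene = torch.rand(2, 3, 64, 64)   # scene to relight
target = torch.rand(2, 3, 64, 64)  # image with the desired illumination
relit = model(scene, target)       # same content, swapped light conditions
print(relit.shape)                 # torch.Size([2, 3, 64, 64])

In a trained model of this kind, the disentanglement of the lighting dimensions would have to be enforced by the training objective; the swap itself is just a slice-and-concatenate on the latent code, as shown.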