Relighting Images in the Wild with a Self-Supervised Siamese
Auto-Encoder
- URL: http://arxiv.org/abs/2012.06444v1
- Date: Fri, 11 Dec 2020 16:08:50 GMT
- Title: Relighting Images in the Wild with a Self-Supervised Siamese
Auto-Encoder
- Authors: Yang Liu, Alexandros Neophytou, Sunando Sengupta, Eric Sommerlade
- Abstract summary: We propose a self-supervised method for image relighting of single view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
- Score: 62.580345486483886
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a self-supervised method for image relighting of single view
images in the wild. The method is based on an auto-encoder which deconstructs
an image into two separate encodings, relating to the scene illumination and
content, respectively. In order to disentangle this embedding information
without supervision, we exploit the assumption that some augmentation
operations do not affect the image content and only affect the direction of the
light. A novel loss function, called the spherical harmonic loss, is
introduced that forces the illumination embedding to converge to a spherical
harmonic vector. We train our model on large-scale datasets such as
YouTube-8M and CelebA. Our experiments show that our method can correctly estimate scene
illumination and realistically re-light input images, without any supervision
or a prior shape model. Compared to supervised methods, our approach has
similar performance and avoids common lighting artifacts.
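The two ingredients of the abstract — an illumination embedding constrained to a second-order spherical harmonic (SH) vector, and augmentations that change the light direction but not the content — can be illustrated with a short sketch. This is not code from the paper: it only evaluates the standard 9-term real SH basis (with the usual constants from irradiance rendering) and shows why a horizontal flip of the image corresponds to a simple sign change on the SH lighting vector, which is the kind of symmetry a siamese setup can exploit for self-supervision.

```python
def sh_basis(x, y, z):
    """Second-order (9-term) real spherical harmonic basis evaluated at a
    unit direction (x, y, z), using the standard normalization constants."""
    return [
        0.282095,                        # Y_0^0  (constant term)
        0.488603 * y,                    # Y_1^-1
        0.488603 * z,                    # Y_1^0
        0.488603 * x,                    # Y_1^1
        1.092548 * x * y,                # Y_2^-2
        1.092548 * y * z,                # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2^0
        1.092548 * x * z,                # Y_2^1
        0.546274 * (x * x - y * y),      # Y_2^2
    ]

def shade(sh_coeffs, normal):
    """Shading at a surface normal under an SH-encoded light: the dot
    product of the 9 lighting coefficients with the SH basis."""
    return sum(c * b for c, b in zip(sh_coeffs, sh_basis(*normal)))

def flip_sh(sh_coeffs):
    """How an SH lighting vector transforms under a horizontal flip
    (x -> -x): basis terms containing an odd power of x flip sign."""
    signs = [1, 1, 1, -1, -1, 1, 1, -1, 1]
    return [s * c for s, c in zip(signs, sh_coeffs)]

# Mirroring the scene and mirroring the light are equivalent: shading a
# flipped normal under the original light equals shading the original
# normal under the flipped light. Content is untouched; only the 9-vector
# changes, which is what makes the embedding disentanglement learnable.
light = [0.8, 0.1, 0.4, 0.3, 0.05, 0.0, 0.2, -0.1, 0.0]
normal = (0.6, 0.0, 0.8)
lhs = shade(flip_sh(light), normal)
rhs = shade(light, (-normal[0], normal[1], normal[2]))
assert abs(lhs - rhs) < 1e-9
```

The sign pattern in `flip_sh` is exactly why an SH parameterization is convenient here: simple geometric augmentations act on the illumination code as known linear maps, so the network's light embedding can be supervised against itself across augmented pairs.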
Related papers
- LightIt: Illumination Modeling and Control for Diffusion Models [61.80461416451116]
We introduce LightIt, a method for explicit illumination control for image generation.
Recent generative methods lack lighting control, which is crucial to numerous artistic aspects of image generation.
Our method is the first that enables the generation of images with controllable, consistent lighting.
arXiv Detail & Related papers (2024-03-15T18:26:33Z)
- DiFaReli: Diffusion Face Relighting [13.000032155650835]
We present a novel approach to single-view face relighting in the wild.
Handling non-diffuse effects, such as global illumination or cast shadows, has long been a challenge in face relighting.
We achieve state-of-the-art performance on standard benchmark Multi-PIE and can photorealistically relight in-the-wild images.
arXiv Detail & Related papers (2023-04-19T08:03:20Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Weakly-supervised Single-view Image Relighting [17.49214457620938]
We present a learning-based approach to relight a single image of Lambertian and low-frequency specular objects.
Our method enables inserting objects from photographs into new scenes and relighting them under the new environment lighting.
arXiv Detail & Related papers (2023-03-24T08:20:16Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Spatio-Temporal Outdoor Lighting Aggregation on Image Sequences using Transformer Networks [23.6427456783115]
In this work, we focus on outdoor lighting estimation by aggregating individual noisy estimates from images.
Recent work based on deep neural networks has shown promising results for single image lighting estimation, but suffers from robustness issues.
We tackle this problem by combining lighting estimates from several image views sampled in the angular and temporal domain of an image sequence.
arXiv Detail & Related papers (2022-02-18T14:11:16Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- SILT: Self-supervised Lighting Transfer Using Implicit Image Decomposition [27.72518108918135]
The solution operates as a two-branch network that first aims to map input images of any arbitrary lighting style to a unified domain.
We then remap this unified input domain using a discriminator that is presented with the generated outputs and the style reference.
Our method is shown to outperform supervised relighting solutions across two different datasets without requiring lighting supervision.
arXiv Detail & Related papers (2021-10-25T12:52:53Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.