StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions
- URL: http://arxiv.org/abs/2205.10351v2
- Date: Mon, 1 May 2023 17:59:50 GMT
- Title: StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions
- Authors: Anand Bhattad and D.A. Forsyth
- Abstract summary: We propose a novel method, StyLitGAN, for relighting and resurfacing generated images in the absence of labeled data.
Our approach generates images with realistic lighting effects, including cast shadows, soft shadows, inter-reflections, and glossy effects, without the need for paired or CGI data.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method, StyLitGAN, for relighting and resurfacing
generated images in the absence of labeled data. Our approach generates images
with realistic lighting effects, including cast shadows, soft shadows,
inter-reflections, and glossy effects, without the need for paired or CGI data.
StyLitGAN uses an intrinsic image method to decompose an image, followed by a
search of the latent space of a pre-trained StyleGAN to identify a set of
directions. By prompting the model to fix one component (e.g., albedo) and vary
another (e.g., shading), we generate relighted images by adding the identified
directions to the latent style codes. Quantitative metrics of change in albedo
and lighting diversity allow us to choose effective directions using a forward
selection process. Qualitative evaluation confirms the effectiveness of our
method.
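The abstract describes two concrete steps: adding a found direction to a latent style code, and greedily keeping directions that change shading while leaving albedo alone. Below is a minimal Python sketch of that shape; the generator `G`, the intrinsic decomposition `decompose`, and the scoring are hypothetical stand-ins, not the authors' released code.

```python
# A minimal sketch of the two steps the abstract describes: (1) relight by
# adding a latent direction to the style code, (2) greedily select directions
# that change shading but not albedo. `G` and `decompose` are hypothetical
# stand-ins for a pre-trained StyleGAN and an intrinsic image method.
import numpy as np

STYLE_DIM = 512
rng = np.random.default_rng(0)

def G(w: np.ndarray) -> np.ndarray:
    """Stub generator: style code -> RGB image in [0, 1), shape (64, 64, 3)."""
    seed = int(abs(w).sum() * 1e6) % (2**32)
    return np.random.default_rng(seed).random((64, 64, 3))

def decompose(img: np.ndarray):
    """Stub intrinsic decomposition: image -> (albedo, shading)."""
    shading = img.mean(axis=-1, keepdims=True)
    albedo = img / np.maximum(shading, 1e-6)
    return albedo, shading

def relight(w: np.ndarray, direction: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add a found direction to the style code to change illumination."""
    return G(w + alpha * direction)

def score(w: np.ndarray, direction: np.ndarray) -> float:
    """Reward shading change, penalize albedo change: relight, don't repaint."""
    a0, s0 = decompose(G(w))
    a1, s1 = decompose(relight(w, direction))
    return float(np.abs(s1 - s0).mean() - np.abs(a1 - a0).mean())

# Greedy forward selection over a pool of candidate directions.
w = rng.standard_normal(STYLE_DIM)
candidates = [rng.standard_normal(STYLE_DIM) for _ in range(32)]
chosen = []
for _ in range(4):
    idx = max(range(len(candidates)), key=lambda i: score(w, candidates[i]))
    chosen.append(candidates.pop(idx))
```

In the paper the candidates come from a search of the pre-trained StyleGAN's latent space and the metrics are aggregated over many generated images; the loop above only shows the shape of the forward selection.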
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
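The entry above names quantized priors; the defining operation in codebook-driven pipelines is a nearest-neighbour lookup into a learned codebook. A generic sketch of that technique follows (not CodeEnhance's actual architecture; all names are illustrative):

```python
# Generic vector-quantization step used by codebook-driven methods: replace
# each feature vector with its nearest entry in a learned codebook, so
# restoration draws on high-quality priors. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 64))  # 256 prior codes of dimension 64

def quantize(features: np.ndarray) -> np.ndarray:
    """Snap each of the (N, 64) features to its nearest code (L2 distance)."""
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return codebook[dists.argmin(axis=1)]

feats = rng.standard_normal((10, 64))  # e.g. encoder features of a dark image
restored_feats = quantize(feats)
```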
- Revealing Shadows: Low-Light Image Enhancement Using Self-Calibrated Illumination [4.913568097686369]
Self-Calibrated Illumination (SCI) is a strategy initially developed for RGB images.
We employ the SCI method to intensify and clarify details that are typically lost in low-light conditions.
This method of selective illumination enhancement leaves the color information intact, thus preserving the color integrity of the image.
arXiv Detail & Related papers (2023-12-23T08:49:19Z)
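SCI's exact formulation is not given in the summary, but its key property, brightening without disturbing color, is easy to illustrate: estimate a per-pixel illumination map, boost it, and rescale all three channels by the same factor so the channel ratios (and hence the color) are untouched. A minimal sketch of that property, not the SCI method itself:

```python
# Color-preserving illumination boost: all channels of a pixel are scaled by
# one common factor, so channel ratios (perceived color) survive the edit.
# This illustrates the property described above, not the SCI algorithm.
import numpy as np

def brighten(img: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """img: float RGB in [0, 1], shape (H, W, 3); gamma < 1 brightens."""
    illum = img.max(axis=-1, keepdims=True)          # crude illumination map
    boosted = np.clip(illum, 1e-6, 1.0) ** gamma
    return np.clip(img * boosted / np.maximum(illum, 1e-6), 0.0, 1.0)
```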
- Layered Rendering Diffusion Model for Zero-Shot Guided Image Synthesis [60.260724486834164]
This paper introduces innovative solutions to enhance spatial controllability in diffusion models reliant on text queries.
We present two key innovations: Vision Guidance and the Layered Rendering Diffusion framework.
We apply our method to three practical applications: bounding box-to-image, semantic mask-to-image and image editing.
arXiv Detail & Related papers (2023-11-30T10:36:19Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Designing An Illumination-Aware Network for Deep Image Relighting [69.750906769976]
We present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image.
In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process.
Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T16:21:24Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the state of the art by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Shed Various Lights on a Low-Light Image: Multi-Level Enhancement Guided by Arbitrary References [17.59529931863947]
This paper proposes a neural network for multi-level low-light image enhancement.
Inspired by style transfer, our method decomposes an image into two low-coupling feature components in the latent space.
In such a way, the network learns to extract scene-invariant and brightness-specific information from a set of image pairs.
arXiv Detail & Related papers (2021-01-04T07:38:51Z)
- Style Intervention: How to Achieve Spatial Disentanglement with Style-based Generators? [100.60938767993088]
We propose a lightweight optimization-based algorithm that can adapt to arbitrary input images and render natural translation effects under flexible objectives.
We verify the performance of the proposed framework in facial attribute editing on high-resolution images, where both photo-realism and consistency are required.
arXiv Detail & Related papers (2020-11-19T07:37:31Z)
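An optimization-based latent edit of the kind this entry describes can be sketched in a few lines: freeze the generator and descend on a latent offset that trades an editing objective against staying close to the original code. Everything below (the toy generator, the losses) is a stand-in for illustration, not the paper's algorithm:

```python
# Toy latent optimization: keep the generator fixed and optimize only a
# latent offset `delta` toward a flexible objective, with a proximity term
# so the edit stays faithful to the input. Generator and target are stubs.
import torch

torch.manual_seed(0)
G = lambda w: torch.tanh(w)      # stand-in for a frozen style-based generator
w0 = torch.randn(1, 512)         # latent code of the input image
target = torch.randn(1, 512)     # stand-in for the editing objective

delta = torch.zeros_like(w0, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
for _ in range(200):
    loss = (G(w0 + delta) - target).pow(2).mean() + 0.1 * delta.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
edited = G(w0 + delta)           # edited output, kept near the original
```

The proximity term is what keeps the translation "natural": the edit moves only as far in latent space as the objective requires.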
- Light Direction and Color Estimation from Single Image with Deep Regression [25.45529007045549]
We present a method to estimate the direction and color of the scene light source from a single image.
Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects, built under constraints similar to those of the SID dataset; (b) we train a deep architecture on this dataset to estimate the direction and color of the scene light source.
arXiv Detail & Related papers (2020-09-18T17:33:49Z)
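The summary pins down the output spec, a light direction and a light color regressed from one image, even though the backbone is unspecified. A minimal PyTorch sketch with an invented backbone and two regression heads (illustrative, not the paper's architecture):

```python
# Single-image light regression: a small CNN backbone with two heads, one
# producing a unit light direction and one an RGB light color. The backbone
# here is a guess for illustration, not the paper's network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.direction = nn.Linear(64, 3)  # 3D light direction
        self.color = nn.Linear(64, 3)      # RGB light color

    def forward(self, x):
        h = self.backbone(x)
        d = F.normalize(self.direction(h), dim=-1)  # unit-length direction
        c = torch.sigmoid(self.color(h))            # color in [0, 1]
        return d, c

model = LightRegressor()
direction, color = model(torch.randn(1, 3, 128, 128))
```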
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and accepts no responsibility for any consequences of its use.