Weakly-supervised Single-view Image Relighting
- URL: http://arxiv.org/abs/2303.13852v1
- Date: Fri, 24 Mar 2023 08:20:16 GMT
- Title: Weakly-supervised Single-view Image Relighting
- Authors: Renjiao Yi, Chenyang Zhu, Kai Xu
- Abstract summary: We present a learning-based approach to relight a single image of Lambertian and low-frequency specular objects.
Our method enables inserting objects from photographs into new scenes and relighting them under the new environment lighting.
- Score: 17.49214457620938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a learning-based approach to relight a single image of Lambertian
and low-frequency specular objects. Our method enables inserting objects from
photographs into new scenes and relighting them under the new environment
lighting, which is essential for AR applications. To relight the object, we
solve both inverse rendering and re-rendering. To resolve the ill-posed inverse
rendering, we propose a weakly-supervised method by a low-rank constraint. To
facilitate the weakly-supervised training, we contribute Relit, a large-scale
(750K images) dataset of videos with aligned objects under changing
illuminations. For re-rendering, we propose a differentiable specular rendering
layer to render low-frequency non-Lambertian materials under various
illuminations of spherical harmonics. The whole pipeline is end-to-end and
efficient, allowing for a mobile app implementation of AR object insertion.
Extensive evaluations demonstrate that our method achieves state-of-the-art
performance. Project page: https://renjiaoyi.github.io/relighting/.
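The re-rendering step shades materials under environment lighting expressed in spherical harmonics. As a point of reference for how SH lighting works, here is a minimal sketch of the standard order-2 SH diffuse (Lambertian) irradiance formula E(n) = n^T M n from Ramamoorthi and Hanrahan; this illustrates the lighting representation only, not the paper's specular rendering layer.

```python
import numpy as np

# Standard constants for the order-2 SH irradiance quadratic form.
C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_irradiance(L, normal):
    """Diffuse irradiance at a surface point.

    L: 9 SH lighting coefficients, ordered
       [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22].
    normal: unit surface normal (x, y, z).
    """
    L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22 = L
    # Symmetric 4x4 matrix M encoding the SH lighting.
    M = np.array([
        [C1 * L22,   C1 * L2m2,  C1 * L21,  C2 * L11],
        [C1 * L2m2, -C1 * L22,   C1 * L2m1, C2 * L1m1],
        [C1 * L21,   C1 * L2m1,  C3 * L20,  C2 * L10],
        [C2 * L11,   C2 * L1m1,  C2 * L10,  C4 * L00 - C5 * L20],
    ])
    x, y, z = normal
    n = np.array([x, y, z, 1.0])  # homogeneous normal
    return float(n @ M @ n)

# Constant (ambient-only) lighting: only the DC coefficient L00 is nonzero,
# so the irradiance is the same for every surface normal.
L = [1.0] + [0.0] * 8
e_up = sh_irradiance(L, (0.0, 0.0, 1.0))
e_side = sh_irradiance(L, (1.0, 0.0, 0.0))
```

Under ambient-only lighting both normals receive the same irradiance (C4 * L00 ≈ 0.886227), which is a quick sanity check on the formula.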
Related papers
- Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z)
- IllumiNeRF: 3D Relighting Without Inverse Rendering [25.642960820693947]
We show how to relight each input image using an image diffusion model conditioned on target environment lighting and estimated object geometry.
We reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting.
We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks.
arXiv Detail & Related papers (2024-06-10T17:59:59Z)
- SHINOBI: Shape and Illumination using Neural Object Decomposition via BRDF Optimization In-the-wild [76.21063993398451]
Inverse rendering of an object based on unconstrained image collections is a long-standing challenge in computer vision and graphics.
We show that an implicit shape representation based on a multi-resolution hash encoding enables faster and robust shape reconstruction.
Our method is class-agnostic and works on in-the-wild image collections of objects to produce relightable 3D assets.
arXiv Detail & Related papers (2024-01-18T18:01:19Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields [65.96818069005145]
Vanilla NeRF is viewer-centred and simplifies the rendering process to light emission from 3D locations in the viewing direction.
Inspired by the emission theory of the ancient Greeks, we make slight modifications to vanilla NeRF to train on multiple views of low-light scenes.
We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage.
arXiv Detail & Related papers (2023-03-10T09:28:09Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for relighting single-view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.