Latent Intrinsics Emerge from Training to Relight
- URL: http://arxiv.org/abs/2405.21074v1
- Date: Fri, 31 May 2024 17:59:12 GMT
- Title: Latent Intrinsics Emerge from Training to Relight
- Authors: Xiao Zhang, William Gao, Seemandhar Jain, Michael Maire, David A. Forsyth, Anand Bhattad
- Abstract summary: This paper describes a relighting method that is entirely data-driven, where intrinsics and lighting are each represented as latent variables.
We show that albedo can be recovered from our latent intrinsics without using any example albedos, and that the albedos recovered are competitive with SOTA methods.
- Score: 21.766083733177652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image relighting is the task of showing what a scene from a source image would look like if illuminated differently. Inverse graphics schemes recover an explicit representation of geometry and a set of chosen intrinsics, then relight with some form of renderer. However, error control for inverse graphics is difficult, and inverse graphics methods can represent only the effects of the chosen intrinsics. This paper describes a relighting method that is entirely data-driven, where intrinsics and lighting are each represented as latent variables. Our approach produces SOTA relightings of real scenes, as measured by standard metrics. We show that albedo can be recovered from our latent intrinsics without using any example albedos, and that the albedos recovered are competitive with SOTA methods.
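A minimal sketch of how a data-driven relighting model with separate latent intrinsics and latent lighting could be organized: one encoder produces both codes, a decoder renders an image from the pair, and a scene is relit by combining its own intrinsic code with the lighting code of a reference image. The module names, layer sizes, and swap-based interface below are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of relighting with latent intrinsics and latent lighting.
# Architecture, names, and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class LatentRelighter(nn.Module):
    def __init__(self, intrinsic_dim=256, light_dim=16):
        super().__init__()
        # A shared backbone whose features are split into an intrinsic code and a lighting code.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_intrinsics = nn.Linear(128, intrinsic_dim)
        self.to_light = nn.Linear(128, light_dim)
        # Decoder renders an image from the concatenated (intrinsics, lighting) pair.
        self.decoder = nn.Sequential(
            nn.Linear(intrinsic_dim + light_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def encode(self, img):
        feat = self.backbone(img)
        return self.to_intrinsics(feat), self.to_light(feat)

    def forward(self, src_img, ref_img):
        # Relight: keep the source image's intrinsics, borrow the reference image's lighting.
        intrinsics, _ = self.encode(src_img)
        _, light = self.encode(ref_img)
        return self.decoder(torch.cat([intrinsics, light], dim=1))

model = LatentRelighter()
src = torch.rand(1, 3, 32, 32)   # scene to be relit
ref = torch.rand(1, 3, 32, 32)   # same scene under the target illumination
relit = model(src, ref)          # (1, 3, 32, 32) relit image
```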
Related papers
- A General Albedo Recovery Approach for Aerial Photogrammetric Images through Inverse Rendering [7.874736360019618]
This paper presents a general image formation model for albedo recovery from typical aerial photogrammetric images under natural illuminations.
Our approach builds on the fact that both the sun illumination and scene geometry are estimable in aerial photogrammetry.
arXiv Detail & Related papers (2024-09-04T18:58:32Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Geometry-aware Single-image Full-body Human Relighting [37.381122678376805]
Single-image human relighting aims to relight a target human under new lighting conditions by decomposing the input image into albedo, shape and lighting.
Previous methods suffer from both the entanglement between albedo and lighting and the lack of hard shadows.
Our framework is able to generate photo-realistic high-frequency shadows such as cast shadows under challenging lighting conditions.
arXiv Detail & Related papers (2022-07-11T10:21:02Z)
- IRISformer: Dense Vision Transformers for Single-Image Inverse Rendering in Indoor Scenes [99.76677232870192]
We show how a dense vision transformer, IRISformer, excels at both single-task and multi-task reasoning required for inverse rendering.
Specifically, we propose a transformer architecture to simultaneously estimate depths, normals, spatially-varying albedo, roughness and lighting from a single image of an indoor scene.
Our evaluations on benchmark datasets demonstrate state-of-the-art results on each of the above tasks, enabling applications like object insertion and material editing in a single unconstrained real image.
arXiv Detail & Related papers (2022-06-16T19:50:55Z)
- Modeling Indirect Illumination for Inverse Rendering [31.734819333921642]
In this paper, we propose a novel approach to efficiently recovering spatially-varying indirect illumination.
The key insight is that indirect illumination can be conveniently derived from the neural radiance field learned from input images.
Experiments on both synthetic and real data demonstrate the superior performance of our approach compared to previous work.
arXiv Detail & Related papers (2022-04-14T09:10:55Z)
- Self-supervised Outdoor Scene Relighting [92.20785788740407]
We propose a self-supervised approach for relighting.
Our approach is trained only on corpora of images collected from the internet without any user-supervision.
Results show that our technique produces photo-realistic and physically plausible relightings that generalize to unseen scenes.
arXiv Detail & Related papers (2021-07-07T09:46:19Z)
- De-rendering the World's Revolutionary Artefacts [65.60220069214591]
We propose a method that can recover environment illumination and surface materials from real single-image collections.
We focus on rotationally symmetric artefacts that exhibit challenging surface properties including reflections, such as vases.
We present an end-to-end learning framework that is able to de-render the world's revolutionary artefacts.
arXiv Detail & Related papers (2021-04-08T17:56:16Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for relighting single-view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
- Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images [59.53382863519189]
We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting.
At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids.
We show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes (a sketch of this kind of representation follows this list).
arXiv Detail & Related papers (2020-07-20T05:38:11Z)
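For the Deep Reflectance Volumes entry above, a minimal sketch of the voxel-grid scene representation its abstract describes, with per-voxel opacity, surface normal, and reflectance, plus a material edit of the kind the abstract mentions. The class name, resolution, and channel layout are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical voxel-grid scene representation with opacity, normal, and reflectance
# channels, as described in the Deep Reflectance Volumes abstract above.
# Resolution and channel layout are illustrative assumptions.
import torch

class ReflectanceVolume:
    def __init__(self, resolution=64):
        r = resolution
        self.opacity = torch.zeros(1, r, r, r)       # alpha per voxel
        self.normal = torch.zeros(3, r, r, r)        # surface normal per voxel
        self.reflectance = torch.zeros(3, r, r, r)   # e.g. diffuse albedo per voxel

    def edit_material(self, mask, new_albedo):
        # Editability claim from the abstract: swap reflectance in a masked region.
        self.reflectance[:, mask] = new_albedo.view(3, 1)

vol = ReflectanceVolume(resolution=32)
vol.opacity[0, :16] = 1.0                      # pretend half the volume is occupied
mask = vol.opacity[0] > 0.5                    # boolean (r, r, r) mask of occupied voxels
vol.edit_material(mask, torch.tensor([0.8, 0.1, 0.1]))
```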