IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images
- URL: http://arxiv.org/abs/2401.12977v3
- Date: Mon, 17 Mar 2025 20:16:19 GMT
- Title: IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images
- Authors: Chih-Hao Lin, Jia-Bin Huang, Zhengqin Li, Zhao Dong, Christian Richardt, Tuotuo Li, Michael Zollhöfer, Johannes Kopf, Shenlong Wang, Changil Kim
- Abstract summary: Inverse rendering seeks to recover 3D geometry, surface material, and lighting from captured images. In response, we introduce IRIS, an inverse rendering framework that recovers physically based material, spatially-varying HDR lighting, and camera response functions from multi-view LDR images. We evaluate our approach on real-world and synthetic scenes and compare it with state-of-the-art methods.
- Score: 43.82482706404381
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverse rendering seeks to recover 3D geometry, surface material, and lighting from captured images, enabling advanced applications such as novel-view synthesis, relighting, and virtual object insertion. However, most existing techniques rely on high dynamic range (HDR) images as input, limiting accessibility for general users. In response, we introduce IRIS, an inverse rendering framework that recovers the physically based material, spatially-varying HDR lighting, and camera response functions from multi-view, low-dynamic-range (LDR) images. By eliminating the dependence on HDR input, we make inverse rendering technology more accessible. We evaluate our approach on real-world and synthetic scenes and compare it with state-of-the-art methods. Our results show that IRIS effectively recovers HDR lighting, accurate material, and plausible camera response functions, supporting photorealistic relighting and object insertion.
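To make the LDR-input setting concrete, the sketch below illustrates a simple image formation model of the kind such a framework must invert: HDR scene radiance is scaled by exposure, passed through a camera response function (CRF), and clipped by sensor saturation. The gamma-style CRF and the function name `apply_crf` are illustrative assumptions, not the parameterization used in IRIS.

```python
import numpy as np

def apply_crf(hdr_radiance, exposure=1.0, gamma=2.2):
    """Project HDR radiance to an LDR observation.

    A minimal sketch (not the IRIS formulation): scale by exposure,
    apply a gamma-style placeholder for the camera response function,
    then clip to [0, 1]. Values above the clipping point are lost,
    which is why recovering HDR lighting from LDR inputs is ill-posed.
    """
    scaled = np.clip(hdr_radiance * exposure, 0.0, None)  # exposure scaling
    ldr = scaled ** (1.0 / gamma)                          # placeholder CRF
    return np.clip(ldr, 0.0, 1.0)                          # sensor saturation

# Example: bright pixels saturate identically under the longer exposure,
# so multiple exposures (or learned priors) are needed to disambiguate them.
hdr = np.array([0.05, 0.8, 5.0, 50.0])
print(apply_crf(hdr, exposure=1.0))
print(apply_crf(hdr, exposure=0.1))
```

Because clipping discards information, any method in this setting must rely on multiple views or exposures, or on learned priors, to recover the missing highlights.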
Related papers
- LEDiff: Latent Exposure Diffusion for HDR Generation [11.669442066168244]
LEDiff enables a generative model to produce HDR content through latent-space exposure fusion.
It also functions as an LDR-to-HDR converter, expanding the dynamic range of existing low dynamic range images.
arXiv Detail & Related papers (2024-12-19T02:15:55Z) - Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z) - Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field [23.92087253022495]
Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos.
Their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis.
We present a lightweight analytical approach based on 3D Gaussian Splatting that takes multi-view LDR images with varying exposure times, apertures, and focus distances as input to reconstruct a high-dynamic-range scene.
arXiv Detail & Related papers (2024-06-11T15:00:24Z) - Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry
from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields estimated from sparse LDR panoramic images, which increase the observation count for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z) - Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z) - Neural Fields meet Explicit Geometric Representation for Inverse
Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - ENVIDR: Implicit Differentiable Renderer with Neural Environment
Lighting [9.145875902703345]
We introduce ENVIDR, a rendering and modeling framework for high-quality rendering and reconstruction of surfaces with challenging specular reflections.
We first propose a novel neural renderer with decomposed rendering to learn the interaction between surface and environment lighting.
We then propose an SDF-based neural surface model that leverages this learned neural renderer to represent general scenes.
arXiv Detail & Related papers (2023-03-23T04:12:07Z) - MAIR: Multi-view Attention Inverse Rendering with 3D Spatially-Varying
Lighting Estimation [13.325800282424598]
We propose a scene-level inverse rendering framework that uses multi-view images to decompose the scene into geometry, an SVBRDF, and 3D spatially-varying lighting.
Our experiments show that the proposed method not only achieves better performance than single-view-based methods, but also performs robustly on unseen real-world scenes.
arXiv Detail & Related papers (2023-03-22T08:07:28Z) - GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild [74.52723408793648]
We present the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner.
The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images.
Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows.
arXiv Detail & Related papers (2022-11-22T15:42:08Z) - DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z) - PhotoScene: Photorealistic Material and Lighting Transfer for Indoor
Scenes [84.66946637534089]
PhotoScene is a framework that takes input image(s) of a scene and builds a photorealistic digital twin with high-quality materials and similar lighting.
We model scene materials using procedural material graphs; such graphs represent photorealistic and resolution-independent materials.
We evaluate our technique on objects and layout reconstructions from ScanNet, SUN RGB-D and stock photographs, and demonstrate that our method reconstructs high-quality, fully relightable 3D scenes.
arXiv Detail & Related papers (2022-07-02T06:52:44Z) - Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that incorporates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z) - Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light; a minimal sketch of this kind of ray marching follows the list.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
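Several entries above (Neural Reflectance Fields, DynIBaR, the NeRF-based relighting pipeline) build on differentiable ray marching through a volumetric scene representation. The snippet below is a minimal, hypothetical sketch of the standard volume-rendering quadrature that underlies such ray marchers; the function name, sample values, and NumPy implementation are illustrative and not taken from any of the papers.

```python
import numpy as np

def march_ray(densities, colors, deltas):
    """Composite radiance along one ray with standard volume rendering.

    A minimal sketch of differentiable ray marching: given per-sample
    volume densities, emitted colors, and step sizes along a ray,
    accumulate color front to back via alpha compositing. In a neural
    reflectance field, `colors` would come from evaluating reflectance,
    normals, and lighting at each sample rather than being stored directly.
    """
    alphas = 1.0 - np.exp(-densities * deltas)              # opacity per sample
    transmittance = np.cumprod(
        np.concatenate([[1.0], 1.0 - alphas[:-1]]))         # light surviving to each sample
    weights = transmittance * alphas                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)           # final pixel color

# Example: 4 samples along one ray, one RGB color per sample.
densities = np.array([0.0, 0.5, 2.0, 0.1])
colors = np.array([[0.0, 0.0, 0.0],
                   [0.2, 0.4, 0.6],
                   [0.9, 0.5, 0.1],
                   [1.0, 1.0, 1.0]])
deltas = np.full(4, 0.25)
print(march_ray(densities, colors, deltas))
```

Because every operation here is differentiable, gradients from a photometric loss can flow back to the per-sample densities and colors, which is what lets these methods optimize scene parameters directly from posed images.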