Inverse Image-Based Rendering for Light Field Generation from Single Images
- URL: http://arxiv.org/abs/2510.20132v1
- Date: Thu, 23 Oct 2025 02:12:45 GMT
- Title: Inverse Image-Based Rendering for Light Field Generation from Single Images
- Authors: Hyunjun Jung, Hae-Gon Jeon
- Abstract summary: We propose a novel view synthesis method for light field generation from only single images, named inverse image-based rendering. Our method reconstructs light flows in a space from image pixels, which behaves in the opposite way to image-based rendering. Our neural renderer first stores the light flow of source rays from the input image, then computes the relationships among them through cross-attention.
- Score: 30.856397422416517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The concept of light fields computed from multiple view images on regular grids has proven its benefit for scene representation, supporting realistic rendering of novel views and photographic effects such as refocusing and shallow depth of field. Despite their effectiveness for light-flow computation, obtaining light fields requires either heavy computation or specialized devices such as a bulky camera setup or a microlens array. In an effort to broaden their benefit and applicability, in this paper we propose a novel view synthesis method for light field generation from only single images, named inverse image-based rendering. Unlike previous attempts to implicitly rebuild 3D geometry or to explicitly represent the scene, our method reconstructs light flows in a space from image pixels, which behaves in the opposite way to image-based rendering. To accomplish this, we design a neural rendering pipeline that renders a target ray at an arbitrary viewpoint. Our neural renderer first stores the light flow of source rays from the input image, then computes the relationships among them through cross-attention, and finally predicts the color of the target ray based on these relationships. After the rendering pipeline generates the first novel view from a single input image, the generated out-of-view contents are added to the set of source rays. This procedure is performed iteratively while ensuring consistent generation of occluded contents. We demonstrate that our inverse image-based rendering works well on various challenging datasets without any retraining or fine-tuning once trained on a synthetic dataset, and that it outperforms relevant state-of-the-art novel view synthesis methods.
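The pipeline the abstract describes (embed source rays, relate a target ray to them via cross-attention, decode a color) maps naturally onto a standard attention module. Below is a minimal sketch of that reading in PyTorch; it is not the authors' implementation, and the class name, the 6D ray parameterization, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class RayRenderer(nn.Module):
    """Hypothetical sketch: predict a target ray's color by cross-attending
    over stored source rays (illustrative, not the paper's architecture)."""

    def __init__(self, embed_dim=128, num_heads=4):
        super().__init__()
        self.tgt_embed = nn.Linear(6, embed_dim)        # ray as (origin, direction)
        self.src_embed = nn.Linear(6 + 3, embed_dim)    # source rays also carry RGB
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.to_rgb = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, target_rays, src_rays, src_colors):
        # target_rays: (B, Nt, 6); src_rays: (B, Ns, 6); src_colors: (B, Ns, 3)
        q = self.tgt_embed(target_rays)                              # queries: target rays
        kv = self.src_embed(torch.cat([src_rays, src_colors], -1))  # keys/values: source rays
        attended, _ = self.cross_attn(q, kv, kv)                     # relate target to sources
        return torch.sigmoid(self.to_rgb(attended))                  # predicted RGB in [0, 1]
```

The iterative step in the abstract would then append the rays of each newly generated view to `src_rays`/`src_colors` before the next viewpoint is rendered, so occluded content stays consistent across views.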
Related papers
- 3DPR: Single Image 3D Portrait Relight using Generative Priors [101.74130664920868]
3DPR is an image-based relighting model that leverages generative priors learnt from multi-view One-Light-at-A-Time (OLAT) images.
We leverage the latent space of a pre-trained generative head model that provides a rich prior over face geometry learnt from in-the-wild image datasets.
Our reflectance network operates in the latent space of the generative head model, crucially enabling a relatively small number of lightstage images to train the reflectance model.
arXiv Detail & Related papers (2025-10-17T17:37:42Z)
- Materialist: Physically Based Editing Using Single-Image Inverse Rendering [47.85234717907478]
Materialist is a method combining a learning-based approach with physically based progressive differentiable rendering.
Our approach enables a range of applications, including material editing, object insertion, and relighting.
Experiments demonstrate strong performance across synthetic and real-world datasets.
arXiv Detail & Related papers (2025-01-07T11:52:01Z)
- IllumiNeRF: 3D Relighting Without Inverse Rendering [25.642960820693947]
We show how to relight each input image using an image diffusion model conditioned on target environment lighting and estimated object geometry.
We reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting.
We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks.
arXiv Detail & Related papers (2024-06-10T17:59:59Z)
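The IllumiNeRF entry above amounts to a three-stage strategy: relight every input view with a diffusion model conditioned on lighting and geometry, distill the relit images into a NeRF, and render novel views under the target lighting. A high-level sketch of that control flow, where every callable is a hypothetical placeholder rather than a published API:

```python
def illuminerf_style_relighting(images, poses, target_light, novel_poses,
                                estimate_geometry, relight_diffusion,
                                fit_nerf, render_view):
    """Sketch of the three-stage strategy summarized above; all callables
    are hypothetical stand-ins, not the paper's code."""
    geometry = estimate_geometry(images, poses)
    # 1) Relight each input view independently, conditioned on the target
    #    environment lighting and the estimated object geometry.
    relit = [relight_diffusion(img, target_light, geometry) for img in images]
    # 2) Distill the (possibly inconsistent) relit images into one NeRF.
    nerf = fit_nerf(relit, poses)
    # 3) Render novel views of the relit scene.
    return [render_view(nerf, p) for p in novel_poses]
```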
- Learning to Render Novel Views from Wide-Baseline Stereo Pairs [26.528667940013598]
We introduce a method for novel view synthesis given only a single wide-baseline stereo image pair.
Existing approaches to novel view synthesis from sparse observations fail because they recover incorrect 3D geometry.
We propose an efficient, image-space epipolar line sampling scheme to assemble image features for a target ray.
arXiv Detail & Related papers (2023-04-17T17:40:52Z)
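The image-space epipolar sampling mentioned in the entry above has a compact geometric core: points sampled along the target ray project into a source view along that ray's epipolar line, and features are gathered at those projections. A minimal NumPy sketch under assumed pinhole conventions; none of the names below come from the paper.

```python
import numpy as np

def sample_epipolar_features(feat_src, K, R, t, ray_o, ray_d, depths):
    """Gather source-view features along the epipolar line of a target ray.

    feat_src: (H, W, C) source-view feature map
    K: (3, 3) source intrinsics; R, t: world-to-source rotation/translation
    ray_o, ray_d: (3,) target-ray origin and direction in world coordinates
    depths: (N,) sample depths along the target ray
    """
    # 3D points along the target ray; their projections into the source
    # view trace that ray's epipolar line.
    pts = ray_o[None, :] + depths[:, None] * ray_d[None, :]   # (N, 3)
    cam = (R @ pts.T + t[:, None]).T                          # points in source frame
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)          # (N, 2) pixel coords
    # Nearest-neighbor lookup for brevity (bilinear in practice).
    H, W, _ = feat_src.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    return feat_src[v, u]                                     # (N, C) features for the ray
```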
- NDJIR: Neural Direct and Joint Inverse Rendering for Geometry, Lights, and Materials of Real Object [5.665283675533071]
We propose neural direct and joint inverse rendering, NDJIR.
Our proposed method achieves semantically meaningful decomposition of real objects in a photogrammetric setting.
arXiv Detail & Related papers (2023-02-02T13:21:03Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
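The joint optimization in the entry above follows the usual analysis-by-synthesis recipe: render with the current material and lighting estimates, compare against the photographs, and backpropagate through a differentiable renderer. A generic sketch of that loop, with `render_fn` as a hypothetical differentiable renderer standing in for the paper's pipeline:

```python
import torch
import torch.nn.functional as F

def optimize_materials_and_lighting(render_fn, observations, n_iters=1000):
    """Generic analysis-by-synthesis loop (illustrative, not the paper's code).

    render_fn: hypothetical differentiable renderer(material, env_light, pose)
    observations: iterable of (camera pose, reference image) pairs
    """
    # Spatially-varying material texture (albedo here) and an environment map.
    material = torch.rand(512, 512, 3, requires_grad=True)
    env_light = torch.rand(64, 128, 3, requires_grad=True)
    opt = torch.optim.Adam([material, env_light], lr=1e-2)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = sum(F.mse_loss(render_fn(material, env_light, pose), target)
                   for pose, target in observations)
        loss.backward()   # gradients flow through the differentiable renderer
        opt.step()
    return material.detach(), env_light.detach()
```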
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- IBRNet: Learning Multi-View Image-Based Rendering [67.15887251196894]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
arXiv Detail & Related papers (2021-02-25T18:56:21Z)
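The view interpolation in the IBRNet entry above can be reduced to one ray's worth of computation: sample a color and a feature from each nearby source view, score each view with a small network, and blend. A minimal sketch in a generic image-based-rendering style, not IBRNet's exact architecture:

```python
import torch

def blend_source_views(src_colors, src_feats, weight_mlp):
    """Blend per-view colors for one target ray with learned weights.

    src_colors: (V, 3) color sampled from each of V source views
    src_feats:  (V, F) matching image features
    weight_mlp: hypothetical network mapping features to scalar logits
    """
    logits = weight_mlp(src_feats)          # (V, 1) per-view scores
    w = torch.softmax(logits, dim=0)        # normalized blending weights
    return (w * src_colors).sum(dim=0)      # (3,) interpolated color
```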
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
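The Neural Reflectance Fields entry above pairs a per-point scene representation (density, normal, reflectance) with physically-based ray marching. A minimal sketch of that combination, using a Lambertian term as a stand-in for the paper's full reflectance model; `field` and all shapes below are assumptions.

```python
import torch

def march_reflectance_ray(field, ray_o, ray_d, light_dir, n_samples=64, far=4.0):
    """Render one ray through a hypothetical reflectance field.

    field: MLP mapping (N, 3) points to density (N, 1), normal (N, 3),
           and albedo (N, 3); a stand-in for the paper's representation.
    """
    ts = torch.linspace(0.0, far, n_samples)
    pts = ray_o + ts[:, None] * ray_d                 # (N, 3) samples along the ray
    sigma, normal, albedo = field(pts)                # per-point scene properties
    # Lambertian shading per sample (the paper uses a richer model).
    shading = albedo * torch.clamp((normal * light_dir).sum(-1, keepdim=True), min=0.0)
    # Standard volume-rendering weights: alpha compositing along the ray.
    delta = far / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)           # (N, 1)
    trans = torch.cumprod(torch.cat([torch.ones(1, 1), 1.0 - alpha[:-1]], dim=0), dim=0)
    weights = alpha * trans                           # (N, 1) contribution per sample
    return (weights * shading).sum(dim=0)             # final RGB for the ray
```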