Neural Light Spheres for Implicit Image Stitching and View Synthesis
- URL: http://arxiv.org/abs/2409.17924v2
- Date: Thu, 3 Oct 2024 22:57:06 GMT
- Title: Neural Light Spheres for Implicit Image Stitching and View Synthesis
- Authors: Ilya Chugunov, Amogh Joshi, Kiran Murthy, Francois Bleibel, Felix Heide
- Abstract summary: Spherical neural light field model for implicit panoramic image stitching and re-rendering.
We show improved reconstruction quality over traditional image stitching and radiance field methods.
- Score: 32.396278546192995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Challenging to capture, and challenging to display on a cellphone screen, the panorama paradoxically remains both a staple and underused feature of modern mobile camera applications. In this work we address both of these challenges with a spherical neural light field model for implicit panoramic image stitching and re-rendering; able to accommodate for depth parallax, view-dependent lighting, and local scene motion and color changes during capture. Fit during test-time to an arbitrary path panoramic video capture -- vertical, horizontal, random-walk -- these neural light spheres jointly estimate the camera path and a high-resolution scene reconstruction to produce novel wide field-of-view projections of the environment. Our single-layer model avoids expensive volumetric sampling, and decomposes the scene into compact view-dependent ray offset and color components, with a total model size of 80 MB per scene, and real-time (50 FPS) rendering at 1080p resolution. We demonstrate improved reconstruction quality over traditional image stitching and radiance field methods, with significantly higher tolerance to scene motion and non-ideal capture settings.
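The abstract describes a single-layer model that avoids volumetric sampling and decomposes the scene into view-dependent ray offset and color components. As a rough illustration of that idea, the sketch below evaluates each camera ray once against a reference sphere, with one small MLP predicting a parallax-like offset and another predicting color; the layer sizes, encodings, and sphere parameterization are illustrative assumptions rather than the paper's actual architecture.

```python
# Minimal sketch of a "light sphere" style model: each camera ray is evaluated
# once (no volumetric sampling). One MLP predicts a view-dependent offset of the
# ray's intersection point on a reference sphere, a second MLP predicts color.
# Names, sizes, and encodings here are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, n_freqs: int = 6) -> torch.Tensor:
    """Standard sin/cos frequency encoding of the inputs."""
    freqs = 2.0 ** torch.arange(n_freqs, device=x.device) * torch.pi
    angles = x[..., None] * freqs                 # (..., D, n_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(-2)                        # (..., D * 2 * n_freqs)


class NeuralLightSphere(nn.Module):
    def __init__(self, hidden: int = 128, n_freqs: int = 6):
        super().__init__()
        in_dim = 3 * 2 * n_freqs                  # encoded sphere point
        dir_dim = 3 * 2 * n_freqs                 # encoded view direction
        self.n_freqs = n_freqs
        self.offset_mlp = nn.Sequential(          # view-dependent ray offset
            nn.Linear(in_dim + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))
        self.color_mlp = nn.Sequential(           # view-dependent color
            nn.Linear(in_dim + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, origins: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
        """origins, dirs: (N, 3) camera rays; returns (N, 3) RGB."""
        dirs = nn.functional.normalize(dirs, dim=-1)
        # Intersect each ray with a unit reference sphere centered at the origin:
        # solve ||o + t d|| = 1 and take the forward root.
        b = (origins * dirs).sum(-1)
        c = (origins * origins).sum(-1) - 1.0
        t = -b + torch.sqrt(torch.clamp(b * b - c, min=1e-8))
        sphere_pts = origins + t[:, None] * dirs
        feat = torch.cat([positional_encoding(sphere_pts, self.n_freqs),
                          positional_encoding(dirs, self.n_freqs)], dim=-1)
        # Offset the sphere point to model parallax, then query color there.
        shifted = sphere_pts + self.offset_mlp(feat)
        feat2 = torch.cat([positional_encoding(shifted, self.n_freqs),
                           positional_encoding(dirs, self.n_freqs)], dim=-1)
        return self.color_mlp(feat2)


# Usage: render a batch of rays in a single forward pass.
model = NeuralLightSphere()
origins = torch.zeros(1024, 3)                    # rays from near the sphere center
dirs = torch.randn(1024, 3)
rgb = model(origins, dirs)                        # (1024, 3), one query per ray
```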
Related papers
- Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field [23.92087253022495]
Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos.
Their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis.
We present a lightweight analytical approach based on 3D Gaussian Splatting that uses multi-view LDR images captured at varying exposure times, apertures, and focus distances as input to reconstruct a high-dynamic-range scene.
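For background on the HDR aspect mentioned above (and not as a description of the paper's 3D Gaussian Splatting pipeline), here is a minimal sketch of the standard weighted multi-exposure merge that recovers linear HDR radiance from aligned LDR frames, assuming known exposure times and a simple gamma-2.2 response.

```python
# Illustrative only: classic weighted multi-exposure merge into linear HDR
# radiance, NOT the 3D Gaussian Splatting reconstruction described in the paper.
# Assumes aligned frames, known exposure times, and an sRGB-like gamma of 2.2.
import numpy as np


def merge_ldr_to_hdr(ldr_frames: np.ndarray, exposure_times: np.ndarray,
                     gamma: float = 2.2) -> np.ndarray:
    """ldr_frames: (K, H, W, 3) in [0, 1]; exposure_times: (K,) seconds."""
    linear = ldr_frames ** gamma                       # undo display gamma
    # Hat weighting: trust mid-tones, down-weight clipped/noisy pixels.
    weights = np.clip(1.0 - np.abs(2.0 * ldr_frames - 1.0), 1e-4, None)
    # Each frame estimates radiance as linear value / exposure time.
    estimates = linear / exposure_times[:, None, None, None]
    hdr = (weights * estimates).sum(axis=0) / weights.sum(axis=0)
    return hdr                                         # (H, W, 3) linear radiance


# Usage with three synthetic exposures of the same scene.
times = np.array([1 / 100, 1 / 25, 1 / 6])
frames = np.random.rand(3, 64, 64, 3)                  # stand-in for captures
radiance = merge_ldr_to_hdr(frames, times)
```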
arXiv Detail & Related papers (2024-06-11T15:00:24Z) - Pix2HDR -- A pixel-wise acquisition and deep learning-based synthesis approach for high-speed HDR videos [2.275097126764287]
High-speed high dynamic range (HDR) video is challenging because the camera's frame rate restricts its dynamic range.
Existing methods sacrifice speed to acquire multi-exposure frames, yet misaligned motion in these frames can still pose challenges for HDR fusion algorithms.
Our method greatly enhances the vision system's adaptability and performance in dynamic conditions.
arXiv Detail & Related papers (2023-10-24T19:27:35Z) - ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with the existing works, our approach restores much sharper 3D scenes with the order of 10 times less training time and GPU memory consumption.
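The core of trajectory-based blur formulations like the one summarized above is that a blurred frame can be modeled as the average of sharp renders at poses sampled along the camera path during the exposure. A minimal sketch follows, with a toy `render_sharp` stand-in for the radiance-field renderer; the trajectory parameterization and the voxel-based field are the paper's contributions and are not reproduced here.

```python
# Sketch of a trajectory-based blur model: a blurred frame is approximated by
# averaging sharp renders at poses sampled along the exposure-time trajectory.
# `render_sharp` is a toy stand-in for a radiance-field renderer.
import torch


def render_sharp(pose: torch.Tensor) -> torch.Tensor:
    """Toy renderer: a bright vertical stripe whose position shifts with the
    camera's x-translation, standing in for a radiance-field render."""
    h = w = 32
    img = torch.zeros(h, w, 3)
    col = int((0.5 + pose[0, 3]) * w) % w
    img[:, col, :] = 1.0
    return img


def render_blurred(trajectory: list[torch.Tensor]) -> torch.Tensor:
    """Average sharp renders over poses sampled along the camera trajectory."""
    frames = torch.stack([render_sharp(pose) for pose in trajectory])
    return frames.mean(dim=0)


# Usage: sample a few poses along a small translation during the exposure.
base = torch.eye(4)
poses = []
for s in torch.linspace(0.0, 0.05, steps=8):
    p = base.clone()
    p[0, 3] = s                       # translate along x during the exposure
    poses.append(p)
blurred = render_blurred(poses)       # (32, 32, 3) motion-blurred frame
```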
arXiv Detail & Related papers (2023-09-16T11:17:25Z) - Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses with radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
arXiv Detail & Related papers (2023-03-24T04:03:55Z) - SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z) - DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
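A hedged sketch of the generic image-based rendering step referenced above: project a query point into nearby source views, bilinearly sample their feature maps, and blend. The pinhole projection and mean-pooling used here are simplifying assumptions, not DynIBaR's motion-adjusted aggregation.

```python
# Sketch of image-based feature aggregation: project a 3D point into nearby
# source views, sample their feature maps, and blend. Camera model and blend
# weights are simplified assumptions.
import torch
import torch.nn.functional as F


def project(points: torch.Tensor, K: torch.Tensor, w2c: torch.Tensor) -> torch.Tensor:
    """Project (N, 3) world points to (N, 2) pixel coords for one source view."""
    pts_h = torch.cat([points, torch.ones_like(points[:, :1])], dim=-1)  # (N, 4)
    cam = (w2c @ pts_h.T).T[:, :3]                                       # (N, 3)
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)


def aggregate_features(points: torch.Tensor, feat_maps: torch.Tensor,
                       Ks: torch.Tensor, w2cs: torch.Tensor) -> torch.Tensor:
    """Mean-pool bilinearly sampled features over all source views."""
    v, c, h, w = feat_maps.shape
    n = points.shape[0]
    samples = []
    for i in range(v):
        uv = project(points, Ks[i], w2cs[i])
        # Convert pixel coordinates to grid_sample's [-1, 1] normalized range.
        grid = torch.stack([uv[:, 0] / (w - 1), uv[:, 1] / (h - 1)], dim=-1) * 2 - 1
        feat = F.grid_sample(feat_maps[i:i + 1], grid.view(1, n, 1, 2),
                             align_corners=True)              # (1, C, N, 1)
        samples.append(feat[0, :, :, 0].T)                    # (N, C)
    return torch.stack(samples).mean(dim=0)                   # (N, C)


# Usage: 2 source views with 16-channel feature maps, 5 query points.
feats = torch.rand(2, 16, 48, 64)
Ks = torch.eye(3).expand(2, 3, 3)
w2cs = torch.eye(4).expand(2, 4, 4)
pts = torch.rand(5, 3) + torch.tensor([0.0, 0.0, 2.0])        # points in front
out = aggregate_features(pts, feats, Ks, w2cs)                 # (5, 16)
```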
arXiv Detail & Related papers (2022-11-20T20:57:02Z) - Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields [43.69542675078766]
We present an extension of mip-NeRF that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes.
Our model, which we dub "mip-NeRF 360," reduces mean-squared error by 54% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps.
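One widely used form of such a non-linear parameterization maps unbounded scenes into a ball of radius 2, leaving the unit ball untouched and squashing distant points inward. The point-wise sketch below is a simplification; the paper applies its contraction to Gaussian regions rather than individual points.

```python
# Sketch of a non-linear scene contraction for unbounded scenes: points inside
# the unit ball are untouched, far-away points are squashed to radius < 2.
import torch


def contract(x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Map R^3 into a ball of radius 2: x if ||x|| <= 1, else (2 - 1/||x||) * x/||x||."""
    norm = x.norm(dim=-1, keepdim=True).clamp(min=eps)
    contracted = (2.0 - 1.0 / norm) * (x / norm)
    return torch.where(norm <= 1.0, x, contracted)


# Usage: nearby points are unchanged, distant points land just inside radius 2.
pts = torch.tensor([[0.2, 0.0, 0.0], [10.0, 0.0, 0.0], [1000.0, 0.0, 0.0]])
print(contract(pts))   # ~[0.2, 0, 0], [1.9, 0, 0], [1.999, 0, 0]
```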
arXiv Detail & Related papers (2021-11-23T18:51:18Z) - TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
arXiv Detail & Related papers (2021-09-30T17:12:59Z) - Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
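As context for the ray-marching component mentioned above, here is a minimal sketch of standard differentiable emission-absorption compositing along a ray; the paper's reflectance-and-light-aware shading of each sample is omitted and would replace the per-sample colors used here.

```python
# Sketch of standard differentiable ray marching (emission-absorption model):
# per-sample densities and colors are alpha-composited along each ray. The
# paper additionally shades samples from reflectance, normals, and a light
# source; that shading step is omitted in this sketch.
import torch


def composite(densities: torch.Tensor, colors: torch.Tensor,
              deltas: torch.Tensor) -> torch.Tensor:
    """densities: (R, S), colors: (R, S, 3), deltas: (R, S) sample spacings."""
    alpha = 1.0 - torch.exp(-densities * deltas)                     # (R, S)
    # Transmittance: probability that light survives all previous samples.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1]], dim=1),
        dim=1)
    weights = alpha * trans                                           # (R, S)
    return (weights[..., None] * colors).sum(dim=1)                   # (R, 3)


# Usage: 4 rays, 64 samples each, with MLP outputs replaced by random values.
R, S = 4, 64
sigma = torch.rand(R, S) * 5.0
rgb = torch.rand(R, S, 3)
dt = torch.full((R, S), 0.02)
pixels = composite(sigma, rgb, dt)          # (4, 3) composited colors
```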
arXiv Detail & Related papers (2020-08-09T22:04:36Z)