Reconstructing Continuous Light Field From Single Coded Image
- URL: http://arxiv.org/abs/2311.09646v1
- Date: Thu, 16 Nov 2023 07:59:01 GMT
- Title: Reconstructing Continuous Light Field From Single Coded Image
- Authors: Yuya Ishikawa, Keita Takahashi, Chihiro Tsutake, and Toshiaki Fujii
- Abstract summary: We propose a method for reconstructing a continuous light field of a target scene from a single observed image.
Joint aperture-exposure coding implemented in a camera enables effective embedding of 3-D scene information into an observed image.
NeRF-based neural rendering enables high quality view synthesis of a 3-D scene from continuous viewpoints.
- Score: 7.937367109582907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a method for reconstructing a continuous light field of a target
scene from a single observed image. Our method takes the best of two worlds:
joint aperture-exposure coding for compressive light-field acquisition, and a
neural radiance field (NeRF) for view synthesis. Joint aperture-exposure coding
implemented in a camera enables effective embedding of 3-D scene information
into an observed image, but in previous works, it was used only for
reconstructing discretized light-field views. NeRF-based neural rendering
enables high quality view synthesis of a 3-D scene from continuous viewpoints,
but when only a single image is given as the input, it struggles to achieve
satisfactory quality. Our method integrates these two techniques into an
efficient and end-to-end trainable pipeline. Trained on a wide variety of
scenes, our method can reconstruct continuous light fields accurately and
efficiently without any test time optimization. To our knowledge, this is the
first work to bridge two worlds: camera design for efficiently acquiring 3-D
information and neural rendering.
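For intuition, the joint aperture-exposure coding described above can be viewed as a forward model in which many light-field views, gated by a per-view and per-sub-exposure code, integrate onto a single 2-D sensor image. The following is a minimal sketch of that measurement process, assuming illustrative array sizes and a random binary code; it is not the authors' implementation.

```python
import numpy as np

# Illustrative sizes (assumptions): 5x5 angular views, 4 temporal sub-exposures,
# and a 64x64 sensor.
U, V, T, H, W = 5, 5, 4, 64, 64

# Light field L(u, v, t, y, x): one image per viewpoint and sub-exposure.
light_field = np.random.rand(U, V, T, H, W)

# Joint aperture-exposure code: a binary gate deciding which viewpoint
# contributes during which sub-exposure.
code = (np.random.rand(U, V, T) > 0.5).astype(np.float64)

# Single coded observation: the gated views integrate onto one 2-D image,
# embedding 3-D (viewpoint) information into a single measurement.
coded_image = np.einsum('uvt,uvthw->hw', code, light_field) / max(code.sum(), 1.0)

print(coded_image.shape)  # (64, 64)
```

The paper's pipeline then learns to invert this many-to-one mapping and to render views from continuous viewpoints.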
Related papers
- Bilateral Guided Radiance Field Processing [4.816861458037213]
Neural Radiance Fields (NeRF) achieve unprecedented performance in novel view synthesis.
The image signal processing (ISP) in modern cameras enhances each captured image independently, leading to "floaters" in the reconstructed radiance fields.
We propose to disentangle the enhancement by ISP at the NeRF training stage and re-apply user-desired enhancements to the reconstructed radiance fields.
We demonstrate that our approach can boost the visual quality of novel view synthesis by effectively removing floaters and applying enhancements from user retouching.
arXiv Detail & Related papers (2024-06-01T14:10:45Z)
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
- Time-Efficient Light-Field Acquisition Using Coded Aperture and Events [16.130950260664285]
Our method applies a sequence of coding patterns during a single exposure for an image frame.
The parallax information, which is related to the differences in coding patterns, is recorded as events.
The image frame and events, all of which are measured in a single exposure, are jointly used to computationally reconstruct a light field.
arXiv Detail & Related papers (2024-03-12T02:04:17Z)
- Progressively Optimized Local Radiance Fields for Robust View Synthesis [76.55036080270347]
We present an algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
For handling unknown poses, we jointly estimate the camera poses and the radiance field in a progressive manner.
For handling large unbounded scenes, we dynamically allocate new local radiance fields trained with frames within a temporal window.
arXiv Detail & Related papers (2023-03-24T04:03:55Z)
- Multi-Plane Neural Radiance Fields for Novel View Synthesis [5.478764356647437]
Novel view synthesis is a long-standing problem that revolves around rendering frames of scenes from novel camera viewpoints.
In this work, we examine the performance, generalization, and efficiency of single-view multi-plane neural radiance fields.
We propose a new multiplane NeRF architecture that accepts multiple views to improve the synthesis results and expand the viewing range.
arXiv Detail & Related papers (2023-03-03T06:32:55Z)
- SceneRF: Self-Supervised Monocular 3D Scene Reconstruction with Radiance Fields [19.740018132105757]
SceneRF is a self-supervised monocular scene reconstruction method using only posed image sequences for training.
At inference, a single input image suffices to hallucinate novel depth views, which are fused together to obtain a 3D scene reconstruction.
arXiv Detail & Related papers (2022-12-05T18:59:57Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Acquiring a Dynamic Light Field through a Single-Shot Coded Image [12.615509935080434]
We propose a method for compressively acquiring a dynamic light field (a 5-D volume) through a single-shot coded image (a 2-D measurement).
We designed an imaging model that synchronously applies aperture coding and pixel-wise exposure coding within a single exposure time.
The observed image is then fed to a convolutional neural network (CNN) for light-field reconstruction, which is jointly trained with the camera-side coding patterns (a toy sketch of this joint training appears after this list).
arXiv Detail & Related papers (2022-04-26T06:00:02Z)
- Enhancement of Novel View Synthesis Using Omnidirectional Image Completion [61.78187618370681]
We present a method for synthesizing novel views from a single 360-degree RGB-D image based on the neural radiance field (NeRF).
Experiments demonstrated that the proposed method can synthesize plausible novel views while preserving the features of the scene for both artificial and real-world data.
arXiv Detail & Related papers (2022-03-18T13:49:25Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- IBRNet: Learning Multi-View Image-Based Rendering [67.15887251196894]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
arXiv Detail & Related papers (2021-02-25T18:56:21Z)
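As mentioned in the "Acquiring a Dynamic Light Field through a Single-Shot Coded Image" entry above, the camera-side coding patterns and the reconstruction CNN can be trained jointly, end to end. The sketch below illustrates only that idea; the network, the per-pixel code parameterization, and all sizes are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class CodedAcquisition(nn.Module):
    """Toy joint model: a learnable per-view, per-pixel code simulates the
    camera-side measurement, and a small CNN reconstructs the views."""

    def __init__(self, n_views: int = 25, height: int = 64, width: int = 64):
        super().__init__()
        # Camera-side code, kept in [0, 1] via a sigmoid at forward time.
        self.code_logits = nn.Parameter(torch.randn(n_views, height, width))
        # Reconstruction CNN: maps the single coded image back to all views.
        self.decoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_views, 3, padding=1),
        )

    def forward(self, light_field: torch.Tensor) -> torch.Tensor:
        # light_field: (batch, n_views, H, W)
        code = torch.sigmoid(self.code_logits)                 # (n_views, H, W)
        coded = (light_field * code).sum(dim=1, keepdim=True)  # one 2-D measurement
        return self.decoder(coded)                             # reconstructed views

model = CodedAcquisition()
views = torch.rand(2, 25, 64, 64)
loss = nn.functional.mse_loss(model(views), views)  # single training signal
loss.backward()                                      # gradients reach code and CNN
```

Because the loss backpropagates through both the decoder and the code, the imaging patterns and the reconstruction network are optimized together, which is the core idea shared by these coded-acquisition papers.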