Flying with Photons: Rendering Novel Views of Propagating Light
- URL: http://arxiv.org/abs/2404.06493v3
- Date: Thu, 22 Aug 2024 05:43:25 GMT
- Title: Flying with Photons: Rendering Novel Views of Propagating Light
- Authors: Anagh Malik, Noah Juravsky, Ryan Po, Gordon Wetzstein, Kiriakos N. Kutulakos, David B. Lindell
- Abstract summary: We present an imaging and neural rendering technique that seeks to synthesize videos of light propagating through a scene from novel, moving camera viewpoints.
Our approach relies on a new ultrafast imaging setup to capture a first-of-its-kind, multi-viewpoint video dataset with picosecond-level temporal resolution.
- Score: 37.06220870989172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an imaging and neural rendering technique that seeks to synthesize videos of light propagating through a scene from novel, moving camera viewpoints. Our approach relies on a new ultrafast imaging setup to capture a first-of-its-kind, multi-viewpoint video dataset with picosecond-level temporal resolution. Combined with this dataset, we introduce an efficient neural volume rendering framework based on the transient field. This field is defined as a mapping from a 3D point and 2D direction to a high-dimensional, discrete-time signal that represents time-varying radiance at ultrafast timescales. Rendering with transient fields naturally accounts for effects due to the finite speed of light, including viewpoint-dependent appearance changes caused by light propagation delays to the camera. We render a range of complex effects, including scattering, specular reflection, refraction, and diffraction. Additionally, we demonstrate removing viewpoint-dependent propagation delays using a time warping procedure, rendering of relativistic effects, and video synthesis of direct and global components of light transport.
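To make the transient-field formulation concrete, the sketch below shows how such a field could be composited along a camera ray: each sample returns a density and a discrete-time radiance signal tau(x, omega) in R^T, and each sample's signal is shifted by its light travel time to the camera before standard alpha compositing. This is a minimal illustration assuming conventional volume-rendering notation; `sample_fn` and parameters such as `bin_width` are hypothetical stand-ins, not the paper's actual interface.

```python
import numpy as np

C = 0.299792458  # speed of light in metres per nanosecond

def render_transient_ray(origin, direction, sample_fn, n_samples=64,
                         t_bins=2048, near=0.1, far=2.0, bin_width=0.004):
    """Sketch of transient volume rendering along a single ray.

    sample_fn(x, d) -> (sigma, tau) is a hypothetical stand-in for the
    learned transient field: a scalar density sigma and a discrete-time
    radiance signal tau of length t_bins at point x viewed along d.
    bin_width is the duration of one time bin in nanoseconds
    (0.004 ns = 4 ps, an assumed picosecond-scale resolution).
    """
    s = np.linspace(near, far, n_samples)       # sample distances along the ray
    ds = s[1] - s[0]
    out = np.zeros(t_bins)
    transmittance = 1.0
    for si in s:
        x = origin + si * direction
        sigma, tau = sample_fn(x, direction)
        alpha = 1.0 - np.exp(-sigma * ds)
        # Finite speed of light: radiance from a point at distance si reaches
        # the camera si / C nanoseconds later, so shift its signal accordingly.
        shift = int(round(si / C / bin_width))
        delayed = np.roll(tau, shift)
        delayed[:shift] = 0.0                   # avoid wrap-around
        out += transmittance * alpha * delayed
        transmittance *= 1.0 - alpha
    return out
```

In this sketch, the abstract's time-warping procedure would amount to undoing the per-sample shift, i.e. remapping the time axis by t' = t - ||x - o|| / C, so that light appears at the instant it interacts with the scene rather than when it reaches the camera.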
Related papers
- ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion-blurred images.
Our approach consists of two main components: a 6-DOF camera-trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with existing works, our approach restores much sharper 3D scenes with roughly 10 times less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z)
- 3D Gaussian Splatting for Real-Time Radiance Field Rendering [4.320393382724066]
We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times.
We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
arXiv Detail & Related papers (2023-08-08T06:37:06Z)
- 3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Radiance Fields [58.6780687018956]
We present a 3D motion magnification method that can magnify subtle motions from scenes captured by a moving camera.
We represent the scene with time-varying radiance fields and leverage the Eulerian principle for motion magnification.
We evaluate the effectiveness of our method on both synthetic and real-world scenes captured under various camera setups.
arXiv Detail & Related papers (2023-08-07T17:59:59Z)
- Neural Free-Viewpoint Relighting for Glossy Indirect Illumination [44.32630651762033]
We show a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view.
We demonstrate real-time rendering of challenging scenes involving view-dependent reflections and even caustics.
arXiv Detail & Related papers (2023-07-12T17:56:09Z)
- Unsupervised Discovery and Composition of Object Light Fields [57.198174741004095]
We propose to represent objects in an object-centric, compositional scene representation as light fields.
We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields.
arXiv Detail & Related papers (2022-05-08T17:50:35Z)
- Single-Photon Structured Light [31.614032717665832]
"Single-Photon Structured Light" works by sensing binary images that indicates the presence or absence of photon arrivals during each exposure.
We develop novel temporal sequences using error correction codes that are designed to be robust to short-range effects like projector and camera defocus.
Our lab prototype is capable of 3D imaging in challenging scenarios involving objects with extremely low albedo or undergoing fast motion.
arXiv Detail & Related papers (2022-04-11T17:57:04Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- Learning Neural Transmittance for Efficient Rendering of Reflectance Fields [43.24427791156121]
We propose a novel method based on precomputed Neural Transmittance Functions to accelerate rendering of neural reflectance fields.
Results on real and synthetic scenes demonstrate an almost two-orders-of-magnitude speedup for renderings under environment maps with minimal accuracy loss.
arXiv Detail & Related papers (2021-10-25T21:12:25Z)
- Crowdsampling the Plenoptic Function [56.10020793913216]
We present a new approach to novel view synthesis under time-varying illumination from crowdsampled image collections.
We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function.
Our method can synthesize the same compelling parallax and view-dependent effects as previous MPI methods, while simultaneously interpolating along changes in reflectance and illumination with time.
arXiv Detail & Related papers (2020-07-30T02:52:10Z)