Stochastic Light Field Holography
- URL: http://arxiv.org/abs/2307.06277v1
- Date: Wed, 12 Jul 2023 16:20:08 GMT
- Title: Stochastic Light Field Holography
- Authors: Florian Schiffers, Praneeth Chakravarthula, Nathan Matsuda, Grace Kuo,
Ethan Tseng, Douglas Lanman, Felix Heide, Oliver Cossairt
- Abstract summary: The Visual Turing Test is the ultimate benchmark for evaluating the realism of holographic displays.
Previous studies have focused on addressing challenges such as limited étendue and image quality over a large focal volume.
We tackle this problem with a novel hologram generation algorithm motivated by matching the projection operators of incoherent Light Field and coherent Wigner Function light transport.
- Score: 35.73147050231529
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The Visual Turing Test is the ultimate benchmark for evaluating the realism of
holographic displays. Previous studies have focused on addressing challenges
such as limited étendue and image quality over a large focal volume, but they
have not investigated the effect of pupil sampling on the viewing experience in
full 3D holograms. In this work, we tackle this problem with a novel hologram
generation algorithm motivated by matching the projection operators of
incoherent Light Field and coherent Wigner Function light transport. To this
end, we supervise hologram computation using synthesized photographs, which are
rendered on-the-fly using Light Field refocusing from stochastically sampled
pupil states during optimization. The proposed method produces holograms with
correct parallax and focus cues, which are important for passing the Visual
Turing Test. We validate that our approach compares favorably to
state-of-the-art CGH algorithms that use Light Field and Focal Stack
supervision. Our experiments demonstrate that our algorithm significantly
improves the realism of the viewing experience for a variety of different pupil
states.
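The core idea above, supervising hologram optimization with photographs synthesized on-the-fly by refocusing a light field through stochastically sampled pupil states, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`circular_pupil`, `refocus`), the shift-and-add refocusing model, and the toy dimensions are all assumptions, and the coherent hologram forward model that these photographs would supervise is omitted.

```python
import numpy as np

def circular_pupil(n_u, n_v, center, radius):
    """Binary aperture mask over the (u, v) angular grid of the light field."""
    uu, vv = np.meshgrid(np.arange(n_u), np.arange(n_v), indexing="ij")
    return (uu - center[0]) ** 2 + (vv - center[1]) ** 2 <= radius ** 2

def refocus(light_field, slope, pupil):
    """Shift-and-add refocusing: average the pupil-selected angular views,
    each shifted in proportion to its angular offset. `slope` selects the
    focal plane; `pupil` restricts integration to the sampled eye pupil."""
    n_u, n_v, h, w = light_field.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    photo = np.zeros((h, w))
    count = 0
    for u in range(n_u):
        for v in range(n_v):
            if not pupil[u, v]:
                continue
            dy = int(round((u - cu) * slope))
            dx = int(round((v - cv) * slope))
            photo += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
            count += 1
    return photo / max(count, 1)

rng = np.random.default_rng(0)
lf = rng.random((5, 5, 32, 32))          # toy 4D light field (u, v, y, x)
for step in range(3):                    # stochastic pupil states per iteration
    center = rng.integers(1, 4, size=2)  # random pupil position on the grid
    pupil = circular_pupil(5, 5, center, radius=1.5)
    slope = rng.uniform(-1.0, 1.0)       # random focus state
    target = refocus(lf, slope, pupil)   # supervision photograph
    # in the paper's setting, `target` would supervise the image produced by
    # coherently propagating the hologram through the same pupil/focus state
```

In each optimization step, a fresh pupil position and focus state are drawn, so the hologram is pushed to match incoherent light-field projections across the whole space of viewing conditions rather than a single fixed focal stack.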
Related papers
- Low-Light Enhancement Effect on Classification and Detection: An Empirical Study [48.6762437869172]
We evaluate the impact of Low-Light Image Enhancement (LLIE) methods on high-level vision tasks.
Our findings suggest a disconnect between image enhancement for human visual perception and for machine analysis.
This insight is crucial for the development of LLIE techniques that align with the needs of both human and machine vision.
arXiv Detail & Related papers (2024-09-22T14:21:31Z)
- Pupil-Adaptive 3D Holography Beyond Coherent Depth-of-Field [42.427021878005405]
We propose a framework that bridges the gap between the coherent depth-of-field of holographic displays and what is seen in the real world due to incoherent light.
We introduce a learning framework that adjusts the receptive fields on the fly based on the current state of the observer's eye pupil, producing image effects that are not possible in current computer-generated holography approaches.
arXiv Detail & Related papers (2024-08-17T11:01:54Z)
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components.
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
- Holo-VQVAE: VQ-VAE for phase-only holograms [1.534667887016089]
Holography stands at the forefront of visual technology innovation, offering immersive, three-dimensional visualizations through the manipulation of light wave amplitude and phase.
Modern research in hologram generation has predominantly focused on image-to-hologram conversion, producing holograms from existing images.
We present Holo-VQVAE, a novel generative framework tailored for phase-only holograms (POHs).
arXiv Detail & Related papers (2024-03-29T15:27:28Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Time-multiplexed Neural Holography: A flexible framework for holographic near-eye displays with fast heavily-quantized spatial light modulators [44.73608798155336]
Holographic near-eye displays offer unprecedented capabilities for virtual and augmented reality systems.
We report advances in camera-calibrated wave propagation models for these types of holographic near-eye displays.
Our framework is flexible in supporting runtime supervision with different types of content, including 2D and 2.5D RGBD images, 3D focal stacks, and 4D light fields.
arXiv Detail & Related papers (2022-05-05T00:03:50Z)
- Learned holographic light transport [2.642698101441705]
Holography algorithms often fall short in matching simulations with results from a physical holographic display.
Our work addresses this mismatch by learning the holographic light transport in holographic displays.
Our method can dramatically improve simulation accuracy and image quality in holographic displays.
arXiv Detail & Related papers (2021-08-01T12:05:33Z)
- A Study on Visual Perception of Light Field Content [19.397619552417986]
We present a visual attention study on light field content.
We conducted perception experiments displaying them to users in various ways.
Our analysis highlights characteristics of user behaviour in light field imaging applications.
arXiv Detail & Related papers (2020-08-07T14:23:27Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.