Time-multiplexed Neural Holography: A flexible framework for holographic
near-eye displays with fast heavily-quantized spatial light modulators
- URL: http://arxiv.org/abs/2205.02367v1
- Date: Thu, 5 May 2022 00:03:50 GMT
- Title: Time-multiplexed Neural Holography: A flexible framework for holographic
near-eye displays with fast heavily-quantized spatial light modulators
- Authors: Suyeon Choi, Manu Gopakumar, Yifan (Evan) Peng, Jonghyun Kim, Matthew
O'Toole, Gordon Wetzstein
- Abstract summary: Holographic near-eye displays offer unprecedented capabilities for virtual and augmented reality systems.
We report advances in camera-calibrated wave propagation models for these types of holographic near-eye displays.
Our framework is flexible in supporting runtime supervision with different types of content, including 2D and 2.5D RGBD images, 3D focal stacks, and 4D light fields.
- Score: 44.73608798155336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Holographic near-eye displays offer unprecedented capabilities for virtual
and augmented reality systems, including perceptually important focus cues.
Although artificial intelligence-driven algorithms for computer-generated
holography (CGH) have recently made much progress in improving the image
quality and synthesis efficiency of holograms, these algorithms are not
directly applicable to emerging phase-only spatial light modulators (SLMs) that
are extremely fast but offer phase control with very limited precision. The
speed of these SLMs offers time multiplexing capabilities, essentially enabling
partially-coherent holographic display modes. Here we report advances in
camera-calibrated wave propagation models for these types of holographic
near-eye displays and we develop a CGH framework that robustly optimizes the
heavily quantized phase patterns of fast SLMs. Our framework is flexible in
supporting runtime supervision with different types of content, including 2D
and 2.5D RGBD images, 3D focal stacks, and 4D light fields. Using our
framework, we demonstrate state-of-the-art results for all of these scenarios
in simulation and experiment.
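The core optimization can be pictured with a short sketch. The following is a minimal illustration, assuming PyTorch; the ideal angular-spectrum transfer function `H`, the frame count, and the quantization levels are placeholders, and the paper's camera-calibrated propagation model is richer than the ideal kernel used here. A straight-through estimator passes gradients through the phase-quantization step, and the loss is applied to the time-averaged intensity of the multiplexed frames:

```python
import torch

def quantize_phase(phi, levels=4):
    """Snap phase to multiples of 2*pi/levels, passing gradients
    straight through the rounding (straight-through estimator)."""
    step = 2 * torch.pi / levels
    q = torch.round(phi / step) * step
    return phi + (q - phi).detach()

def propagate(field, H):
    """Ideal angular-spectrum propagation with a precomputed transfer
    function H; a stand-in for the camera-calibrated model."""
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

def optimize_tm_cgh(target_amp, H, n_frames=8, levels=4, iters=500, lr=0.05):
    """Optimize n_frames quantized phase patterns whose time-averaged
    intensity approximates target_amp**2 (partially coherent mode)."""
    phis = torch.randn(n_frames, *target_amp.shape, requires_grad=True)
    opt = torch.optim.Adam([phis], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        phi_q = quantize_phase(phis, levels)
        fields = propagate(torch.polar(torch.ones_like(phi_q), phi_q), H)
        mean_intensity = (fields.abs() ** 2).mean(dim=0)  # time average
        loss = torch.nn.functional.mse_loss(mean_intensity.sqrt(), target_amp)
        loss.backward()
        opt.step()
    return quantize_phase(phis.detach(), levels)
```

Supervision with 2.5D RGBD, 3D focal-stack, or 4D light-field content would swap in a different loss over the propagated fields; a single 2D amplitude target stands in for all of them here.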
Related papers
- Rapid stochastic spatial light modulator calibration and pixel crosstalk optimisation [0.0]
Accurate calibration of the wavefront and intensity profile of the laser beam at the SLM display is key to the high fidelity of holographic potentials.
Here, we present a new calibration technique that is faster than previous methods while maintaining the same level of accuracy.
This approach allows us to measure the wavefront at the SLM to within $\lambda/170$ in 5 minutes using only 10 SLM phase patterns.
arXiv Detail & Related papers (2024-08-14T17:11:50Z)
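As a generic illustration of the wavefront-fitting step that such calibrations rely on (a sketch only, not the paper's stochastic procedure; the basis matrix is assumed to hold, e.g., Zernike polynomials evaluated at the measurement points):

```python
import numpy as np

def fit_wavefront(phase_samples, basis):
    """Least-squares fit of measured phase samples (shape (N,)) to a
    modal basis evaluated at the same points (shape (N, K)), e.g.
    Zernike polynomials. Returns the reconstructed wavefront."""
    coeffs, *_ = np.linalg.lstsq(basis, phase_samples, rcond=None)
    return basis @ coeffs
```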
- Radiance Fields from Photons [18.15183252935672]
We introduce quanta radiance fields, a class of neural radiance fields that are trained at the granularity of individual photons using single-photon cameras (SPCs).
We demonstrate, both via simulations and prototype SPC hardware, high-fidelity reconstructions under high-speed motion, in low light, and for extreme dynamic range settings.
arXiv Detail & Related papers (2024-07-12T16:06:51Z)
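A common single-photon observation model gives the flavor of training at photon granularity. This is a sketch under the standard Bernoulli assumption p(detect) = 1 - exp(-flux); the paper's exact likelihood may differ:

```python
import torch

def binary_photon_nll(rendered_flux, photon_frames):
    """Negative log-likelihood of binary single-photon frames under a
    Bernoulli model with p(detect) = 1 - exp(-flux); the flux folds in
    exposure time and quantum efficiency. Expects rendered_flux >= 0
    and photon_frames in {0, 1}."""
    p = (1.0 - torch.exp(-rendered_flux)).clamp(1e-6, 1 - 1e-6)
    return -(photon_frames * p.log()
             + (1 - photon_frames) * (1 - p).log()).mean()
```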
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) semantic embedding, introducing the concept of gears to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z)
- Configurable Learned Holography [33.45219677645646]
We introduce a learned model that interactively computes 3D holograms from RGB-only 2D images for a variety of holographic displays.
Our hologram computation exploits the correlation between the depth estimation and 3D hologram synthesis tasks.
arXiv Detail & Related papers (2024-03-24T13:57:30Z)
- PASTA: Towards Flexible and Efficient HDR Imaging Via Progressively Aggregated Spatio-Temporal Alignment [91.38256332633544]
PASTA is a Progressively Aggregated Spatio-Temporal Alignment framework for HDR deghosting.
Our approach achieves effectiveness and efficiency by harnessing hierarchical representation during feature disentanglement.
Experimental results showcase PASTA's superiority over current SOTA methods in both visual quality and performance metrics.
arXiv Detail & Related papers (2024-03-15T15:05:29Z)
- GGRt: Towards Pose-free Generalizable 3D Gaussian Splatting in Real-time [112.32349668385635]
GGRt is a novel approach to generalizable novel view synthesis that alleviates the need for real camera poses.
As the first pose-free generalizable 3D-GS framework, GGRt achieves inference at $\ge$ 5 FPS and real-time rendering at $\ge$ 100 FPS.
arXiv Detail & Related papers (2024-03-15T09:47:35Z)
- A Portable Multiscopic Camera for Novel View and Time Synthesis in Dynamic Scenes [42.00094186447837]
We present a portable multiscopic camera system with a dedicated model for novel view and time synthesis in dynamic scenes.
Our goal is to render high-quality images for a dynamic scene from any viewpoint at any time using our portable multiscopic camera.
arXiv Detail & Related papers (2022-08-30T17:53:17Z)
- Neural Étendue Expander for Ultra-Wide-Angle High-Fidelity Holographic Display [51.399291206537384]
Modern holographic displays possess low étendue, which is the product of the display area and the maximum solid angle of diffracted light.
We present neural étendue expanders, which are learned from a natural image dataset.
With neural étendue expanders, we experimentally achieve 64$\times$ étendue expansion of natural images in full color, expanding the FOV by an order of magnitude horizontally and vertically.
arXiv Detail & Related papers (2021-09-16T17:21:52Z)
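The numbers above are consistent under the standard étendue definition (a small-angle reading, assumed here): étendue is the product of area and solid angle, so a 64-fold expansion at fixed area corresponds to roughly 8-fold angular growth per axis, i.e. about an order of magnitude of FOV both horizontally and vertically:

$$
G = A\,\Omega, \qquad \Omega \propto \theta_x \theta_y
\;\Rightarrow\; \frac{G'}{G} = 64 \text{ at fixed } A
\;\Rightarrow\; \frac{\theta_x'}{\theta_x} = \frac{\theta_y'}{\theta_y} = \sqrt{64} = 8.
$$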
- Learned holographic light transport [2.642698101441705]
Holography algorithms often fall short in matching simulations with results from a physical holographic display.
Our work addresses this mismatch by learning the holographic light transport in holographic displays.
Our method can dramatically improve simulation accuracy and image quality in holographic displays.
arXiv Detail & Related papers (2021-08-01T12:05:33Z)
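A minimal sketch of learning such a transport model, assuming PyTorch; the single complex transfer function and MSE loss are illustrative stand-ins, not the paper's exact model. The idea is to fit a kernel so that simulated reconstructions of displayed phase patterns match camera captures of the physical display:

```python
import torch

def learn_light_transport(phase_patterns, captures, iters=300, lr=0.02):
    """Fit a complex transfer function H = hr + i*hi so that simulated
    reconstructions |ifft2(fft2(exp(i*phi)) * H)|^2 match camera
    captures of the same phase patterns shown on the real display."""
    hr = torch.randn(phase_patterns.shape[-2:], requires_grad=True)
    hi = torch.randn(phase_patterns.shape[-2:], requires_grad=True)
    opt = torch.optim.Adam([hr, hi], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        H = torch.complex(hr, hi)
        fields = torch.fft.ifft2(
            torch.fft.fft2(torch.polar(torch.ones_like(phase_patterns),
                                       phase_patterns)) * H)
        loss = torch.nn.functional.mse_loss(fields.abs() ** 2, captures)
        loss.backward()
        opt.step()
    return torch.complex(hr, hi).detach()
```

Once fitted, the same kernel replaces the ideal propagation model during hologram optimization, which is what closes the simulation-to-display gap.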
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.