Neural Étendue Expander for Ultra-Wide-Angle High-Fidelity Holographic Display
- URL: http://arxiv.org/abs/2109.08123v4
- Date: Sat, 27 Apr 2024 00:00:29 GMT
- Title: Neural Étendue Expander for Ultra-Wide-Angle High-Fidelity Holographic Display
- Authors: Ethan Tseng, Grace Kuo, Seung-Hwan Baek, Nathan Matsuda, Andrew Maimone, Florian Schiffers, Praneeth Chakravarthula, Qiang Fu, Wolfgang Heidrich, Douglas Lanman, Felix Heide
- Abstract summary: Modern holographic displays possess low étendue, which is the product of the display area and the maximum solid angle of diffracted light.
We present neural étendue expanders, which are learned from a natural image dataset.
With neural étendue expanders, we experimentally achieve 64× étendue expansion of natural images in full color, expanding the FOV by an order of magnitude horizontally and vertically.
- Score: 51.399291206537384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Holographic displays can generate light fields by dynamically modulating the wavefront of a coherent beam of light using a spatial light modulator, promising rich virtual and augmented reality applications. However, the limited spatial resolution of existing dynamic spatial light modulators imposes a tight bound on the diffraction angle. As a result, modern holographic displays possess low étendue, which is the product of the display area and the maximum solid angle of diffracted light. The low étendue forces a sacrifice of either the field-of-view (FOV) or the display size. In this work, we lift this limitation by presenting neural étendue expanders. This new breed of optical elements, which is learned from a natural image dataset, enables higher diffraction angles for ultra-wide FOV while maintaining both a compact form factor and the fidelity of displayed contents to human viewers. With neural étendue expanders, we experimentally achieve 64× étendue expansion of natural images in full color, expanding the FOV by an order of magnitude horizontally and vertically, with high-fidelity reconstruction quality (measured in PSNR) over 29 dB on retinal-resolution images.
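For intuition, the étendue bound described in the abstract can be made concrete with a short back-of-envelope calculation. The Python sketch below is a minimal illustration, not the paper's method: the SLM pixel pitch, wavelength, and resolution are assumed values typical of phase-only SLMs, and the Nyquist grating limit sin θ = λ / (2p) is used for the maximum diffraction half-angle.

```python
import math

# Hypothetical SLM parameters (illustrative assumptions, not from the paper)
pixel_pitch = 8e-6          # 8 um pixel pitch, typical of phase-only SLMs
wavelength = 532e-9         # green laser, 532 nm
n_pixels = (1920, 1080)     # SLM resolution

# Maximum diffraction half-angle from the Nyquist grating limit:
#   sin(theta_max) = wavelength / (2 * pixel_pitch)
theta_max = math.asin(wavelength / (2 * pixel_pitch))

# Etendue = display area x solid angle of the diffraction cone
area = (n_pixels[0] * pixel_pitch) * (n_pixels[1] * pixel_pitch)  # m^2
solid_angle = 2 * math.pi * (1 - math.cos(theta_max))             # sr
etendue = area * solid_angle

print(f"half-angle: {math.degrees(theta_max):.2f} deg")
print(f"etendue:    {etendue:.3e} m^2 sr")

# A 64x etendue expansion at fixed display area enlarges the solid angle
# 64x; for small angles that is roughly an 8x wider FOV per axis.
expanded_solid_angle = 64 * solid_angle
theta_expanded = math.acos(1 - expanded_solid_angle / (2 * math.pi))
print(f"expanded half-angle: {math.degrees(theta_expanded):.2f} deg")
```

With these assumed numbers the diffraction half-angle is about 1.9°, and a 64× étendue expansion at fixed display area corresponds to roughly an 8× wider FOV per axis, in line with the order-of-magnitude expansion quoted in the abstract.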
Related papers
- HoloChrome: Polychromatic Illumination for Speckle Reduction in Holographic Near-Eye Displays [8.958725481270807]
Holographic displays hold the promise of providing authentic depth cues, resulting in enhanced immersive visual experiences for near-eye applications.
Current holographic displays are hindered by speckle noise, which limits accurate reproduction of color and texture in displayed images.
We present HoloChrome, a polychromatic holographic display framework designed to mitigate these limitations.
arXiv Detail & Related papers (2024-10-31T17:05:44Z)
- Super-resolution imaging using super-oscillatory diffractive neural networks [31.825503659600702]
A super-oscillatory diffractive neural network (SODNN) can achieve super-resolved spatial resolution for imaging beyond the diffraction limit.
SODNN is constructed from diffractive layers that implement optical interconnections for imaging samples or biological sensors.
Our research work will inspire the development of intelligent optical instruments to facilitate applications in imaging, sensing, and perception (a minimal diffractive-layer sketch follows this entry).
arXiv Detail & Related papers (2024-06-27T12:16:35Z)
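As context for the diffractive layers mentioned in the SODNN entry, here is a minimal sketch of one such layer: a phase mask followed by free-space propagation via the angular spectrum method. The mask here is random rather than trained, and all parameters (pitch, wavelength, propagation distance) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)          # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    # Propagation kernel; evanescent components (arg <= 0) are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# One "diffractive layer": a phase mask followed by free-space propagation.
rng = np.random.default_rng(0)
n = 256
phase = rng.uniform(0, 2 * np.pi, (n, n))    # random stand-in for a trained mask
field = np.ones((n, n), dtype=complex)       # plane-wave input
out = angular_spectrum_propagate(field * np.exp(1j * phase),
                                 wavelength=532e-9, pitch=8e-6, distance=0.05)
intensity = np.abs(out) ** 2                 # what a sensor would record
```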
- Pano-NeRF: Synthesizing High Dynamic Range Novel Views with Geometry from Sparse Low Dynamic Range Panoramic Images [82.1477261107279]
We propose irradiance fields from sparse LDR panoramic images to increase the observation count for faithful geometry recovery.
Experiments demonstrate that the irradiance fields outperform state-of-the-art methods on both geometry recovery and HDR reconstruction.
arXiv Detail & Related papers (2023-12-26T08:10:22Z)
- NeVRF: Neural Video-based Radiance Fields for Long-duration Sequences [53.8501224122952]
We propose a novel neural video-based radiance field (NeVRF) representation.
NeVRF marries neural radiance fields with image-based rendering to support photo-realistic novel view synthesis on long-duration dynamic inward-looking scenes.
Our experiments demonstrate the effectiveness of NeVRF in enabling long-duration sequence rendering, sequential data reconstruction, and compact data storage.
arXiv Detail & Related papers (2023-12-10T11:14:30Z)
- Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric- and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z)
- VR-NeRF: High-Fidelity Virtualized Walkable Spaces [55.51127858816994]
We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.
arXiv Detail & Related papers (2023-11-05T02:03:14Z)
- Neural Distortion Fields for Spatial Calibration of Wide Field-of-View Near-Eye Displays [7.683161309557347]
We propose a calibration method for wide Field-of-View (FoV) Near-Eye Displays (NEDs) with complex image distortions.
NDF is a fully connected deep neural network that implicitly represents display surfaces with complex distortions in space.
NDF calibrates an augmented reality NED with a 90° FoV to about 3.23 pixels (5.8 arcmin) median error using only eight training viewpoints (a quick sanity check of these figures follows this entry).
arXiv Detail & Related papers (2022-10-22T08:48:31Z)
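A quick sanity check of the NDF numbers above: relating the reported 3.23-pixel median error to the reported 5.8 arcmin implies the display's angular resolution. The resolution derived below is an inference from those two figures, not a value stated in the entry.

```python
# Convert the reported median error from pixels to angular resolution.
fov_deg = 90.0       # reported field of view
err_px = 3.23        # reported median error, pixels
err_arcmin = 5.8     # reported median error, arcmin

px_per_deg = err_px / (err_arcmin / 60.0)  # ~33.4 pixels per degree
implied_px = px_per_deg * fov_deg          # ~3000 px across the FoV (assumes uniform sampling)
print(f"{px_per_deg:.1f} px/deg, ~{implied_px:.0f} px across {fov_deg:.0f} deg")
```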
- A Novel Light Field Coding Scheme Based on Deep Belief Network & Weighted Binary Images for Additive Layered Displays [0.30458514384586394]
Stacking light-attenuating layers is one approach to implementing a light field display with a broader depth of field, wide viewing angles, and high resolution.
This paper proposes a novel framework for light field representation and coding that utilizes a Deep Belief Network (DBN) and weighted binary images (a minimal sketch of the layered image formation follows this entry).
arXiv Detail & Related papers (2022-10-04T08:18:06Z)
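For context on the layered-display entry above, below is a minimal sketch of the additive image formation named in the paper's title: each view of the light field is approximated by summing the stacked layer patterns under a per-layer parallax shift. The random layer contents, integer shift model, and normalization are illustrative assumptions; the paper's DBN decomposition into weighted binary images is not reproduced here.

```python
import numpy as np

# Additive layered display: a view is approximated by the sum of the stacked
# layer images, each shifted by a view-dependent parallax. Layer values here
# are random stand-ins for the decomposed weighted binary images.
rng = np.random.default_rng(1)
h, w, n_layers = 64, 64, 3
layers = rng.random((n_layers, h, w))

def render_view(layers, dx, dy):
    """Sum layers with a per-layer integer parallax shift (nearest-neighbor)."""
    view = np.zeros(layers.shape[1:])
    for k, layer in enumerate(layers):
        view += np.roll(layer, shift=(k * dy, k * dx), axis=(0, 1))
    return view / len(layers)   # normalization is an assumed convention

center_view = render_view(layers, 0, 0)
left_view = render_view(layers, -1, 0)
```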
- Time-multiplexed Neural Holography: A flexible framework for holographic near-eye displays with fast heavily-quantized spatial light modulators [44.73608798155336]
Holographic near-eye displays offer unprecedented capabilities for virtual and augmented reality systems.
We report advances in camera-calibrated wave propagation models for these types of holographic near-eye displays.
Our framework is flexible in supporting runtime supervision with different types of content, including 2D and 2.5D RGBD images, 3D focal stacks, and 4D light fields.
arXiv Detail & Related papers (2022-05-05T00:03:50Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)