Eyepiece-free pupil-optimized holographic near-eye displays
- URL: http://arxiv.org/abs/2507.22420v1
- Date: Wed, 30 Jul 2025 06:48:44 GMT
- Title: Eyepiece-free pupil-optimized holographic near-eye displays
- Authors: Jie Zhou, Shuyang Xie, Yang Wu, Lei Jiang, Yimou Luo, Jun Wang,
- Abstract summary: Computer-generated holography (CGH) represents a transformative visualization approach for next-generation immersive virtual and augmented reality (VR/AR) displays. In this study, we introduce an eyepiece-free pupil-optimized holographic NED. The method markedly mitigates image degradation due to finite pupil sampling and resolves inapparent depth cues induced by the spherical phase.
- Score: 28.827342633886378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer-generated holography (CGH) represents a transformative visualization approach for next-generation immersive virtual and augmented reality (VR/AR) displays, enabling precise wavefront modulation and naturally providing comprehensive physiological depth cues without the need for bulky optical assemblies. Despite significant advancements in computational algorithms enhancing image quality and achieving real-time generation, practical implementations of holographic near-eye displays (NEDs) continue to face substantial challenges arising from finite and dynamically varying pupil apertures, which degrade image quality and compromise user experience. In this study, we introduce an eyepiece-free pupil-optimized holographic NED. Our proposed method employs a customized spherical phase modulation strategy to generate multiple viewpoints within the pupil, entirely eliminating the dependence on conventional optical eyepieces. Through the joint optimization of amplitude and phase distributions across these viewpoints, the method markedly mitigates image degradation due to finite pupil sampling and resolves inapparent depth cues induced by the spherical phase. The demonstrated method signifies a substantial advancement toward the realization of compact, lightweight, and flexible holographic NED systems, fulfilling stringent requirements for future VR/AR display technologies.
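To make the abstract's "customized spherical phase modulation" concrete, the sketch below builds a wrapped quadratic (paraxial) approximation of a spherical phase on an SLM grid and shifts its centre to steer focus toward different viewpoints inside the pupil. Every parameter (wavelength, pixel pitch, resolution, focal distance) is an assumed illustrative value, not taken from the paper, and this is only a minimal sketch of the lens-like phase term, not the authors' joint amplitude/phase optimization.

```python
import numpy as np

# Assumed illustrative parameters (not from the paper).
wavelength = 532e-9   # green laser, metres
pitch = 8e-6          # SLM pixel pitch, metres
n = 1024              # SLM resolution (square, for simplicity)
f = 0.05              # focal distance of the synthetic "lens", metres

k = 2 * np.pi / wavelength
x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)

# Paraxial approximation of a spherical phase: the SLM itself acts as a lens,
# converging the wavefront toward a point at distance f, which is what removes
# the need for a physical eyepiece.
phase = -k * (X**2 + Y**2) / (2 * f)
phase_wrapped = np.mod(phase, 2 * np.pi)  # wrapped to [0, 2*pi) for an SLM

def viewpoint_phase(dx, dy):
    """Shift the phase centre by (dx, dy) metres to steer the focus laterally,
    giving one viewpoint per shift inside the pupil."""
    return np.mod(-k * ((X - dx) ** 2 + (Y - dy) ** 2) / (2 * f), 2 * np.pi)
```

Generating several such laterally shifted phase profiles (e.g. time-multiplexed or spatially interleaved) is one plausible reading of "multiple viewpoints within the pupil"; the actual multiplexing scheme is described in the paper itself.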
Related papers
- Learned Off-aperture Encoding for Wide Field-of-view RGBD Imaging [31.931929519577402]
This work explores an additional design choice by positioning a DOE off-aperture, enabling a spatial unmixing of the degrees of freedom. Experimental results reveal that the off-aperture DOE enhances the imaging quality by over 5 dB in PSNR at a FoV of approximately $45^\circ$ when paired with a simple thin lens.
arXiv Detail & Related papers (2025-07-30T09:49:47Z) - Fovea Stacking: Imaging with Dynamic Localized Aberration Correction [13.95616328498581]
Fovea Stacking is a new type of imaging system that utilizes dynamic optical components called deformable phase plates (DPPs) for localized aberration correction anywhere on the image sensor. By optimizing DPP deformations through a differentiable optical model, off-axis aberrations are corrected locally, producing a foveated image with enhanced sharpness at the fixation point - analogous to the eye's fovea.
arXiv Detail & Related papers (2025-05-31T21:15:27Z) - Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution, comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF)
arXiv Detail & Related papers (2024-09-21T04:39:45Z) - Pupil-Adaptive 3D Holography Beyond Coherent Depth-of-Field [42.427021878005405]
We propose a framework that bridges the gap between the coherent depth-of-field of holographic displays and what is seen in the real world due to incoherent light.
We introduce a learning framework that adjusts the receptive fields on-the-go based on the current state of the observer's eye pupil to produce image effects that otherwise are not possible in current computer-generated holography approaches.
arXiv Detail & Related papers (2024-08-17T11:01:54Z) - Revealing the preference for correcting separated aberrations in joint optic-image design [19.852225245159598]
We characterize the optics with separated aberrations to achieve efficient joint design of complex systems such as smartphones and drones.
An image simulation system is presented to reproduce the genuine imaging procedure of lenses with large field-of-views.
Experiments reveal that the preference for correcting separated aberrations in joint design is as follows: longitudinal chromatic aberration, lateral chromatic aberration, spherical aberration, field curvature, and coma, with astigmatism coming last.
arXiv Detail & Related papers (2023-09-08T14:12:03Z) - Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar, a method that adopts the neural point representation and the neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z) - Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z) - Time-multiplexed Neural Holography: A flexible framework for holographic near-eye displays with fast heavily-quantized spatial light modulators [44.73608798155336]
Holographic near-eye displays offer unprecedented capabilities for virtual and augmented reality systems.
We report advances in camera-calibrated wave propagation models for these types of holographic near-eye displays.
Our framework is flexible in supporting runtime supervision with different types of content, including 2D and 2.5D RGBD images, 3D focal stacks, and 4D light fields.
arXiv Detail & Related papers (2022-05-05T00:03:50Z) - Neural Étendue Expander for Ultra-Wide-Angle High-Fidelity Holographic Display [51.399291206537384]
Modern holographic displays possess low étendue, which is the product of the display area and the maximum solid angle of diffracted light.
We present neural étendue expanders, which are learned from a natural image dataset.
With neural étendue expanders, we experimentally achieve 64$\times$ étendue expansion of natural images in full color, expanding the FOV by an order of magnitude horizontally and vertically.
arXiv Detail & Related papers (2021-09-16T17:21:52Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
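The 64$\times$ étendue expansion quoted in the Neural Étendue Expander entry above can be sanity-checked with a back-of-the-envelope calculation: étendue is (approximately) panel area times the solid angle of diffracted light, and the maximum diffraction half-angle of an SLM is set by its pixel pitch. The wavelength and pitch below are assumed illustrative values, not figures from that paper.

```python
import numpy as np

# Assumed illustrative SLM parameters (not from the paper).
wavelength = 532e-9            # metres, green
pitch = 8e-6                   # SLM pixel pitch, metres
width = height = 1024 * pitch  # panel dimensions, metres

# Maximum first-order diffraction half-angle: sin(theta) = lambda / (2 * pitch).
theta = np.arcsin(wavelength / (2 * pitch))

# Small-cone approximation of the diffracted solid angle, and the étendue
# G = area * solid_angle that the expander enlarges.
solid_angle = np.pi * np.sin(theta) ** 2
etendue = width * height * solid_angle

# A 64x étendue expansion at fixed panel area splits evenly across the two
# angular dimensions: sqrt(64) = 8x per axis, i.e. roughly an order of
# magnitude of FOV gain both horizontally and vertically.
expansion = 64
fov_gain_per_axis = np.sqrt(expansion)
```

With these numbers the native diffraction half-angle is only about 2 degrees, which is why étendue expansion is needed for wide-FOV holographic displays in the first place.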
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.