Pupil-Adaptive 3D Holography Beyond Coherent Depth-of-Field
- URL: http://arxiv.org/abs/2409.00028v1
- Date: Sat, 17 Aug 2024 11:01:54 GMT
- Title: Pupil-Adaptive 3D Holography Beyond Coherent Depth-of-Field
- Authors: Yujie Wang, Baoquan Chen, Praneeth Chakravarthula
- Abstract summary: We propose a framework that bridges the gap between the coherent depth-of-field of holographic displays and what is seen in the real world due to incoherent light.
We introduce a learning framework that adjusts the receptive fields on-the-go based on the current state of the observer's eye pupil to produce image effects that otherwise are not possible in current computer-generated holography approaches.
- Score: 42.427021878005405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent holographic display approaches propelled by deep learning have shown remarkable success in enabling high-fidelity holographic projections. However, these displays have still not been able to demonstrate realistic focus cues, and a major gap still remains between the defocus effects possible with a coherent light-based holographic display and those exhibited by incoherent light in the real world. Moreover, existing methods have not considered the effects of the observer's eye pupil size variations on the perceived quality of 3D projections, especially on the defocus blur due to varying depth-of-field of the eye. In this work, we propose a framework that bridges the gap between the coherent depth-of-field of holographic displays and what is seen in the real world due to incoherent light. To this end, we investigate the effect of varying shape and motion of the eye pupil on the quality of holographic projections, and devise a method that changes the depth-of-the-field of holographic projections dynamically in a pupil-adaptive manner. Specifically, we introduce a learning framework that adjusts the receptive fields on-the-go based on the current state of the observer's eye pupil to produce image effects that otherwise are not possible in current computer-generated holography approaches. We validate the proposed method both in simulations and on an experimental prototype holographic display, and demonstrate significant improvements in the depiction of depth-of-field effects, outperforming existing approaches both qualitatively and quantitatively by at least 5 dB in peak signal-to-noise ratio.
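The pupil-size dependence of defocus described in the abstract can be illustrated with a toy pupil-plane model (this is a minimal NumPy sketch of standard Fourier optics, not the paper's learned framework; the function names, grid sizes, and the 0.5 D defocus value are illustrative assumptions). A circular aperture carries the quadratic wavefront error of a dioptric defocus; the incoherent PSF is the squared magnitude of the coherent one, and dilating the pupil spreads it out, i.e. shrinks the eye's depth of field:

```python
import numpy as np

def eye_defocus_psf(pupil_diameter_mm, defocus_diopters, n=256,
                    support_mm=10.0, wavelength_m=550e-9):
    """Toy pupil-plane model: a circular aperture with the quadratic
    wavefront error of a dioptric defocus. Returns the coherent PSF
    (complex amplitude) and the incoherent PSF (its squared magnitude,
    i.e. the intensity blur kernel)."""
    x = np.linspace(-support_mm / 2, support_mm / 2, n) * 1e-3  # meters
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2
    radius_m = pupil_diameter_mm * 1e-3 / 2
    aperture = (r2 <= radius_m**2).astype(float)
    # Defocus wavefront W(r) = (D/2) * r^2  ->  phase = 2*pi*W / lambda
    phase = np.pi * defocus_diopters * r2 / wavelength_m
    pupil = aperture * np.exp(1j * phase)
    coherent = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    incoherent = np.abs(coherent) ** 2
    return coherent, incoherent

def central_energy(psf, half=8):
    """Fraction of PSF energy inside the central (2*half)^2 patch --
    a crude proxy for how sharply a defocused point is imaged."""
    psf = psf / psf.sum()
    c = psf.shape[0] // 2
    return psf[c - half:c + half, c - half:c + half].sum()

# At a fixed 0.5 D defocus, a dilated pupil spreads the incoherent PSF:
# depth of field shrinks as the pupil grows.
for d_mm in (2.0, 4.0, 8.0):
    _, psf = eye_defocus_psf(d_mm, defocus_diopters=0.5)
    print(f"pupil {d_mm:.0f} mm -> central energy {central_energy(psf):.3f}")
```

The coherent PSF here is what a laser-lit holographic display natively produces, while the incoherent kernel is what real-world scenes exhibit; the gap between the two, and its dependence on pupil diameter, is exactly what the paper's pupil-adaptive framework is designed to close.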
Related papers
- Low-Light Enhancement Effect on Classification and Detection: An Empirical Study [48.6762437869172]
We evaluate the impact of Low-Light Image Enhancement (LLIE) methods on high-level vision tasks.
Our findings suggest a disconnect between image enhancement for human visual perception and for machine analysis.
This insight is crucial for the development of LLIE techniques that align with the needs of both human and machine vision.
arXiv Detail & Related papers (2024-09-22T14:21:31Z) - Data Generation Scheme for Thermal Modality with Edge-Guided Adversarial Conditional Diffusion Model [10.539491614216839]
This paper introduces a novel approach termed the edge guided conditional diffusion model.
It aims to produce meticulously aligned pseudo thermal images at the pixel level, leveraging edge information extracted from visible images.
Experiments on LLVIP demonstrate ECDM's superiority over existing state-of-the-art approaches in terms of image generation quality.
arXiv Detail & Related papers (2024-08-07T13:01:10Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - Curved Diffusion: A Generative Model With Optical Geometry Control [56.24220665691974]
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that intimately integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z) - Stochastic Light Field Holography [35.73147050231529]
The Visual Turing Test is the ultimate benchmark for evaluating the realism of holographic displays.
Previous studies have focused on addressing challenges such as limited étendue and image quality over a large focal volume.
We tackle this problem with a novel hologram generation algorithm motivated by matching the projection operators of incoherent light fields.
arXiv Detail & Related papers (2023-07-12T16:20:08Z) - Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar, a method that adopts a neural point representation and a neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z) - Learning Visibility Field for Detailed 3D Human Reconstruction and Relighting [19.888346124475042]
We propose a novel sparse-view 3D human reconstruction framework that closely incorporates the occupancy field and albedo field with an additional visibility field.
Results and experiments demonstrate the effectiveness of the proposed method, as it surpasses the state of the art in terms of reconstruction accuracy.
arXiv Detail & Related papers (2023-04-24T08:19:03Z) - Depth-Aware Multi-Grid Deep Homography Estimation with Contextual Correlation [38.95610086309832]
Homography estimation is an important computer vision task with applications such as image stitching, video stabilization, and camera calibration.
Traditional homography estimation methods depend on the quantity and distribution of feature points, leading to poor robustness in textureless scenes.
We propose a contextual correlation layer, which captures long-range correlation on feature maps and can be flexibly integrated into a learning framework.
We equip our network with depth perception capability, by introducing a novel depth-aware shape-preserved loss.
arXiv Detail & Related papers (2021-07-06T10:33:12Z) - Unsupervised Learning of Depth and Depth-of-Field Effect from Natural Images with Aperture Rendering Generative Adversarial Networks [15.546533383799309]
We propose aperture rendering generative adversarial networks (AR-GANs), which equip aperture rendering on top of GANs, and adopt focus cues to learn the depth and depth-of-field effect of unlabeled natural images.
In experiments, we demonstrate the effectiveness of AR-GANs on various datasets (flower, bird, and face images), show their portability by incorporating them into other 3D representation learning GANs, and validate their applicability to shallow DoF rendering.
arXiv Detail & Related papers (2021-06-24T14:15:50Z) - Face Forgery Detection by 3D Decomposition [72.22610063489248]
We consider a face image as the production of the intervention of the underlying 3D geometry and the lighting environment.
By disentangling the face image into 3D shape, common texture, identity texture, ambient light, and direct light, we find the devil lies in the direct light and the identity texture.
We propose to utilize facial detail, which is the combination of direct light and identity texture, as the clue to detect the subtle forgery patterns.
arXiv Detail & Related papers (2020-11-19T09:25:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.