Learned holographic light transport
- URL: http://arxiv.org/abs/2108.08253v1
- Date: Sun, 1 Aug 2021 12:05:33 GMT
- Title: Learned holographic light transport
- Authors: Koray Kavaklı, Hakan Urey, Kaan Akşit
- Abstract summary: Holography algorithms often fall short in matching simulations with results from a physical holographic display.
Our work addresses this mismatch by learning the holographic light transport in holographic displays.
Our method can dramatically improve simulation accuracy and image quality in holographic displays.
- Score: 2.642698101441705
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Computer-Generated Holography (CGH) algorithms often fall short in matching
simulations with results from a physical holographic display. Our work
addresses this mismatch by learning the holographic light transport in
holographic displays. Using a camera and a holographic display, we capture the
image reconstructions of optimized holograms that rely on ideal simulations to
generate a dataset. Inspired by the ideal simulations, we learn a
complex-valued convolution kernel that can propagate given holograms to
captured photographs in our dataset. Our method can dramatically improve
simulation accuracy and image quality in holographic displays while paving the
way for physically informed learning approaches.
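To make the abstract's core idea concrete, here is a minimal sketch of fitting a complex-valued convolution kernel so that simulated reconstructions of phase-only holograms match camera captures. This is illustrative PyTorch only, not the authors' released implementation; the resolution, learning rate, FFT-domain kernel parameterization, and the phase-only display assumption are all assumptions introduced here.

```python
# Minimal, illustrative sketch (not the authors' released code): fit a
# complex-valued propagation kernel so that simulated reconstructions of
# phase-only holograms match photographs captured on the physical display.
import torch

resolution = (1080, 1920)  # assumed spatial light modulator resolution

# Start from an ideal transfer function (e.g. angular spectrum); a constant
# placeholder is used here so the sketch stays self-contained.
ideal_kernel_fft = torch.ones(resolution, dtype=torch.complex64)

# Parameterize the learned kernel by its real and imaginary parts.
kernel_real = torch.nn.Parameter(ideal_kernel_fft.real.clone())
kernel_imag = torch.nn.Parameter(ideal_kernel_fft.imag.clone())
optimizer = torch.optim.Adam([kernel_real, kernel_imag], lr=2e-3)

def propagate(field: torch.Tensor) -> torch.Tensor:
    """Convolve a complex field with the learned kernel via the FFT."""
    kernel_fft = torch.complex(kernel_real, kernel_imag)
    return torch.fft.ifft2(torch.fft.fft2(field) * kernel_fft)

def training_step(phase_hologram: torch.Tensor, captured_photo: torch.Tensor) -> float:
    """One step: match the simulated intensity to one captured photograph (float tensor)."""
    optimizer.zero_grad()
    field = torch.exp(1j * phase_hologram)   # phase-only hologram to complex field
    intensity = propagate(field).abs() ** 2  # simulated camera image
    loss = torch.nn.functional.mse_loss(intensity, captured_photo)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice one would loop `training_step` over the captured hologram/photograph pairs in the dataset; initializing from, and regularizing toward, an ideal transfer function is one plausible way to stay close to the ideal simulations the abstract mentions.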
Related papers
- SynFog: A Photo-realistic Synthetic Fog Dataset based on End-to-end Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving [48.27575423606407]
We introduce an end-to-end simulation pipeline designed to generate photo-realistic foggy images.
We present a new synthetic fog dataset named SynFog, which features both sky light and active lighting conditions.
Experimental results demonstrate that models trained on SynFog exhibit superior performance in visual perception and detection accuracy.
arXiv Detail & Related papers (2024-03-25T18:32:41Z) - Configurable Learned Holography [33.45219677645646]
We introduce a learned model that interactively computes 3D holograms from RGB-only 2D images for a variety of holographic displays.
Our hologram computation exploits the correlation between the depth estimation and 3D hologram synthesis tasks.
arXiv Detail & Related papers (2024-03-24T13:57:30Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - Supervised Homography Learning with Realistic Dataset Generation [60.934401870005026]
We propose an iterative framework, which consists of two phases: a generation phase and a training phase.
In the generation phase, given an unlabeled image pair, we utilize the pre-estimated dominant plane masks and homography of the pair to generate realistic training data.
In the training phase, the generated data is used to train the supervised homography network.
arXiv Detail & Related papers (2023-07-28T07:03:18Z) - Stochastic Light Field Holography [35.73147050231529]
The Visual Turing Test is the ultimate goal for evaluating the realism of holographic displays.
Previous studies have focused on addressing challenges such as limited étendue and image quality over a large focal volume.
We tackle this problem with a novel hologram generation algorithm motivated by matching the projection operators of an incoherent light field.
arXiv Detail & Related papers (2023-07-12T16:20:08Z) - Mimicking non-ideal instrument behavior for hologram processing using
neural style translation [0.0]
Holographic cloud probes provide unprecedented information on cloud particle density, size and position.
Processing these holograms requires considerable computational resources, time, and occasional human intervention.
Here we demonstrate the application of neural style translation to simulated holograms.
arXiv Detail & Related papers (2023-01-07T01:01:27Z) - Generic Lithography Modeling with Dual-band Optics-Inspired Neural
Networks [52.200624127512874]
We introduce a dual-band optics-inspired neural network design that considers the optical physics underlying lithography.
Our approach yields the first published via/metal layer contour simulation at 1 nm²/pixel resolution with any tile size.
We also achieve an 85× simulation speedup over a traditional lithography simulator with 1% accuracy loss.
arXiv Detail & Related papers (2022-03-12T08:08:50Z) - Image quality enhancement of embedded holograms in holographic
information hiding using deep neural networks [0.0]
The brightness of an embedded hologram is set to a fraction of that of the host hologram, so the reconstructed image of the host hologram is barely damaged.
The embedded hologram's reconstructed image is therefore darker than the reconstructed host image and difficult to perceive (a toy sketch of this brightness scaling follows the list below).
In this study, we use deep neural networks to restore the darkened image.
arXiv Detail & Related papers (2021-12-20T01:21:28Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image
Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z) - Deep DIH : Statistically Inferred Reconstruction of Digital In-Line
Holography by Deep Learning [1.4619386068190985]
Digital in-line holography is commonly used to reconstruct 3D images from 2D holograms for microscopic objects.
In this paper, we propose a novel implementation of an autoencoder-based deep learning architecture for single-shot hologram reconstruction.
arXiv Detail & Related papers (2020-04-25T20:39:25Z) - Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
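As a toy illustration of the brightness-scaling idea in the holographic information hiding entry above: the sketch below only shows the arithmetic of embedding a hologram at a fraction of the host's brightness and why its reconstruction comes out dark. The additive embedding, the resolution, and the placeholder propagation kernel are assumptions for illustration and not that paper's actual scheme.

```python
# Toy illustration only: embed a hologram at a fraction of the host's
# brightness and observe that its reconstruction is correspondingly dark.
# The additive embedding and the placeholder kernel are assumptions.
import torch

def reconstruct(field: torch.Tensor, kernel_fft: torch.Tensor) -> torch.Tensor:
    """Illustrative numerical reconstruction via Fourier-domain convolution."""
    return torch.fft.ifft2(torch.fft.fft2(field) * kernel_fft).abs() ** 2

resolution = (512, 512)                                      # assumed resolution
host = torch.randn(resolution, dtype=torch.complex64)        # placeholder host hologram
hidden = torch.randn(resolution, dtype=torch.complex64)      # placeholder embedded hologram
kernel_fft = torch.ones(resolution, dtype=torch.complex64)   # placeholder propagation kernel

alpha = 0.1                        # embedded brightness as a fraction of the host's
combined = host + alpha * hidden   # the host reconstruction is barely disturbed

host_image = reconstruct(combined, kernel_fft)
hidden_image = reconstruct(alpha * hidden, kernel_fft)       # alpha**2 of the full intensity
print(hidden_image.mean() / reconstruct(hidden, kernel_fft).mean())  # ~0.01, i.e. dark
```

A deep network, as in that paper, would then be trained to restore such darkened reconstructions; that part is omitted here.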
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.