AutoColor: Learned Light Power Control for Multi-Color Holograms
- URL: http://arxiv.org/abs/2305.01611v2
- Date: Mon, 29 Jan 2024 11:16:46 GMT
- Title: AutoColor: Learned Light Power Control for Multi-Color Holograms
- Authors: Yicheng Zhan, Koray Kavaklı, Hakan Urey, Qi Sun, Kaan Akşit
- Abstract summary: Multi-color holograms rely on simultaneous illumination from multiple light sources.
We introduce AutoColor, the first learned method for estimating the optimal light source powers required for illuminating multi-color holograms.
- Score: 15.655689651318033
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-color holograms rely on simultaneous illumination from multiple light sources. These multi-color holograms could utilize light sources better than conventional single-color holograms and can improve the dynamic range of holographic displays. In this letter, we introduce AutoColor, the first learned method for estimating the optimal light source powers required for illuminating multi-color holograms. For this purpose, we establish the first multi-color hologram dataset using synthetic images and their depth information. We generate these synthetic images using a trending pipeline combining generative, large language, and monocular depth estimation models. Finally, we train our learned model using our dataset and experimentally demonstrate that AutoColor significantly decreases the number of steps required to optimize multi-color holograms, from more than 1000 to 70 iterations, without compromising image quality.
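To make the setup concrete, here is a minimal sketch of what such a learned power estimator could look like in PyTorch; the architecture, module names, and shapes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a learned light-power estimator: a small CNN maps a
# target RGB image to three laser power levels, one per color primary.
import torch
import torch.nn as nn

class PowerEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # -> (B, 32, 1, 1)
        )
        self.head = nn.Linear(32, 3)                 # one power per primary

    def forward(self, image):                        # image: (B, 3, H, W) in [0, 1]
        x = self.features(image).flatten(1)          # -> (B, 32)
        return torch.sigmoid(self.head(x))           # powers constrained to (0, 1)

model = PowerEstimator()
target = torch.rand(1, 3, 256, 256)                  # placeholder target image
powers = model(target)                               # -> tensor of shape (1, 3)
```

Under this reading, the predicted powers warm-start the multi-color hologram optimization, which is consistent with the reported drop from more than 1000 iterations to 70.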
Related papers
- HoloChrome: Polychromatic Illumination for Speckle Reduction in Holographic Near-Eye Displays [8.958725481270807]
Holographic displays hold the promise of providing authentic depth cues, resulting in enhanced immersive visual experiences for near-eye applications.
Current holographic displays are hindered by speckle noise, which limits accurate reproduction of color and texture in displayed images.
We present HoloChrome, a polychromatic holographic display framework designed to mitigate these limitations.
arXiv Detail & Related papers (2024-10-31T17:05:44Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Configurable Learned Holography [33.45219677645646]
We introduce a learned model that interactively computes 3D holograms from RGB-only 2D images for a variety of holographic displays.
Our hologram computation relies on identifying the correlation between the depth estimation and 3D hologram synthesis tasks; a sketch of this idea follows this entry.
arXiv Detail & Related papers (2024-03-24T13:57:30Z)
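To picture the correlation noted in the Configurable Learned Holography entry above, the following speculative sketch pairs a depth head and a phase (hologram) head on one shared backbone; all names and shapes are assumptions, not the paper's architecture.

```python
# Speculative sketch: a shared backbone feeding both a depth head and a
# hologram phase head, so the two correlated tasks share features.
import torch
import torch.nn as nn

class JointDepthHologramNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.phase_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, rgb):                          # rgb: (B, 3, H, W)
        f = self.backbone(rgb)
        depth = self.depth_head(f)                   # per-pixel depth estimate
        phase = torch.pi * torch.tanh(self.phase_head(f))  # phase in (-pi, pi)
        return depth, phase
```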
- Attentive Illumination Decomposition Model for Multi-Illuminant White Balancing [27.950125640986805]
White balance (WB) algorithms in many commercial cameras assume single and uniform illumination.
We present a deep white balancing model that leverages slot attention, where each slot is in charge of representing an individual illuminant.
This design enables the model to generate chromaticities and weight maps for individual illuminants, which are then fused to compose the final illumination map; the fusion step is sketched after this entry.
arXiv Detail & Related papers (2024-02-28T12:15:29Z)
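The fusion step described in the white-balancing entry above reduces to a weighted sum of per-slot chromaticities; here is a minimal sketch, with the slot count, shapes, and softmax normalization assumed rather than taken from the paper.

```python
# Hypothetical fusion of per-slot illuminant chromaticities with spatial
# weight maps into one illumination map.
import torch

def fuse_illuminants(chromas, weights):
    """chromas: (K, 3) per-slot illuminant colors;
    weights: (K, H, W) spatial weight maps (unnormalized logits)."""
    weights = torch.softmax(weights, dim=0)           # slots compete per pixel
    # (K, 3, 1, 1) * (K, 1, H, W), summed over slots -> (3, H, W)
    return (chromas[:, :, None, None] * weights[:, None]).sum(dim=0)

illumination = fuse_illuminants(torch.rand(4, 3), torch.randn(4, 64, 64))
print(illumination.shape)                             # torch.Size([3, 64, 64])
```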
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Generative Models for Multi-Illumination Color Constancy [23.511249515559122]
We propose a physics-driven, seed-based multi-illumination color constancy method.
GANs are used to model illumination estimation as an image-to-image domain translation problem.
Experiments on single- and multi-illumination datasets show that our methods outperform state-of-the-art (SOTA) methods.
arXiv Detail & Related papers (2021-09-02T12:24:40Z)
- Learned holographic light transport [2.642698101441705]
Holography algorithms often fall short in matching simulations with results from a physical holographic display.
Our work addresses this mismatch by learning the holographic light transport in holographic displays.
Our method can dramatically improve simulation accuracy and image quality in holographic displays.
arXiv Detail & Related papers (2021-08-01T12:05:33Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors as well as with additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights; the neighbor-aggregation step is sketched after this entry.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
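Only the neighbor-aggregation step of the light-stage entry above is sketched below, before a network would synthesize the final rendering; the cosine-similarity weighting is an assumption for illustration.

```python
# Toy sketch: blend the captured images of the k stage lights nearest to a
# query light direction (weighting scheme assumed for illustration).
import torch

def aggregate_neighbors(images, light_dirs, query_dir, k=4):
    """images: (N, 3, H, W), one capture per stage light;
    light_dirs: (N, 3) unit vectors; query_dir: (3,) unit vector."""
    sims = light_dirs @ query_dir                     # cosine similarities, (N,)
    vals, idx = sims.topk(k)                          # k nearest stage lights
    w = torch.softmax(vals, dim=0)                    # soft weights over neighbors
    return (w[:, None, None, None] * images[idx]).sum(dim=0)   # -> (3, H, W)

imgs = torch.rand(8, 3, 32, 32)
dirs = torch.nn.functional.normalize(torch.randn(8, 3), dim=1)
blend = aggregate_neighbors(imgs, dirs, torch.tensor([0.0, 0.0, 1.0]))
```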
- Scene relighting with illumination estimation in the latent space on an encoder-decoder scheme [68.8204255655161]
In this report, we present the methods we explored for scene relighting.
Our models are trained on a rendered dataset of artificial locations with varied scene content, light source location and color temperature.
With this dataset, we used a network with illumination estimation component aiming to infer and replace light conditions in the latent space representation of the concerned scenes.
arXiv Detail & Related papers (2020-06-03T15:25:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.