Real-Time Computational Visual Aberration Correcting Display Through High-Contrast Inverse Blurring
- URL: http://arxiv.org/abs/2501.01450v1
- Date: Mon, 30 Dec 2024 11:15:45 GMT
- Title: Real-Time Computational Visual Aberration Correcting Display Through High-Contrast Inverse Blurring
- Authors: Akhilesh Balaji, Dhruv Ramu
- Abstract summary: We develop a live vision-correcting display (VCD) to address refractive visual aberrations without the need for glasses or contact lenses.
We achieve this correction through deconvolution of the displayed image using a point spread function (PSF) associated with the viewer's eye.
The results of our display demonstrate significant improvements in visual clarity, achieving a structural similarity index (SSIM) of 83.04%.
- Abstract: This paper presents a framework for developing a live vision-correcting display (VCD) to address refractive visual aberrations without the need for traditional vision correction devices like glasses or contact lenses, particularly in scenarios where wearing them may be inconvenient. We achieve this correction through deconvolution of the displayed image using a point spread function (PSF) associated with the viewer's eye. We address ringing artefacts using a masking technique applied to the prefiltered image. We also enhance the display's contrast and reduce color distortion by operating in the YUV/YCbCr color space, where deconvolution is performed solely on the luma (brightness) channel. Finally, we introduce a technique to calculate a real-time PSF that adapts based on the viewer's spherical coordinates relative to the screen. This ensures that the PSF remains accurate and undistorted even when the viewer observes the display from an angle relative to the screen normal, thereby providing consistent visual correction regardless of the viewing angle. The results of our display demonstrate significant improvements in visual clarity, achieving a structural similarity index (SSIM) of 83.04%, highlighting the effectiveness of our approach.
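As a rough illustration of the pipeline described above, the following numpy sketch applies a Wiener inverse filter to the luma channel only, leaving chroma untouched. The function name, the BT.601 color transform, the regularization constant `k`, and the final clipping step (a crude stand-in for the paper's masking technique) are illustrative assumptions, not the authors' implementation; the viewer-position-dependent PSF computation is omitted.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def wiener_prefilter_luma(rgb, psf, k=0.01):
    """Prefilter the luma channel so that, after blurring by the eye's PSF,
    the perceived image approximates the original (Wiener inverse filter).
    rgb: float image in [0, 1], shape (H, W, 3); psf: centered, shape (H, W),
    sums to 1. k regularizes near-zero PSF frequencies to limit ringing."""
    # BT.601 RGB -> YCbCr: deconvolution is applied only to the luma (Y) channel.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b

    # Wiener inverse: Y_pre = Y * conj(H) / (|H|^2 + k) in the frequency domain.
    H = fft2(np.fft.ifftshift(psf))
    y_pre = np.real(ifft2(fft2(y) * np.conj(H) / (np.abs(H) ** 2 + k)))

    # Clip to the displayable range -- a crude stand-in for the paper's
    # masking technique, which handles out-of-range ringing more carefully.
    y_pre = np.clip(y_pre, 0.0, 1.0)

    # YCbCr -> RGB with the prefiltered luma and untouched chroma.
    out = np.stack([y_pre + 1.402 * cr,
                    y_pre - 0.344136 * cb - 0.714136 * cr,
                    y_pre + 1.772 * cb], axis=-1)
    return np.clip(out, 0.0, 1.0)
```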
Related papers
- View-consistent Object Removal in Radiance Fields [14.195400035176815]
Radiance Fields (RFs) have emerged as a crucial technology for 3D scene representation.
Current methods rely on per-frame 2D image inpainting, which often fails to maintain consistency across views.
We introduce a novel RF editing pipeline that significantly enhances consistency by requiring the inpainting of only a single reference image.
arXiv Detail & Related papers (2024-08-04T17:57:23Z)
- ColorVideoVDP: A visual difference predictor for image, video and display distortions [51.29162719944865]
The metric is built on novel psychophysical models of chromatic contrast sensitivity and cross-channel contrast masking.
It accounts for the viewing conditions, geometric, and photometric characteristics of the display.
It was trained to predict common video streaming distortions and 8 new distortion types related to AR/VR displays.
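The cross-channel contrast masking mentioned here follows the general shape of a divisive-normalization transducer; a toy sketch (constants are illustrative, not ColorVideoVDP's fitted values):

```python
import numpy as np

def masked_response(c_test, c_mask, p=2.4, q=2.2, sigma=1.0, w=0.3):
    """Toy divisive-normalization transducer: the response to a test
    contrast c_test is suppressed by a masking contrast c_mask from
    another channel. Constants are illustrative only."""
    return np.sign(c_test) * np.abs(c_test) ** p / (sigma ** q + w * np.abs(c_mask) ** q)

# The same test contrast is less visible when a cross-channel mask is present:
print(masked_response(0.2, 0.0))  # unmasked response
print(masked_response(0.2, 0.5))  # suppressed by cross-channel masking
```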
arXiv Detail & Related papers (2024-01-21T13:16:33Z)
- Curved Diffusion: A Generative Model With Optical Geometry Control [56.24220665691974]
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that intimately integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z)
- CNN Injected Transformer for Image Exposure Correction [20.282217209520006]
Previous exposure correction methods based on convolutions often produce exposure deviation in images.
We propose a CNN Injected Transformer (CIT) to harness the individual strengths of CNN and Transformer simultaneously.
In addition to the hybrid architecture design for exposure correction, we apply a set of carefully formulated loss functions to improve the spatial coherence and rectify potential color deviations.
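The hybrid design can be pictured with a generic "conv branch injected into a transformer block" pattern; the sketch below illustrates that pattern only and is not the CIT architecture:

```python
import torch
import torch.nn as nn

class ConvInjectedBlock(nn.Module):
    """Generic CNN-injected transformer block: a depthwise conv branch
    (local detail) is added alongside self-attention (long-range context)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.dwconv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x, h, w):              # x: (B, h*w, dim) token sequence
        B, N, C = x.shape
        y = self.norm1(x)
        attn_out, _ = self.attn(y, y, y)      # global branch
        conv_out = self.dwconv(y.transpose(1, 2).reshape(B, C, h, w))
        x = x + attn_out + conv_out.flatten(2).transpose(1, 2)  # inject local branch
        return x + self.mlp(self.norm2(x))

# Usage: block = ConvInjectedBlock(64); out = block(torch.randn(2, 16 * 16, 64), 16, 16)
```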
arXiv Detail & Related papers (2023-09-08T14:53:00Z)
- Differentiable Display Photometric Stereo [15.842538322034537]
Photometric stereo leverages variations in illumination conditions to reconstruct surface normals.
We present differentiable display photometric stereo (DDPS), addressing the design of display patterns.
DDPS learns the display patterns that yield accurate normal reconstruction for a target system in an end-to-end manner.
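The reconstruction underneath is, at its core, classic Lambertian photometric stereo; a minimal least-squares version with known lights (illustrative, not the DDPS pipeline):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classic Lambertian photometric stereo: given K >= 3 images of a static
    scene under K known directional lights, recover per-pixel normals.
    images: (K, H, W) grayscale intensities; lights: (K, 3) unit directions.
    Solves I = L @ (albedo * n) per pixel by least squares."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                       # (K, H*W)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, H*W) albedo-scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```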
arXiv Detail & Related papers (2023-06-23T07:05:08Z)
- Spatiotemporal Deformation Perception for Fisheye Video Rectification [44.332845280150785]
We propose a temporal weighting scheme to get a plausible global optical flow.
We derive the spatial deformation from the flows of the fisheye and distortion-free videos.
A temporal deformation aggregator is designed to reconstruct the deformation correlation between frames.
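The exact weighting scheme is not given in this summary; as a generic illustration only, a temporally weighted aggregation of per-frame flows might look like:

```python
import numpy as np

def temporally_weighted_flow(flows, decay=0.8):
    """Generic temporal weighting of per-frame optical flows: recent frames
    receive higher weight, yielding a smoothed global flow. The paper's
    actual scheme may differ. flows: (T, H, W, 2) per-frame flow fields."""
    T = flows.shape[0]
    w = decay ** np.arange(T - 1, -1, -1, dtype=np.float64)  # oldest -> smallest
    w /= w.sum()
    return np.tensordot(w, flows, axes=(0, 0))               # (H, W, 2)
```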
arXiv Detail & Related papers (2023-02-08T08:17:50Z)
- UIA-ViT: Unsupervised Inconsistency-Aware Method based on Vision Transformer for Face Forgery Detection [52.91782218300844]
We propose a novel Unsupervised Inconsistency-Aware method based on Vision Transformer, called UIA-ViT.
Due to the self-attention mechanism, the attention map among patch embeddings naturally represents the consistency relation, making the Vision Transformer well suited to consistency representation learning.
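A toy way to read a patch-pair consistency relation off a ViT attention map (illustrative only, not UIA-ViT's unsupervised objective):

```python
import torch

def attention_consistency(attn):
    """attn: (heads, N, N) self-attention weights over N patch tokens.
    Averaging heads and symmetrizing gives a patch-to-patch affinity;
    low-consistency regions can hint at forged patches."""
    a = attn.mean(dim=0)                   # average heads: (N, N)
    return 0.5 * (a + a.transpose(0, 1))   # symmetric patch-pair affinity
```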
arXiv Detail & Related papers (2022-10-23T15:24:47Z)
- Neural Étendue Expander for Ultra-Wide-Angle High-Fidelity Holographic Display [51.399291206537384]
Modern holographic displays possess low étendue, which is the product of the display area and the maximum solid angle of diffracted light.
We present neural étendue expanders, which are learned from a natural image dataset.
With neural étendue expanders, we experimentally achieve 64× étendue expansion of natural images in full color, expanding the FOV by an order of magnitude horizontally and vertically.
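The numbers are consistent: with the display area fixed, a 64× étendue expansion means a 64× larger solid angle, i.e. roughly 8× wider FOV per axis in the small-angle regime (the baseline FOV below is an assumed value for illustration):

```python
import numpy as np

# Etendue ~ display area x solid angle of diffracted light. With the area
# fixed, 64x etendue expansion gives a 64x larger solid angle, i.e. roughly
# an 8x wider field of view along each axis (small-angle regime).
fov_before_deg = 2.0                               # illustrative baseline FOV
omega_before = 2 * np.pi * (1 - np.cos(np.radians(fov_before_deg / 2)))
omega_after = 64 * omega_before                    # 64x etendue expansion
fov_after = 2 * np.degrees(np.arccos(1 - omega_after / (2 * np.pi)))
print(fov_after / fov_before_deg)                  # ~8, one order of magnitude
```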
arXiv Detail & Related papers (2021-09-16T17:21:52Z)
- Intriguing Properties of Vision Transformers [114.28522466830374]
Vision transformers (ViT) have demonstrated impressive performance across various machine vision problems.
We systematically study their properties via an extensive set of experiments and comparisons with a high-performing convolutional neural network (CNN).
We show that the effective features of ViTs are due to flexible and dynamic receptive fields made possible by the self-attention mechanism.
arXiv Detail & Related papers (2021-05-21T17:59:18Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for image relighting of single view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
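The "two separate encodings" idea can be sketched as an autoencoder with a content encoder and an illumination encoder whose codes can be swapped between images; layer sizes here are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TwoCodeAutoEncoder(nn.Module):
    """Toy two-encoding autoencoder: one encoder for scene content, one for
    illumination; swapping the illumination code relights an image."""
    def __init__(self, dim=64):
        super().__init__()
        self.content = nn.Sequential(nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU(),
                                     nn.Conv2d(dim, dim, 4, 2, 1))
        self.light = nn.Sequential(nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU(),
                                   nn.Conv2d(dim, dim, 4, 2, 1))
        self.decode = nn.Sequential(nn.ConvTranspose2d(2 * dim, dim, 4, 2, 1),
                                    nn.ReLU(),
                                    nn.ConvTranspose2d(dim, 3, 4, 2, 1))

    def forward(self, img, light_src=None):
        c = self.content(img)                                   # content code
        l = self.light(light_src if light_src is not None else img)  # light code
        return self.decode(torch.cat([c, l], dim=1))

# Usage: relit = model(img_a, light_src=img_b)  # img_a under img_b's lighting
```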
arXiv Detail & Related papers (2020-12-11T16:08:50Z)