Simulation of a Vision Correction Display System
- URL: http://arxiv.org/abs/2404.08238v1
- Date: Fri, 12 Apr 2024 04:45:51 GMT
- Title: Simulation of a Vision Correction Display System
- Authors: Vidya Sunil, Renu M Rameshan
- Abstract summary: This paper focuses on simulating a Vision Correction Display (VCD) to enhance the visual experience of individuals with various visual impairments.
With these simulations we can see potential improvements in visual acuity and comfort.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Eyes serve as our primary sensory organs, responsible for processing up to 80% of our sensory input. However, common visual aberrations like myopia and hyperopia affect a significant portion of the global population. This paper focuses on simulating a Vision Correction Display (VCD) to enhance the visual experience of individuals with various visual impairments. Utilising Blender, we digitally model the functionality of a VCD in correcting refractive errors such as myopia and hyperopia. With these simulations we can see potential improvements in visual acuity and comfort. These simulations provide valuable insights for the design and development of future VCD technologies, ultimately advancing accessibility and usability for individuals with visual challenges.
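The abstract does not spell out the Blender setup, so the following is only a minimal sketch of how a refractive error such as myopia could be simulated through Blender's Python API, using the camera's depth-of-field model as a stand-in for the eye's optics. The object name, prescription value, and output path are hypothetical.

```python
import bpy

# Hypothetical scene: the default camera stands in for the viewer's eye and
# a textured plane placed in front of it stands in for the display.
cam = bpy.data.objects["Camera"]

# A myopic eye focuses no farther than its far point. Under the thin-lens
# approximation, a -2.0 D refractive error puts that far point at
# 1 / 2.0 = 0.5 m (both numbers are assumed, not from the paper).
prescription_diopters = -2.0
far_point_m = 1.0 / abs(prescription_diopters)

cam.data.dof.use_dof = True
cam.data.dof.focus_distance = far_point_m   # eye locked at its far point
cam.data.dof.aperture_fstop = 2.0           # stands in for pupil size

# Anything beyond 0.5 m now renders defocused: the uncorrected myopic view.
bpy.context.scene.render.filepath = "//myopia_uncorrected.png"
bpy.ops.render.render(write_still=True)
```

Rendering the same scene again with a pre-distorted VCD image plane placed at the far point would then show the corrected view for comparison.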
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z)
- ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models [55.07988373824348]
We study the visual generalization capabilities of three existing robotic foundation models.
Our study shows that the existing models do not exhibit robustness to visual out-of-domain scenarios.
We propose a gradual backbone reversal approach founded on model merging.
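The summary names model merging as the core operation but not the schedule, so below is only a generic sketch of linear checkpoint interpolation; the helper name and the idea of sweeping the coefficient to gradually revert the visual backbone are assumptions, not ReVLA's published recipe.

```python
def merge_checkpoints(sd_pretrained, sd_finetuned, alpha):
    """Hypothetical helper: linearly interpolate two PyTorch state dicts
    (values are torch.Tensors). alpha=0 keeps the fine-tuned weights;
    alpha=1 fully reverts to the pretrained backbone."""
    return {k: alpha * sd_pretrained[k] + (1.0 - alpha) * sd_finetuned[k]
            for k in sd_finetuned}

# Usage sketch (names hypothetical): revert the vision encoder in steps.
# for alpha in (0.25, 0.5, 0.75, 1.0):
#     model.visual.load_state_dict(merge_checkpoints(sd_pre, sd_ft, alpha))
```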
arXiv Detail & Related papers (2024-09-23T17:47:59Z)
- ChromaCorrect: Prescription Correction in Virtual Reality Headsets through Perceptual Guidance [3.365646526465954]
Eyeglasses cause additional bulk and discomfort when used with augmented and virtual reality headsets.
We propose a prescription-aware rendering approach for providing sharper and more immersive VR imagery.
We evaluate our approach on various displays, including desktops and VR headsets, and show significant quality and contrast improvements for users with vision impairments.
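ChromaCorrect's actual method is perceptually guided and only summarized above; purely as an illustration of the general idea of prescription-aware pre-correction, here is a NumPy sketch that Wiener-prefilters an image against an assumed Gaussian defocus kernel, so that subsequent blurring by the eye approximately cancels out. The Gaussian PSF and regularization constant are stand-ins, not the paper's model.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Assumed stand-in for the eye's defocus blur kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_precorrect(image, psf, k=1e-2):
    """Pre-filter a grayscale image in [0, 1] so that convolving the result
    with `psf` (the eye's blur) approximately reproduces the original."""
    h = np.zeros(image.shape, dtype=float)
    h[:psf.shape[0], :psf.shape[1]] = psf
    # Center the kernel at the origin so the transfer function has no shift.
    h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(h)
    W = np.conj(H) / (np.abs(H) ** 2 + k)  # regularized (Wiener) inverse
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    return np.clip(out, 0.0, 1.0)          # displays cannot emit out-of-range light
```

The final clipping step is why naive prefiltering tends to cost contrast, which perceptually guided approaches like ChromaCorrect aim to spend more carefully.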
arXiv Detail & Related papers (2022-12-08T13:30:17Z)
- Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training [66.0036211069513]
Large-scale vision-language pre-trained models are prone to hallucinate non-existent visual objects when generating text.
We show that models achieving better scores on standard metrics could hallucinate objects more frequently.
Surprisingly, we find that patch-based features perform the best and smaller patch resolution yields a non-trivial reduction in object hallucination.
arXiv Detail & Related papers (2022-10-14T10:27:22Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises [7.689542442882423]
We designed a dual-stream vision model inspired by the human brain.
This model features retina-like input layers and includes two streams: one determines the next point of focus (the fixation), while the other interprets the visuals surrounding that fixation.
We evaluated this model against various benchmarks in terms of object recognition, gaze behavior and adversarial robustness.
arXiv Detail & Related papers (2022-06-15T03:44:42Z)
- Peripheral Vision Transformer [52.55309200601883]
We take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition.
We propose incorporating peripheral position encoding into the multi-head self-attention layers so that the network learns, from training data, to partition the visual field into diverse peripheral regions.
We evaluate the proposed network, dubbed PerViT, on the large-scale ImageNet dataset and systematically investigate the inner workings of the model for machine perception.
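The summary gives the idea of peripheral position encoding but not its exact form; the sketch below shows one generic way to modulate self-attention by eccentricity in PyTorch, where each head learns an affine map from query-key distance to an attention bias. It captures the flavor of position-dependent attention without claiming to match PerViT's formulation.

```python
import torch
import torch.nn as nn

class PeripheralBiasAttention(nn.Module):
    """Generic sketch (not PerViT's exact method): self-attention whose
    scores receive a learned bias depending on the spatial distance
    between query and key tokens, so heads can specialize by eccentricity."""

    def __init__(self, dim, heads, grid):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Pairwise Euclidean distances between token positions on the grid.
        ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid),
                                indexing="ij")
        pos = torch.stack([ys, xs], dim=-1).reshape(-1, 2).float()
        self.register_buffer("dist", torch.cdist(pos, pos))  # (N, N)
        # Per-head affine map from distance to attention bias.
        self.alpha = nn.Parameter(torch.zeros(heads))
        self.beta = nn.Parameter(torch.zeros(heads))

    def forward(self, x):  # x: (B, N, dim) with N == grid * grid
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.heads, -1).transpose(1, 2)  # (B, H, N, d)
        k = k.view(B, N, self.heads, -1).transpose(1, 2)
        v = v.view(B, N, self.heads, -1).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) * self.scale     # (B, H, N, N)
        bias = (self.alpha.view(1, -1, 1, 1) * self.dist
                + self.beta.view(1, -1, 1, 1))            # distance-dependent
        attn = (attn + bias).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```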
arXiv Detail & Related papers (2022-06-14T12:47:47Z)
- Assessing visual acuity in visual prostheses through a virtual-reality system [7.529227133770206]
Current visual implants still provide very low resolution and limited field of view, thus limiting visual acuity in implanted patients.
We take advantage of virtual-reality software paired with a portable head-mounted display to evaluate the performance of normally sighted participants under simulated prosthetic vision.
Our results showed that, of all conditions tested, a field of view of 20 deg and a resolution of 1000 phosphenes proved best, yielding a visual acuity of 1.3 logMAR.
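As a rough plausibility check on those numbers (with the caveat that relating phosphene spacing to the minimum angle of resolution is an assumption of this note, not the paper's analysis):

```python
import math

fov_deg = 20.0       # field of view reported above
n_phosphenes = 1000  # resolution reported above

per_side = math.sqrt(n_phosphenes)          # ~31.6 phosphenes per side
spacing_arcmin = fov_deg * 60.0 / per_side  # ~38 arcmin between phosphenes

# Crude assumption: the finest resolvable detail is about half the
# phosphene spacing, giving a minimum angle of resolution (MAR) of:
mar_arcmin = spacing_arcmin / 2.0           # ~19 arcmin
logmar = math.log10(mar_arcmin)             # ~1.28, near the reported 1.3
print(f"estimated logMAR = {logmar:.2f}")
```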
arXiv Detail & Related papers (2022-05-20T18:24:15Z)
- Florence: A New Foundation Model for Computer Vision [97.26333007250142]
We introduce a new computer vision foundation model, Florence, to expand representations from coarse (scene) to fine (object).
By incorporating universal visual-language representations from Web-scale image-text data, our Florence model can be easily adapted for various computer vision tasks.
Florence achieves new state-of-the-art results on the majority of 44 representative benchmarks.
arXiv Detail & Related papers (2021-11-22T18:59:55Z)
- Deep Learning-Based Scene Simplification for Bionic Vision [0.0]
We show that object segmentation may better support scene understanding than models based on visual saliency and monocular depth estimation.
This work has the potential to drastically improve the utility of prosthetic vision for people blinded from retinal degenerative diseases.
arXiv Detail & Related papers (2021-01-30T19:35:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.