Assessing visual acuity in visual prostheses through a virtual-reality
system
- URL: http://arxiv.org/abs/2205.10395v1
- Date: Fri, 20 May 2022 18:24:15 GMT
- Title: Assessing visual acuity in visual prostheses through a virtual-reality
system
- Authors: Melani Sanchez-Garcia, Roberto Morollon-Ruiz, Ruben Martinez-Cantin,
Jose J. Guerrero and Eduardo Fernandez-Jover
- Abstract summary: Current visual implants still provide very low resolution and limited field of view, thus limiting visual acuity in implanted patients.
We take advantage of virtual-reality software paired with a portable head-mounted display to evaluate the performance of normally sighted participants under simulated prosthetic vision.
Our results showed that of all conditions tested, a field of view of 20° and 1000 phosphenes of resolution proved the best, with a visual acuity of 1.3 logMAR.
- Score: 7.529227133770206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current visual implants still provide very low resolution and a
limited field of view, thus limiting visual acuity in implanted patients.
Developing new artificial vision simulation systems that harness recent
technological advances is a top priority for the design of new visual
devices. In this work, we pair virtual-reality software with a portable
head-mounted display to evaluate the performance of normally sighted
participants under simulated prosthetic vision with a variable field of view
and number of pixels. Our simulated prosthetic vision system allows simple
experimentation for studying the design parameters of future visual
prostheses. Ten normally sighted participants volunteered for a visual acuity
study. Subjects were required to identify the gap orientation of
computer-generated Landolt-C optotypes, as well as stimuli based on light
perception, time resolution, light localization and motion perception, as
commonly used for visual acuity examination in the sighted. Visual acuity
scores were recorded across conditions varying the number of electrodes and
the size of the field of view. Our results showed that, of all conditions
tested, a field of view of 20° with a resolution of 1000 phosphenes proved
best, yielding a visual acuity of 1.3 logMAR. Furthermore, performance
appears to be correlated with phosphene density, but shows diminishing
returns when the field of view is smaller than 20°. Artificial vision
simulation systems such as this one can help guide the development of new
visual devices and the optimization of field of view and resolution,
providing a helpful and valuable visual aid to profoundly or totally blind
patients.
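
The paper does not include an implementation, but the core of such a
simulator (mapping a camera frame to a square grid of phosphenes rendered
within a restricted field of view) is straightforward to sketch. The snippet
below is a minimal illustration, not the authors' code: the Gaussian
phosphene profile, the 110° headset field of view, and all names such as
`simulate_phosphenes` are our assumptions.

```python
import numpy as np

def simulate_phosphenes(image, n_phosphenes=1000, fov_deg=20.0,
                        hmd_fov_deg=110.0, out_size=480):
    """Render a grayscale image (2D float array in [0, 1]) as a square
    grid of Gaussian phosphenes. n_phosphenes and fov_deg mirror the two
    design parameters varied in the paper; everything else is assumed."""
    grid = int(round(np.sqrt(n_phosphenes)))  # ~32x32 grid for 1000 phosphenes
    h, w = image.shape                        # assumes h, w >= grid

    # Each phosphene's brightness is the mean intensity of its image patch.
    ys = np.linspace(0, h, grid + 1).astype(int)
    xs = np.linspace(0, w, grid + 1).astype(int)
    levels = np.array([[image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                        for j in range(grid)] for i in range(grid)])

    # The simulated FOV covers fov_deg / hmd_fov_deg of the display, so
    # shrinking the FOV packs the same phosphenes into a smaller patch.
    canvas = np.zeros((out_size, out_size))
    span = out_size * fov_deg / hmd_fov_deg   # display pixels across the FOV
    offset = (out_size - span) / 2.0          # center the FOV on the display
    pitch = span / grid                       # spacing between phosphenes
    sigma = pitch / 3.0                       # assumed Gaussian blob width
    yy, xx = np.mgrid[0:out_size, 0:out_size]
    for i in range(grid):                     # unoptimized, for clarity
        for j in range(grid):
            cy = offset + (i + 0.5) * pitch
            cx = offset + (j + 0.5) * pitch
            blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                          / (2.0 * sigma ** 2))
            canvas += levels[i, j] * blob
    return np.clip(canvas, 0.0, 1.0)
```

Sweeping n_phosphenes and fov_deg (e.g. simulate_phosphenes(frame,
n_phosphenes=500, fov_deg=30.0), with example values of our choosing)
reproduces the kind of parameter grid the study evaluates.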
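
For context on the headline result: logMAR is the base-10 logarithm of the
minimum angle of resolution (MAR) in arcminutes, i.e. the Landolt-C gap size
at threshold, and 0 logMAR corresponds to a 1-arcmin gap (Snellen 20/20). The
reported 1.3 logMAR therefore corresponds to a gap of roughly 10^1.3 ≈ 20
arcmin, about a Snellen 20/400 equivalent. A quick check of the arithmetic
(standard definitions, not code from the paper):

```python
import math

def logmar_from_gap(gap_arcmin: float) -> float:
    """logMAR = log10 of the resolved gap size in arcminutes."""
    return math.log10(gap_arcmin)

def snellen_denominator(logmar: float) -> float:
    """Snellen 20/x equivalent: 0 logMAR (a 1-arcmin gap) is 20/20."""
    return 20.0 * 10.0 ** logmar

print(logmar_from_gap(20.0))      # ~1.30: a 20-arcmin gap is 1.3 logMAR
print(snellen_denominator(1.3))   # ~399: roughly Snellen 20/400
```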
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z)
- Low-Light Enhancement Effect on Classification and Detection: An Empirical Study [48.6762437869172]
We evaluate the impact of Low-Light Image Enhancement (LLIE) methods on high-level vision tasks.
Our findings suggest a disconnect between image enhancement for human visual perception and for machine analysis.
This insight is crucial for the development of LLIE techniques that align with the needs of both human and machine vision.
arXiv Detail & Related papers (2024-09-22T14:21:31Z)
- Explore the Hallucination on Low-level Perception for MLLMs [83.12180878559295]
We aim to define and evaluate the self-awareness of MLLMs in low-level visual perception and understanding tasks.
We present QL-Bench, a benchmark setting designed to simulate human responses to low-level vision.
We demonstrate that while some models exhibit robust low-level visual capabilities, their self-awareness remains relatively underdeveloped.
arXiv Detail & Related papers (2024-09-15T14:38:29Z)
- Simulation of a Vision Correction Display System [0.0]
This paper focuses on simulating a Vision Correction Display (VCD) to enhance the visual experience of individuals with various visual impairments.
With these simulations, we can observe potential improvements in visual acuity and comfort.
arXiv Detail & Related papers (2024-04-12T04:45:51Z)
- Visual attention information can be traced on cortical response but not on the retina: evidence from electrophysiological mouse data using natural images as stimuli [0.0]
In primary visual cortex (V1), a subset of around 10% of the neurons responds differently to salient versus non-salient visual regions.
The retina appears to remain naive with respect to visual attention; it is the cortical response that interprets visual attention information.
arXiv Detail & Related papers (2023-08-01T13:09:48Z)
- ColorSense: A Study on Color Vision in Machine Visual Recognition [57.916512479603064]
We collect 110,000 non-trivial human annotations of foreground and background color labels from visual recognition benchmarks.
We validate the use of our datasets by demonstrating that the level of color discrimination has a dominating effect on the performance of machine perception models.
Our findings suggest that object recognition tasks such as classification and localization are susceptible to color vision bias.
arXiv Detail & Related papers (2022-12-16T18:51:41Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Peripheral Vision Transformer [52.55309200601883]
We take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition.
We propose incorporating peripheral position encoding into the multi-head self-attention layers, letting the network learn from training data to partition the visual field into diverse peripheral regions.
We evaluate the proposed network, dubbed PerViT, on the large-scale ImageNet dataset and systematically investigate the inner workings of the model for machine perception.
arXiv Detail & Related papers (2022-06-14T12:47:47Z)
- Behind the Machine's Gaze: Biologically Constrained Neural Networks Exhibit Human-like Visual Attention [40.878963450471026]
We propose the Neural Visual Attention (NeVA) algorithm to generate visual scanpaths in a top-down manner.
We show that the proposed method outperforms state-of-the-art unsupervised human attention models in terms of similarity to human scanpaths.
arXiv Detail & Related papers (2022-04-19T18:57:47Z)
- Deep Learning-Based Scene Simplification for Bionic Vision [0.0]
We show that object segmentation may better support scene understanding than models based on visual saliency and monocular depth estimation.
This work has the potential to drastically improve the utility of prosthetic vision for people blinded from retinal degenerative diseases.
arXiv Detail & Related papers (2021-01-30T19:35:33Z)