Assessing visual acuity in visual prostheses through a virtual-reality
system
- URL: http://arxiv.org/abs/2205.10395v1
- Date: Fri, 20 May 2022 18:24:15 GMT
- Title: Assessing visual acuity in visual prostheses through a virtual-reality
system
- Authors: Melani Sanchez-Garcia, Roberto Morollon-Ruiz, Ruben Martinez-Cantin,
Jose J. Guerrero and Eduardo Fernandez-Jover
- Abstract summary: Current visual implants still provide very low resolution and limited field of view, thus limiting visual acuity in implanted patients.
We take advantage of virtual-reality software paired with a portable head-mounted display to evaluate the performance of normally sighted participants under simulated prosthetic vision.
Our results showed that of all conditions tested, a field of view of 20deg and 1000 phosphenes of resolution proved the best, with a visual acuity of 1.3 logMAR.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current visual implants still provide very low resolution and limited field
of view, thus limiting visual acuity in implanted patients. Developing new
artificial-vision simulation systems that harness recent technological
advances is of utmost priority for the design of future visual devices. In
this work, we take advantage of virtual-reality software paired with a
portable head-mounted display to evaluate the performance of
normally sighted participants under simulated prosthetic vision with variable
field of view and number of pixels. Our simulated prosthetic vision system
allows simple experimentation in order to study the design parameters of future
visual prostheses. Ten normally sighted participants volunteered for a visual
acuity study. Subjects were required to identify the gap orientation of
computer-generated Landolt-C optotypes, as well as different stimuli based on
light perception, temporal resolution, light localization, and motion
perception, as commonly used for visual acuity examination in the sighted.
Visual acuity scores were recorded across
different conditions of number of electrodes and size of field of view. Our
results showed that, of all conditions tested, a field of view of 20° with
1000 phosphenes of resolution yielded the best performance, with a visual
acuity of 1.3 logMAR. Furthermore, performance appears to be correlated with
phosphene density, but shows diminishing returns when the field of view is
less than 20°. The development of new artificial vision simulation systems can be
useful to guide the development of new visual devices and the optimization of
field of view and resolution to provide a helpful and valuable visual aid to
profoundly or totally blind patients.
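The abstract's design space (a fixed field of view sampled by a fixed number of phosphenes) can be sketched numerically. The snippet below is an illustrative approximation, not the authors' renderer: the function names, the square-grid layout, and the cell-averaging scheme are assumptions; it maps a grayscale image to an N-phosphene percept and relates the reported 1.3 logMAR score to the minimum angle of resolution.

```python
import numpy as np

def simulate_phosphenes(image, n_phosphenes=1000, fov_deg=20.0):
    """Approximate an n-phosphene percept over a fixed field of view.

    Hypothetical sketch: each phosphene takes the mean brightness of one
    cell of a square grid laid over a centered crop of the image.
    """
    grid = int(round(np.sqrt(n_phosphenes)))   # ~32x32 grid for 1000 phosphenes
    h, w = image.shape
    # Centered square crop standing in for the simulated field of view
    side = min(h, w)
    y0, x0 = (h - side) // 2, (w - side) // 2
    crop = image[y0:y0 + side, x0:x0 + side]
    # Mean brightness within each grid cell -> phosphene intensity
    cell = side // grid
    crop = crop[:cell * grid, :cell * grid]
    percept = crop.reshape(grid, cell, grid, cell).mean(axis=(1, 3))
    # Angular spacing per phosphene (degrees), for relating layout to acuity
    deg_per_phosphene = fov_deg / grid
    return percept, deg_per_phosphene

def logmar_from_arcmin(mar_arcmin):
    # logMAR = log10(minimum angle of resolution in arcminutes);
    # the paper's best condition (20 deg FOV, 1000 phosphenes) reached
    # 1.3 logMAR, i.e. a MAR of 10**1.3 ~= 20 arcmin.
    return float(np.log10(mar_arcmin))
```

Under these assumptions, the 20° / 1000-phosphene condition gives a phosphene spacing of 20/32 = 0.625° (37.5 arcmin), which makes the diminishing returns below 20° plausible: shrinking the field of view at fixed phosphene count packs the same samples into a smaller angle without adding information.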
Related papers
- Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs
We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach.
Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations.
We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes.
arXiv Detail & Related papers (2024-06-24T17:59:42Z) - Simulation of a Vision Correction Display System
This paper focuses on simulating a Vision Correction Display (VCD) to enhance the visual experience of individuals with various visual impairments.
With these simulations we can see potential improvements in visual acuity and comfort.
arXiv Detail & Related papers (2024-04-12T04:45:51Z) - Visual attention information can be traced on cortical response but not
on the retina: evidence from electrophysiological mouse data using natural
images as stimuli
In primary visual cortex (V1), a subset of around 10% of the neurons responds differently to salient versus non-salient visual regions.
It appears that the retina remains naive concerning visual attention; cortical response gets to interpret visual attention information.
arXiv Detail & Related papers (2023-08-01T13:09:48Z) - On Human Visual Contrast Sensitivity and Machine Vision Robustness: A
Comparative Study
How color differences affect machine vision has not been well explored.
Our work tries to bridge this gap between the human color vision aspect of visual recognition and that of the machine.
We devise a new framework in two dimensions to perform extensive analyses on the effect of color contrast and corrupted images.
arXiv Detail & Related papers (2022-12-16T18:51:41Z) - Adapting Brain-Like Neural Networks for Modeling Cortical Visual
Prostheses
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z) - A Deep Learning Approach for the Segmentation of Electroencephalography
Data in Eye Tracking Applications
We introduce DETRtime, a novel framework for time-series segmentation of EEG data.
Our end-to-end deep learning-based framework brings advances in Computer Vision to the forefront.
Our model generalizes well in the task of EEG sleep stage segmentation.
arXiv Detail & Related papers (2022-06-17T10:17:24Z) - Peripheral Vision Transformer
We take a biologically inspired approach and explore to model peripheral vision in deep neural networks for visual recognition.
We propose to incorporate peripheral position encoding to the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data.
We evaluate the proposed network, dubbed PerViT, on the large-scale ImageNet dataset and systematically investigate the inner workings of the model for machine perception.
arXiv Detail & Related papers (2022-06-14T12:47:47Z) - Behind the Machine's Gaze: Biologically Constrained Neural Networks
Exhibit Human-like Visual Attention
We propose the Neural Visual Attention (NeVA) algorithm to generate visual scanpaths in a top-down manner.
We show that the proposed method outperforms state-of-the-art unsupervised human attention models in terms of similarity to human scanpaths.
arXiv Detail & Related papers (2022-04-19T18:57:47Z) - Deep Learning-Based Scene Simplification for Bionic Vision
We show that object segmentation may better support scene understanding than models based on visual saliency and monocular depth estimation.
This work has the potential to drastically improve the utility of prosthetic vision for people blinded from retinal degenerative diseases.
arXiv Detail & Related papers (2021-01-30T19:35:33Z) - Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
We introduce a new representation that models the dynamic scene as a time-variant continuous function of appearance, geometry, and 3D scene motion.
Our representation is optimized through a neural network to fit the observed input views.
We show that our representation can be used for complex dynamic scenes, including thin structures, view-dependent effects, and natural degrees of motion.
arXiv Detail & Related papers (2020-11-26T01:23:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.