Visual attention information can be traced on cortical response but not
on the retina: evidence from electrophysiological mouse data using natural
images as stimuli
- URL: http://arxiv.org/abs/2308.00526v1
- Date: Tue, 1 Aug 2023 13:09:48 GMT
- Authors: Nikos Melanitis and Konstantina Nikita
- Abstract summary: In primary visual cortex (V1), a subset of around $10\%$ of the neurons responds differently to salient versus non-salient visual regions.
It appears that the retina remains naive concerning visual attention; the cortical response gets modulated to interpret visual attention information.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual attention forms the basis of understanding the visual world. In this
work we follow a computational approach to investigate the biological basis of
visual attention. We analyze retinal and cortical electrophysiological data
from mice. The visual stimuli are natural images depicting real-world scenes.
Our results show that in primary visual cortex (V1), a subset of around $10\%$
of the neurons responds differently to salient versus non-salient visual
regions. Visual attention information was not traced in the retinal response.
It appears that the retina remains naive concerning visual attention; the
cortical response gets modulated to interpret visual attention information.
Experimental animal studies may be designed to further explore the biological
basis of visual attention that we traced in this study. In applied and
translational science, our study contributes to the design of improved visual
prosthesis systems: systems that create artificial visual percepts for
visually impaired individuals via electronic implants placed on either the
retina or the cortex.
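As a rough illustration of the analysis the abstract describes, the sketch
below compares per-neuron spike counts between salient and non-salient
stimulus regions and reports the fraction of differentially responding
neurons. The data, the test choice (Mann-Whitney U), and the correction are
illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' pipeline): estimate the fraction of neurons
# whose responses differ between salient and non-salient image regions.
# Spike counts are assumed to be binned per trial; all shapes are hypothetical.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_neurons, n_trials = 200, 80

# Hypothetical spike counts for trials showing salient vs non-salient patches.
salient = rng.poisson(5.0, size=(n_neurons, n_trials))
non_salient = rng.poisson(5.0, size=(n_neurons, n_trials))

alpha = 0.05 / n_neurons  # Bonferroni correction (one choice among many)
differs = [
    mannwhitneyu(salient[i], non_salient[i]).pvalue < alpha
    for i in range(n_neurons)
]
print(f"{100 * np.mean(differs):.1f}% of neurons respond differently")
```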
Related papers
- Deep Learning for Visual Neuroprosthesis [22.59701507351177]
The visual pathway involves complex networks of cells and regions which contribute to the encoding and processing of visual information.
This chapter discusses the importance of visual perception and the challenges associated with understanding how visual information is encoded and represented in the brain.
arXiv Detail & Related papers (2024-01-08T02:53:22Z)
- Unidirectional brain-computer interface: Artificial neural network encoding natural images to fMRI response in the visual cortex [12.1427193917406]
We propose an artificial neural network dubbed VISION to mimic the human brain and show how it can foster neuroscientific inquiries.
VISION successfully predicts human hemodynamic responses as fMRI voxel values to visual inputs with an accuracy exceeding state-of-the-art performance by 45%.
arXiv Detail & Related papers (2023-09-26T15:38:26Z)
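The VISION entry above describes a network that maps natural images to fMRI
voxel responses. Below is a minimal sketch of that kind of image-to-voxel
regression; the architecture, sizes, and loss are illustrative assumptions,
not the published VISION model.

```python
# Illustrative image-to-fMRI-voxel encoder (not the published VISION network).
import torch
import torch.nn as nn

class VoxelEncoder(nn.Module):
    def __init__(self, n_voxels: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.readout = nn.Linear(64 * 4 * 4, n_voxels)  # one output per voxel

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.readout(self.features(images))

model = VoxelEncoder()
images = torch.randn(8, 3, 128, 128)   # hypothetical image batch
voxels = torch.randn(8, 1024)          # hypothetical measured voxel values
loss = nn.functional.mse_loss(model(images), voxels)  # regression objective
```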
- BI-AVAN: Brain-inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate the visual objects in a movie frame the human brain focuses on in an unsupervised manner.
arXiv Detail & Related papers (2022-10-27T22:20:36Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Peripheral Vision Transformer [52.55309200601883]
We take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition.
We propose to incorporate peripheral position encoding into the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data.
We evaluate the proposed network, dubbed PerViT, on the large-scale ImageNet dataset and systematically investigate the inner workings of the model for machine perception.
arXiv Detail & Related papers (2022-06-14T12:47:47Z)
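As a sketch of the peripheral position encoding idea in the PerViT entry
above, the snippet below adds a learned, distance-dependent bias to
self-attention scores so the network can treat near and far positions
differently; the shapes and the bias network are assumptions, not the paper's
exact formulation.

```python
# Illustrative distance-based attention bias (not the exact PerViT method).
import torch
import torch.nn.functional as F

H = W = 8   # hypothetical feature-map size, N = H * W tokens
d = 32      # per-head embedding dimension

# Pairwise Euclidean distances between token positions on the H x W grid.
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
pos = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
dist = torch.cdist(pos, pos)                                     # (N, N)

# A small learned network maps distance to an attention bias, letting the
# model carve the visual field into centre/periphery-like regions.
bias_mlp = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.ReLU(),
                               torch.nn.Linear(16, 1))
bias = bias_mlp(dist.unsqueeze(-1)).squeeze(-1)                  # (N, N)

q = torch.randn(H * W, d)
k = torch.randn(H * W, d)
attn = F.softmax(q @ k.t() / d**0.5 + bias, dim=-1)  # biased attention map
```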
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream of visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
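To make the dimensionality expansion/reduction claim above concrete, here is
a sketch of the participation ratio, one standard measure of the
dimensionality of object representations in a CNN layer or a neural
population; the data and the choice of metric are illustrative assumptions,
not necessarily the paper's.

```python
# Participation ratio: an effective-dimensionality measure of a representation.
import numpy as np

def participation_ratio(responses: np.ndarray) -> float:
    """responses: (n_stimuli, n_units) activation matrix."""
    centered = responses - responses.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    eig = np.clip(eig, 0, None)  # guard against tiny negative eigenvalues
    return eig.sum() ** 2 / (eig**2).sum()

# Hypothetical activations from an early and a late layer; comparing the
# ratio across layers tracks expansion/reduction along the hierarchy.
rng = np.random.default_rng(1)
early = rng.normal(size=(500, 256))
late = early @ rng.normal(size=(256, 256)) * 0.1 + rng.normal(size=(500, 256))
print(participation_ratio(early), participation_ratio(late))
```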
- Assessing visual acuity in visual prostheses through a virtual-reality system [7.529227133770206]
Current visual implants still provide very low resolution and limited field of view, thus limiting visual acuity in implanted patients.
We take advantage of virtual-reality software paired with a portable head-mounted display to evaluate the performance of normally sighted participants under simulated prosthetic vision.
Our results showed that, of all conditions tested, a field of view of 20° and a resolution of 1000 phosphenes performed best, with a visual acuity of 1.3 logMAR.
arXiv Detail & Related papers (2022-05-20T18:24:15Z)
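For context on the acuity figure in the entry above, the conversion from
logMAR to a Snellen fraction follows from the standard definition
logMAR = log10(MAR), where MAR is the minimum angle of resolution in
arcminutes and 20/20 vision corresponds to MAR = 1:

```python
# Worked example: converting 1.3 logMAR to an approximate Snellen fraction.
logmar = 1.3
mar = 10**logmar                # ~20 arcmin
snellen_denominator = 20 * mar  # ~400, i.e. roughly 20/400 vision
print(f"MAR = {mar:.1f} arcmin, Snellen ~ 20/{snellen_denominator:.0f}")
```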
- Brain-inspired algorithms for processing of visual data [5.045960549713147]
We review approaches for image processing and computer vision based on neuroscientific findings about the functions of some neurons in the visual cortex.
We pay particular attention to the mechanisms of inhibition of the responses of some neurons, which provide the visual system with improved stability to changing input stimuli.
arXiv Detail & Related papers (2021-03-02T10:45:38Z)
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms MoCo, a visual-only state-of-the-art method.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)
- VisualEchoes: Spatial Image Representation Learning through Echolocation [97.23789910400387]
Several animal species (e.g., bats, dolphins, and whales) and even visually impaired humans have the remarkable ability to perform echolocation.
We propose a novel interaction-based representation learning framework that learns useful visual features via echolocation.
Our work opens a new path for representation learning for embodied agents, where supervision comes from interacting with the physical world.
arXiv Detail & Related papers (2020-05-04T16:16:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.