Vis-CRF, A Classical Receptive Field Model for VISION
- URL: http://arxiv.org/abs/2011.08363v1
- Date: Tue, 17 Nov 2020 01:52:33 GMT
- Title: Vis-CRF, A Classical Receptive Field Model for VISION
- Authors: Nasim Nematzadeh, David MW Powers, Trent Lewis
- Abstract summary: The output of our retinal stage model, named Vis-CRF, is presented here for a sample natural image.
The final tilt percept arises from multiple scale processing of Difference of Gaussians (DoG) and the perceptual interaction of foreground and background elements.
- Score: 3.2013172123155615
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Over the last decade, a variety of new neurophysiological experiments have
led to new insights as to how, when and where retinal processing takes place,
and the nature of the retinal representation encoding sent to the cortex for
further processing. Based on these neurobiological discoveries, in our previous
work, we provided computer simulation evidence to suggest that geometrical
illusions are explained, in part, by the interaction of multiscale visual
processing performed in the retina. The output of our retinal stage model,
named Vis-CRF, is presented here for a sample natural image and for several
types of Tilt Illusion, in which the final tilt percept arises from multiple
scale processing of Difference of Gaussians (DoG) and the perceptual
interaction of foreground and background elements (Nematzadeh and Powers, 2019;
Nematzadeh, 2018; Nematzadeh, Powers and Lewis, 2017; Nematzadeh, Lewis and
Powers, 2015).
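The retinal-stage operation the abstract describes, multiscale Difference of Gaussians (DoG) filtering with center-surround receptive fields, can be sketched as follows. This is a minimal illustration of the general technique, not the paper's implementation; the function names, sigma values, and center-to-surround ratio are illustrative assumptions.

```python
# Minimal sketch of multiscale Difference of Gaussians (DoG) filtering,
# the center-surround operation underlying classical receptive field models.
# All parameter values here are illustrative, not those of Vis-CRF.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_center, surround_ratio=2.0):
    """Center-surround response: narrow Gaussian minus wider Gaussian."""
    center = gaussian_filter(image, sigma_center)
    surround = gaussian_filter(image, sigma_center * surround_ratio)
    return center - surround

def multiscale_dog(image, sigmas=(1.0, 2.0, 4.0)):
    """Stack of DoG maps at several receptive-field scales."""
    return np.stack([dog_response(image, s) for s in sigmas])

# A vertical step edge yields opposite-signed responses on either
# side of the boundary, the classic center-surround edge signature.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
maps = multiscale_dog(img)
print(maps.shape)  # (3, 32, 32)
```

Combining the responses across scales (the `sigmas` tuple above) is what lets such a model capture the foreground/background interactions that the abstract links to the tilt percept.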
Related papers
- A Bioplausible Model for the Expanding Hole Illusion: Insights into Retinal Processing and Illusory Motion [1.6574413179773761]
The Expanding Hole Illusion challenges our understanding of how the brain processes visual information.
Recent psychophysical studies reveal that this illusion induces not only a perceptual effect but also physiological responses, such as pupil dilation.
This paper presents a computational model based on Difference of Gaussians (DoG) filtering and a classical receptive field (CRF) implementation to simulate early retinal processing.
arXiv Detail & Related papers (2025-01-15T07:03:44Z)
- Aligning Neuronal Coding of Dynamic Visual Scenes with Foundation Vision Models [2.790870674964473]
We propose Vi-ST, a spatiotemporal convolutional neural network fed with a self-supervised Vision Transformer (ViT).
Our proposed Vi-ST demonstrates a novel modeling framework for neuronal coding of dynamic visual scenes in the brain.
arXiv Detail & Related papers (2024-07-15T14:06:13Z)
- Event-Driven Imaging in Turbid Media: A Confluence of Optoelectronics and Neuromorphic Computation [9.53078750806038]
A new optical-computational method is introduced to unveil images of targets whose visibility is severely obscured by light scattering in dense, turbid media.
The scheme is human vision inspired whereby diffuse photons collected from the turbid medium are first transformed to spike trains by a dynamic vision sensor as in the retina.
Image reconstruction is achieved under conditions of turbidity where an original image is unintelligible to the human eye or a digital video camera.
arXiv Detail & Related papers (2023-09-13T00:38:59Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework Generalizable Model-based Neural Radiance Fields to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which impacts several cognitive functions for humans.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z)
- Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises [7.689542442882423]
We designed a dual-stream vision model inspired by the human brain.
This model features retina-like input layers and includes two streams: one determining the next point of focus (the fixation), while the other interprets the visuals surrounding the fixation.
We evaluated this model against various benchmarks in terms of object recognition, gaze behavior and adversarial robustness.
arXiv Detail & Related papers (2022-06-15T03:44:42Z)
- Peripheral Vision Transformer [52.55309200601883]
We take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition.
We propose to incorporate peripheral position encoding to the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data.
We evaluate the proposed network, dubbed PerViT, on the large-scale ImageNet dataset and systematically investigate the inner workings of the model for machine perception.
arXiv Detail & Related papers (2022-06-14T12:47:47Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models of their functional analogue in the brain, the ventral stream in visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- NeuRegenerate: A Framework for Visualizing Neurodegeneration [10.27276267081559]
We introduce NeuRegenerate, a novel end-to-end framework for the prediction and visualization of changes in neural fiber morphology within a subject.
To predict projections, we present neuReGANerator, a deep-learning network based on a cycle-consistent generative adversarial network (cycleGAN).
We show that neuReGANerator has a reconstruction accuracy of 94% in predicting neuronal structures.
arXiv Detail & Related papers (2022-02-02T16:21:14Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state-of-the-arts on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Fooling the primate brain with minimal, targeted image manipulation [67.78919304747498]
We propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.
Our work shares the same goal with adversarial attack, namely the manipulation of images with minimal, targeted noise that leads ANN models to misclassify the images.
arXiv Detail & Related papers (2020-11-11T08:30:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.