ReWaRD: Retinal Waves for Pre-Training Artificial Neural Networks
Mimicking Real Prenatal Development
- URL: http://arxiv.org/abs/2311.17232v1
- Date: Tue, 28 Nov 2023 21:14:05 GMT
- Title: ReWaRD: Retinal Waves for Pre-Training Artificial Neural Networks
Mimicking Real Prenatal Development
- Authors: Benjamin Cappell and Andreas Stoll and Williams Chukwudi Umah and
Bernhard Egger
- Abstract summary: Pre- and post-natal retinal waves are thought to act as a pre-training mechanism for the primate visual system.
We build a computational model that mimics this development mechanism by pre-training different artificial convolutional neural networks.
The resulting features of this biologically plausible pre-training closely match the V1 features of the primate visual system.
- Score: 5.222115919729418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational models trained on a large amount of natural images are the
state-of-the-art to study human vision - usually adult vision. Computational
models of infant vision and its further development are gaining more and more
attention in the community. In this work we target the very beginning of our
visual experience - pre- and post-natal retinal waves, which are thought to act
as a pre-training mechanism for the primate visual system at a very early stage
of development. We see this approach as an instance of biologically plausible,
data-driven inductive bias introduced through pre-training. We build a computational model that
mimics this development mechanism by pre-training different artificial
convolutional neural networks with simulated retinal wave images. The resulting
features of this biologically plausible pre-training closely match the V1
features of the primate visual system. We show that the performance gain by
pre-training with retinal waves is similar to that of a state-of-the-art
pre-training pipeline. Our framework contains the retinal wave generator as
well as a training strategy, which can be a first step in a curriculum-learning
based training diet for various models of development. We release code, data
and trained networks as a basis for future work on visual development and on
curriculum-learning approaches that include prenatal development, supporting
studies of innate vs. learned properties of the primate visual system.
An additional benefit of our pre-trained networks for neuroscience or computer
vision applications is the absence of biases inherited from datasets like
ImageNet.
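The framework described above couples a retinal wave generator with an otherwise ordinary CNN pre-training loop. As a rough illustration of that idea (not the authors' released implementation), the sketch below pre-trains a tiny PyTorch convolutional autoencoder on synthetic wave-like frames; the toy generator, the reconstruction objective, and names such as generate_wave_frame are illustrative assumptions.

```python
# Hypothetical sketch: pre-train a small CNN on simulated retinal-wave-like images.
# The wave generator and the reconstruction objective are illustrative assumptions,
# not the authors' released implementation.
import numpy as np
import torch
import torch.nn as nn


def generate_wave_frame(size=64, n_waves=3, rng=None):
    """Toy stand-in for a retinal wave generator: a few blurred,
    randomly placed circular activity blobs on a dark background."""
    if rng is None:
        rng = np.random.default_rng()
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size), dtype=np.float32)
    for _ in range(n_waves):
        cy, cx = rng.uniform(0, size, 2)
        radius = rng.uniform(5, 15)
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * radius ** 2))
    return np.clip(img, 0.0, 1.0)


class TinyEncoderDecoder(nn.Module):
    """Minimal convolutional autoencoder used here as the pre-training network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


model = TinyEncoderDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
rng = np.random.default_rng(0)

for step in range(100):  # pre-training loop on synthetic wave frames
    batch = np.stack([generate_wave_frame(rng=rng) for _ in range(16)])
    x = torch.from_numpy(batch).unsqueeze(1)  # (B, 1, 64, 64)
    loss = loss_fn(model(x), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After such pre-training, the encoder features (rather than the decoder) would be the part compared against V1 or reused for downstream vision tasks.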
Related papers
- Brain-like representational straightening of natural movies in robust
feedforward neural networks [2.8749107965043286]
Representational straightening refers to a decrease in curvature of visual feature representations of a sequence of frames taken from natural movies.
We show that robustness to noise in the input image can produce representational straightening in feedforward neural networks.
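Curvature of a representation trajectory is commonly measured as the average angle between successive difference vectors of the per-frame representations; a lower average angle means a straighter trajectory. A minimal sketch of such a curvature measure (a generic reading, not necessarily this paper's exact definition):

```python
import numpy as np


def mean_curvature(reps):
    """Average angle (radians) between successive difference vectors of a
    trajectory of representations with shape (n_frames, n_features).
    Smaller values indicate a straighter ('straightened') trajectory."""
    diffs = np.diff(reps, axis=0)
    diffs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    cosines = np.sum(diffs[:-1] * diffs[1:], axis=1)
    return float(np.mean(np.arccos(np.clip(cosines, -1.0, 1.0))))


# Example: a perfectly straight trajectory has zero curvature.
straight = np.outer(np.arange(10), np.ones(128))
print(mean_curvature(straight))  # ~0.0
```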
arXiv Detail & Related papers (2023-08-26T13:04:36Z)
- Neural Foundations of Mental Simulation: Future Prediction of Latent
Representations on Dynamic Scenes [3.2744507958793143]
We combine a goal-driven modeling approach with dense neurophysiological data and human behavioral readouts to impinge on this question.
Specifically, we construct and evaluate several classes of sensory-cognitive networks to predict the future state of rich, ethologically-relevant environments.
We find strong differentiation across these model classes in their ability to predict neural and behavioral data both within and across diverse environments.
arXiv Detail & Related papers (2023-05-19T15:56:06Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual
Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Adversarially trained neural representations may already be as robust as
corresponding biological neural representations [66.73634912993006]
We develop a method for performing adversarial visual attacks directly on primate brain activity.
We report that the biological neurons that make up visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to existing (robustly trained) artificial neural networks.
arXiv Detail & Related papers (2022-06-19T04:15:29Z)
- Peripheral Vision Transformer [52.55309200601883]
We take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition.
We propose to incorporate peripheral position encoding into the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data.
We evaluate the proposed network, dubbed PerViT, on the large-scale ImageNet dataset and systematically investigate the inner workings of the model for machine perception.
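One way to read "peripheral position encoding" is as a learned, position-dependent bias added to the self-attention logits that depends on the distance between query and key locations, so attention can behave differently in central and peripheral regions. The sketch below illustrates only that loose reading and is not the paper's actual parameterization; the module name, grid size, and shapes are assumptions.

```python
import torch
import torch.nn as nn


class PeripheralAttentionBias(nn.Module):
    """Loose sketch: map query-key distance on the token grid to a learned
    additive attention bias, so attention can differ between central and
    peripheral regions (an approximation of peripheral position encoding)."""
    def __init__(self, grid=14, n_heads=4, hidden=16):
        super().__init__()
        ys, xs = torch.meshgrid(torch.arange(grid), torch.arange(grid), indexing="ij")
        coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()
        # pairwise Euclidean distance between token positions, shape (N, N, 1)
        self.register_buffer("dist", torch.cdist(coords, coords).unsqueeze(-1))
        self.mlp = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, n_heads))

    def forward(self, attn_logits):
        # attn_logits: (B, heads, N, N); add a distance-dependent bias per head
        bias = self.mlp(self.dist).permute(2, 0, 1)  # (heads, N, N)
        return attn_logits + bias.unsqueeze(0)


# Usage inside a self-attention layer (assumed 14x14 = 196 tokens, 4 heads):
bias_module = PeripheralAttentionBias()
logits = torch.randn(2, 4, 196, 196)
weights = torch.softmax(bias_module(logits), dim=-1)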
arXiv Detail & Related papers (2022-06-14T12:47:47Z)
- Reinforcement Learning with Action-Free Pre-Training from Videos [95.25074614579646]
We introduce a framework that learns representations useful for understanding the dynamics via generative pre-training on videos.
Our framework significantly improves both final performances and sample-efficiency of vision-based reinforcement learning.
arXiv Detail & Related papers (2022-03-25T19:44:09Z)
- Learning Personal Representations from fMRI by Predicting Neurofeedback
Performance [52.77024349608834]
We present a deep neural network method for learning a personal representation for individuals performing a self-neuromodulation task, guided by functional MRI (fMRI).
The representation is learned by a self-supervised recurrent neural network that predicts the amygdala activity in the next fMRI frame given recent fMRI frames, conditioned on the learned individual representation.
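Read literally, this setup is a recurrent network that consumes recent fMRI-derived features together with a learned per-subject embedding and regresses the next frame's amygdala activity. A rough sketch under those assumptions (all dimensions and names are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn


class NextFrameAmygdalaPredictor(nn.Module):
    """Rough sketch: a GRU over recent fMRI-derived feature vectors,
    conditioned on a learned per-subject embedding, predicts the amygdala
    signal of the next frame. Dimensions are illustrative placeholders."""
    def __init__(self, n_subjects, frame_dim=256, subj_dim=32, hidden=128):
        super().__init__()
        self.subject_embedding = nn.Embedding(n_subjects, subj_dim)
        self.rnn = nn.GRU(frame_dim + subj_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # scalar amygdala activity

    def forward(self, frames, subject_ids):
        # frames: (B, T, frame_dim); subject_ids: (B,)
        subj = self.subject_embedding(subject_ids)                # (B, subj_dim)
        subj = subj.unsqueeze(1).expand(-1, frames.size(1), -1)   # (B, T, subj_dim)
        out, _ = self.rnn(torch.cat([frames, subj], dim=-1))
        return self.head(out[:, -1])                              # (B, 1)


model = NextFrameAmygdalaPredictor(n_subjects=10)
pred = model(torch.randn(4, 8, 256), torch.tensor([0, 1, 2, 3]))
```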
arXiv Detail & Related papers (2021-12-06T10:16:54Z)
- Deep Reinforcement Learning Models Predict Visual Responses in the
Brain: A Preliminary Result [1.0323063834827415]
We use reinforcement learning to train neural network models to play a 3D computer game.
We find that these reinforcement learning models achieve neural response prediction accuracy scores in the early visual areas.
In contrast, the supervised neural network models yield better neural response predictions in the higher visual areas.
arXiv Detail & Related papers (2021-06-18T13:10:06Z)
- An evolutionary perspective on the design of neuromorphic shape filters [0.0]
Cortical systems may provide advanced image processing, but they most likely rely on design principles that were proven effective in simpler systems.
The present article provides a brief overview of retinal and cortical mechanisms for registering shape information.
arXiv Detail & Related papers (2020-08-30T17:53:44Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and
Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture, which combines object segmentation and convolutional neural networks (CNNs).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in the input image.
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
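The core architectural idea, as far as the summary goes, is to run a segmentation model first and append its mask to the image as an extra input channel before classification. A minimal sketch of that channel-stacking step (both networks below are untrained placeholders, not the paper's models):

```python
import torch
import torch.nn as nn

# Minimal sketch of the mask-as-extra-channel idea: a (placeholder) segmentation
# network produces a demarcation-line mask, which is concatenated to the RGB
# fundus image as a fourth channel before the (placeholder) CNN classifier.

segmenter = nn.Sequential(  # stand-in for a trained segmentation model
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1), nn.Sigmoid()
)
classifier = nn.Sequential(  # stand-in for the stage-classification CNN
    nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5)  # placeholder stage count
)

image = torch.rand(1, 3, 224, 224)         # RGB fundus image
mask = segmenter(image)                    # (1, 1, 224, 224) pixel-level mask
stacked = torch.cat([image, mask], dim=1)  # (1, 4, 224, 224): mask as 4th channel
logits = classifier(stacked)               # stage prediction
```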
This list is automatically generated from the titles and abstracts of the papers on this site.