An evolutionary perspective on the design of neuromorphic shape filters
- URL: http://arxiv.org/abs/2008.13229v1
- Date: Sun, 30 Aug 2020 17:53:44 GMT
- Authors: Ernest Greene
- Abstract summary: Cortical systems may be providing advanced image processing, but most likely are using design principles that had been proven effective in simpler systems.
The present article provides a brief overview of retinal and cortical mechanisms for registering shape information.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A substantial amount of time and energy has been invested to develop machine
vision using connectionist (neural network) principles. Most of that work has
been inspired by theories advanced by neuroscientists and behaviorists for how
cortical systems store stimulus information. Those theories call for
information flow through connections among several neuron populations, with the
initial connections being random (or at least non-functional). Then the
strength or location of connections is modified through training trials to
achieve an effective output, such as the ability to identify an object. Those
theories ignored the fact that animals that have no cortex, e.g., fish, can
demonstrate visual skills that outpace the best neural network models. Neural
circuits that allow for immediate effective vision and quick learning have been
preprogrammed by hundreds of millions of years of evolution and the visual
skills are available shortly after hatching. Cortical systems may be providing
advanced image processing, but most likely are using design principles that had
been proven effective in simpler systems. The present article provides a brief
overview of retinal and cortical mechanisms for registering shape information,
with the hope that it might contribute to the design of shape-encoding circuits
that more closely match the mechanisms of biological vision.
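As one concrete illustration of a retina-style shape filter (an illustrative sketch, not taken from this paper), the center-surround receptive field of retinal ganglion cells is commonly modeled as a difference of Gaussians; applied to an image, it responds most strongly at contrast boundaries, where shape information resides:

```python
import numpy as np

def dog_kernel(size=9, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians kernel approximating a center-surround
    retinal ganglion cell receptive field (illustrative parameters)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

def filter_image(img, kernel):
    """Naive 'valid' 2-D cross-correlation, enough to show edge emphasis."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical step edge: the response is (near) zero in flat regions
# and largest in magnitude at the boundary.
img = np.zeros((20, 20))
img[:, 10:] = 1.0
resp = filter_image(img, dog_kernel())
print(resp.shape)  # (12, 12)
```

The on-center/off-surround structure here is the classic retinal motif; an off-center cell is simply the negated kernel.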
Related papers
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables spiking neural networks to learn continually with nearly zero forgetting.
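The principal-subspace extraction via Hebbian learning mentioned in this entry can be illustrated in its simplest single-neuron form with Oja's rule (a sketch of the general idea, not that paper's method): a stabilized Hebbian update drives a linear neuron's weight vector toward the first principal component of its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean 2-D data with most variance along the direction (1, 1).
principal = np.array([1.0, 1.0]) / np.sqrt(2)
minor = np.array([1.0, -1.0]) / np.sqrt(2)
x = (rng.normal(0, 2.0, (5000, 1)) * principal
     + rng.normal(0, 0.3, (5000, 1)) * minor)

# Oja's rule: dw = lr * y * (x - y * w). The -y^2 * w term keeps the
# plain Hebbian update lr * y * x from growing without bound.
w = rng.normal(size=2)
lr = 0.01
for xi in x:
    y = w @ xi
    w += lr * y * (xi - y * w)

# w converges (up to sign) toward the unit-norm principal direction.
cosine = abs(w @ principal) / np.linalg.norm(w)
print(cosine)  # ~1.0
```

Extracting a multi-dimensional principal subspace additionally requires anti-Hebbian interactions between the output units so that they decorrelate, which is the role the lateral connections play in that paper.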
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Goal-Driven Approach to Systems Neuroscience
Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat this dense sensory information to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
arXiv Detail & Related papers (2023-11-05T16:37:53Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Biological connectomes as a representation for the architecture of artificial neural networks
We translate the motor circuit of the C. elegans nematode into artificial neural networks at varying levels of biophysical realism.
We show that while the C. elegans locomotion circuit provides a powerful inductive bias on locomotion problems, its structure may hinder performance on tasks unrelated to locomotion.
arXiv Detail & Related papers (2022-09-28T20:25:26Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Adversarially trained neural representations may already be as robust as corresponding biological neural representations
We develop a method for performing adversarial visual attacks directly on primate brain activity.
We report that the biological neurons that make up visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to existing (robustly trained) artificial neural networks.
arXiv Detail & Related papers (2022-06-19T04:15:29Z)
- Data-driven emergence of convolutional structure in neural networks
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Modeling the Evolution of Retina Neural Network
Retinal circuitry shows many similar structures across a broad array of species.
We design a genetic-algorithm-based method that evolves architectures whose function is similar to that of the biological retina.
We discuss how our framework can support goal-driven search and the continued improvement of neural network models in machine learning.
arXiv Detail & Related papers (2020-11-24T23:57:54Z)
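The evolutionary search described in the last entry can be sketched in miniature (a hypothetical genome encoding and fitness function for illustration, not the authors' actual framework): a genome encodes circuit parameters, and selection plus mutation improves fitness over generations.

```python
import random

random.seed(1)

def fitness(genome):
    """Hypothetical objective: how closely a 2-parameter linear 'circuit'
    matches a target response profile; higher (closer to 0) is better."""
    target = [0.7, -0.3]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, sigma=0.1):
    """Add small Gaussian noise to each parameter of the genome."""
    return [g + random.gauss(0, sigma) for g in genome]

def evolve(pop_size=30, generations=50):
    pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]            # truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # near 0 (small squared error)
```

In a goal-driven version, the fitness function would score how well the evolved circuit reproduces measured retinal responses, so the search is steered toward biologically plausible architectures rather than an arbitrary target.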
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.