Neural Circuit Architectural Priors for Embodied Control
- URL: http://arxiv.org/abs/2201.05242v1
- Date: Thu, 13 Jan 2022 23:22:16 GMT
- Title: Neural Circuit Architectural Priors for Embodied Control
- Authors: Nikhil X. Bhattasali, Anthony M. Zador, Tatiana A. Engel
- Abstract summary: In nature, animals are born with highly structured connectivity in their nervous systems shaped by evolution.
In this work, we ask what advantages biologically inspired network architecture can provide in the context of motor control.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial neural networks for simulated motor control and robotics often
adopt generic architectures like fully connected MLPs. While general, these
tabula rasa architectures rely on large amounts of experience to learn, are not
easily transferable to new bodies, and have internal dynamics that are
difficult to interpret. In nature, animals are born with highly structured
connectivity in their nervous systems shaped by evolution; this innate
circuitry acts synergistically with learning mechanisms to provide inductive
biases that enable most animals to function well soon after birth and improve
abilities efficiently. Convolutional networks inspired by visual circuitry have
encoded useful biases for vision. However, the extent to which ANN
architectures inspired by neural circuitry can yield useful biases for other
domains is unknown. In this work, we ask what advantages biologically inspired
network architecture can provide in the context of motor control. Specifically,
we translate C. elegans circuits for locomotion into an ANN model controlling a
simulated Swimmer agent. On a locomotion task, our architecture achieves good
initial performance and asymptotic performance comparable with MLPs, while
dramatically improving data efficiency and requiring orders of magnitude fewer
parameters. Our architecture is more interpretable and transfers to new body
designs. An ablation analysis shows that principled excitation/inhibition is
crucial for learning, while weight initialization contributes to good initial
performance. Our work demonstrates several advantages of ANN architectures
inspired by systems neuroscience and suggests a path towards modeling more
complex behavior.
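The ablation finding that principled excitation/inhibition is crucial for learning suggests a sign-constrained weight parameterization, in which each presynaptic neuron is fixed as excitatory or inhibitory (Dale's law). A minimal sketch of this idea follows; it is not the authors' implementation, and the function and variable names are hypothetical:

```python
import numpy as np

def sign_constrained(free_weights, sign_mask):
    """Project unconstrained parameters onto fixed synaptic signs.

    free_weights: array of shape (n_post, n_pre), learned freely.
    sign_mask: array of shape (n_pre,), +1 for excitatory and -1 for
    inhibitory presynaptic neurons. All outgoing weights of a neuron
    share its sign, mirroring Dale's law.
    """
    return np.abs(free_weights) * sign_mask

# Hypothetical 4-neuron circuit: neurons 0-1 excitatory, 2-3 inhibitory.
rng = np.random.default_rng(0)
sign_mask = np.array([+1.0, +1.0, -1.0, -1.0])
free_w = rng.standard_normal((4, 4))      # unconstrained parameters
w = sign_constrained(free_w, sign_mask)   # column j has neuron j's sign
```

Because the projection is differentiable almost everywhere, the free weights can still be trained by gradient descent while the effective weights respect the circuit's excitatory/inhibitory structure.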
Related papers
- Neural Circuit Architectural Priors for Quadruped Locomotion [18.992630001752136]
In nature, animals are born with priors in the form of their nervous system's architecture.
This work shows that neural circuits can provide valuable architectural priors for locomotion.
arXiv Detail & Related papers (2024-10-09T17:59:45Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z)
- Brain-inspired Evolutionary Architectures for Spiking Neural Networks [6.607406750195899]
We explore efficient architectural optimization for Spiking Neural Networks (SNNs).
This paper evolves SNN architectures by incorporating brain-inspired local modular structure and global cross-module connectivity.
We introduce an efficient multi-objective evolutionary algorithm based on a few-shot performance predictor, endowing SNNs with high performance, efficiency and low energy consumption.
arXiv Detail & Related papers (2023-09-11T06:39:11Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Biological connectomes as a representation for the architecture of artificial neural networks [0.0]
We translate the motor circuit of the C. elegans nematode into artificial neural networks at varying levels of biophysical realism.
We show that while the C. elegans locomotion circuit provides a powerful inductive bias on locomotion problems, its structure may hinder performance on tasks unrelated to locomotion.
arXiv Detail & Related papers (2022-09-28T20:25:26Z)
- Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers [52.30336730712544]
We introduce a deep reinforcement learning architecture whose purpose is to increase sample efficiency without sacrificing performance.
We propose a visually attentive model that uses transformers to learn a self-attention mechanism on the feature maps of the state representation.
We demonstrate empirically that this architecture improves sample complexity for several Atari environments, while also achieving better performance in some of the games.
arXiv Detail & Related papers (2022-02-01T19:03:03Z)
- A neural net architecture based on principles of neural plasticity and development evolves to effectively catch prey in a simulated environment [2.834895018689047]
A profound challenge for A-Life is to construct agents whose behavior is 'life-like' in a deep way.
We propose an architecture and approach to constructing networks driving artificial agents, using processes analogous to the processes that construct and sculpt the brains of animals.
We think this architecture may be useful for controlling small autonomous robots or drones, because it allows for a rapid response to changes in sensor inputs.
arXiv Detail & Related papers (2022-01-28T05:10:56Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferrable to a new task in a sample efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Efficient Neural Architecture Search with Performance Prediction [0.0]
We use neural architecture search (NAS) to find the best network architecture for the task at hand.
Existing NAS algorithms generally evaluate the fitness of a new architecture by fully training from scratch.
An end-to-end offline performance predictor is proposed to accelerate the evaluation of sampled architectures.
arXiv Detail & Related papers (2021-08-04T05:44:16Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.