Neural Circuit Architectural Priors for Quadruped Locomotion
- URL: http://arxiv.org/abs/2410.07174v1
- Date: Wed, 9 Oct 2024 17:59:45 GMT
- Title: Neural Circuit Architectural Priors for Quadruped Locomotion
- Authors: Nikhil X. Bhattasali, Venkatesh Pattabiraman, Lerrel Pinto, Grace W. Lindsay
- Abstract summary: In nature, animals are born with priors in the form of their nervous system's architecture.
This work shows that neural circuits can provide valuable architectural priors for locomotion.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning-based approaches to quadruped locomotion commonly adopt generic policy architectures like fully connected MLPs. As such architectures contain few inductive biases, it is common in practice to incorporate priors in the form of rewards, training curricula, imitation data, or trajectory generators. In nature, animals are born with priors in the form of their nervous system's architecture, which has been shaped by evolution to confer innate ability and efficient learning. For instance, a horse can walk within hours of birth and can quickly improve with practice. Such architectural priors can also be useful in ANN architectures for AI. In this work, we explore the advantages of a biologically inspired ANN architecture for quadruped locomotion based on neural circuits in the limbs and spinal cord of mammals. Our architecture achieves good initial performance and comparable final performance to MLPs, while using less data and orders of magnitude fewer parameters. Our architecture also exhibits better generalization to task variations, even admitting deployment on a physical robot without standard sim-to-real methods. This work shows that neural circuits can provide valuable architectural priors for locomotion and encourages future work in other sensorimotor skills.
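To make the architectural contrast concrete, here is a minimal parameter-count sketch. This is not the authors' implementation: the observation/action layout, module sizes, and names are illustrative assumptions. It compares a generic fully connected MLP policy against a modular policy built from a small central coordination module plus a tiny controller shared across limbs, loosely in the spirit of spinal and limb circuits.
```python
def mlp_params(sizes):
    """Parameter count of a fully connected MLP with the given layer sizes."""
    return sum(sizes[i] * sizes[i + 1] + sizes[i + 1] for i in range(len(sizes) - 1))

# Hypothetical quadruped observation/action layout: 4 limbs x 3 joints,
# 9 sensor readings per limb (all numbers are assumptions).
obs_per_limb, act_per_limb, n_limbs = 9, 3, 4
obs_dim, act_dim = obs_per_limb * n_limbs, act_per_limb * n_limbs

# Generic baseline: one large MLP over the full observation.
generic = mlp_params([obs_dim, 256, 256, act_dim])

# Circuit-inspired alternative: a small central module (a stand-in for
# spinal coordination) plus one tiny per-limb controller shared by all limbs.
central = mlp_params([obs_dim, 16])
limb = mlp_params([obs_per_limb + 16, 8, act_per_limb])

print(f"generic MLP params:    {generic}")         # ~78k
print(f"modular policy params: {central + limb}")  # <1k, shared across limbs
```
Weight sharing across the four limbs is what produces the orders-of-magnitude parameter gap in this toy version.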
Related papers
- Training Neural Networks with Internal State, Unconstrained Connectivity, and Discrete Activations [66.53734987585244]
True intelligence may require the ability of a machine learning model to manage internal state.
We argue that the most effective algorithms for training such models have yet to be discovered.
We present one attempt to design such a training algorithm, applied to an architecture with binary activations and only a single matrix of weights.
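As a rough illustration of such a model (a minimal sketch; the paper's training algorithm is not reproduced here), internal state, binary activations, and a single weight matrix fit in a few lines:
```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_inputs = 32, 8

# A single weight matrix maps [state, input] to the next state, so any unit
# can connect to any other ("unconstrained connectivity", as an assumption).
W = rng.normal(0.0, 0.1, size=(n_units, n_units + n_inputs))
state = np.zeros(n_units)  # persistent internal state across time steps

def step(x, state):
    """One update: binary (0/1) activations from a thresholded linear map."""
    return (W @ np.concatenate([state, x]) > 0).astype(float)

for _ in range(5):
    state = step(rng.normal(size=n_inputs), state)
print(state.sum(), "units active")
```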
arXiv Detail & Related papers (2023-12-22T01:19:08Z) - CQural: A Novel CNN based Hybrid Architecture for Quantum Continual Machine Learning [0.0]
We show that it is possible to circumvent catastrophic forgetting in continual learning with novel hybrid classical-quantum neural networks.
We also claim that if the model is trained with these explanations, it tends to give better performance and learn specific features that are far from the decision boundary.
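For intuition only, here is a hand-simulated one-qubit sketch of the hybrid classical-quantum idea; it is not the CQural architecture, and every detail below is an assumption. A classical input sets a learned rotation angle, and the measured expectation of the qubit feeds a classical readout.
```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def quantum_feature(x, w):
    """Encode a classical scalar as a rotation of |0>, return <Z>."""
    state = ry(w * x) @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ z @ state)  # expectation value in [-1, 1]

# Hybrid forward pass: quantum feature -> classical linear head.
w_quantum, w_classical, bias = 0.7, 1.5, -0.2
x = 0.9
print(w_classical * quantum_feature(x, w_quantum) + bias)
```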
arXiv Detail & Related papers (2023-05-16T18:19:12Z) - Equivariant Architectures for Learning in Deep Weight Spaces [54.61765488960555]
We present a novel network architecture for learning in deep weight spaces.
It takes as input a concatenation of weights and biases of a pre-trained MLP.
We show how these layers can be implemented using three basic operations.
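The summary does not spell the three operations out, but permutation-equivariant layers are commonly assembled from pointwise linear maps, pooling, and broadcasting; the following DeepSets-style layer is a minimal sketch in that spirit, not the paper's exact construction:
```python
import numpy as np

rng = np.random.default_rng(0)

def equivariant_layer(H, W1, W2):
    """Pointwise linear map plus a pooled (mean) term broadcast to every row.
    Permuting the rows of H permutes the output rows identically."""
    return H @ W1 + H.mean(axis=0, keepdims=True) @ W2

# Assumed layout: each row holds one hidden neuron's incoming weights and
# bias from a pre-trained MLP, so row order is an arbitrary symmetry.
H = rng.normal(size=(5, 4))
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

perm = rng.permutation(5)
assert np.allclose(equivariant_layer(H[perm], W1, W2),
                   equivariant_layer(H, W1, W2)[perm])
print("equivariance check passed")
```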
arXiv Detail & Related papers (2023-01-30T10:50:33Z) - Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
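As background for the event-driven computation such systems target, here is a textbook leaky integrate-and-fire neuron in discrete time; it is generic background, not tied to any specific project in the overview.
```python
import numpy as np

# Euler-discretized leaky integrate-and-fire dynamics (textbook parameters).
tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 1.0
v, spike_times = 0.0, []

for t, current in enumerate(np.full(100, 0.06)):
    v += dt * (-v / tau + current)  # leak toward rest plus input drive
    if v >= v_thresh:
        spike_times.append(t)       # emit a spike ...
        v = v_reset                 # ... and reset the membrane potential

print("spikes at steps:", spike_times)
```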
arXiv Detail & Related papers (2022-05-25T20:16:05Z) - CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z) - Neural Circuit Architectural Priors for Embodied Control [0.0]
In nature, animals are born with highly structured connectivity in their nervous systems shaped by evolution.
In this work, we ask what advantages biologically inspired network architecture can provide in the context of motor control.
arXiv Detail & Related papers (2022-01-13T23:22:16Z) - Conceptual Expansion Neural Architecture Search (CENAS) [1.3464152928754485]
We present an approach called Conceptual Expansion Neural Architecture Search (CENAS).
It combines a sample-efficient, computational-creativity-inspired transfer learning approach with neural architecture search.
It finds models faster than naive architecture search by transferring existing weights to approximate the parameters of the new model.
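A minimal sketch of the weight-transfer idea (an illustrative guess, not CENAS itself): when search mutates a layer's width, the overlapping block of the parent's trained weights seeds the child so it starts near the parent's function rather than from scratch.
```python
import numpy as np

rng = np.random.default_rng(0)

def transfer(parent_W, child_shape):
    """Seed a child layer from a parent layer: copy the overlapping block,
    randomly initialize only the newly added rows/columns."""
    child_W = rng.normal(0.0, 0.01, size=child_shape)
    rows = min(parent_W.shape[0], child_shape[0])
    cols = min(parent_W.shape[1], child_shape[1])
    child_W[:rows, :cols] = parent_W[:rows, :cols]
    return child_W

parent = rng.normal(size=(64, 32))       # trained layer of the parent model
child = transfer(parent, (80, 32))       # mutated architecture: wider layer
print(np.allclose(child[:64], parent))   # True: parent weights carried over
```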
arXiv Detail & Related papers (2021-10-07T02:29:26Z) - Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning [61.37385221479233]
In this work, we take a step toward bridging the gap between model-based reinforcement learning and integrated symbolic-geometric robotic planning.
Neuro-symbolic relational transition models (NSRTs) have both symbolic and neural components, enabling a bilevel planning scheme in which symbolic AI planning in an outer loop guides continuous planning with neural models in an inner loop.
NSRTs can be learned after only tens or hundreds of training episodes, and then used for fast planning in new tasks that require up to 60 actions to reach the goal.
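Schematically, the bilevel scheme can be sketched as below; the interfaces and names are assumptions for illustration, not the paper's planner. An outer symbolic search proposes plan skeletons, and an inner loop queries learned samplers for continuous parameters that make each step feasible.
```python
import random

random.seed(0)

# Toy stand-ins: two candidate symbolic plan skeletons.
skeletons = [["pick(block)", "place(block, table)"],
             ["push(block)", "pick(block)"]]

def neural_sampler(op):
    """Stand-in for a learned sampler over an operator's continuous params."""
    return {"grasp": random.random()}

def feasible(op, params):
    """Stand-in for validating params against a learned transition model."""
    return params["grasp"] > 0.3

def bilevel_plan(skeletons, tries=5):
    for skeleton in skeletons:              # outer loop: symbolic search
        plan = []
        for op in skeleton:                 # inner loop: continuous refinement
            for _ in range(tries):
                params = neural_sampler(op)
                if feasible(op, params):
                    plan.append((op, params))
                    break
            else:
                break                       # refinement failed; try next skeleton
        if len(plan) == len(skeleton):
            return plan
    return None

print(bilevel_plan(skeletons))
```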
arXiv Detail & Related papers (2021-05-28T19:37:18Z) - Self-Constructing Neural Networks Through Random Mutation [0.0]
This paper presents a simple method for learning neural architecture through random mutation.
It demonstrates that 1) neural architecture may be learned during the agent's lifetime, 2) neural architecture may be constructed over a single lifetime without any initial connections or neurons, and 3) architectural modifications enable rapid adaptation to dynamic and novel task scenarios.
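A minimal sketch of lifetime construction by random mutation (illustrative only; the paper's mutation operators may differ): start with no neurons or connections and repeatedly add one or the other at random.
```python
import random

random.seed(0)

class GrowingNet:
    """A network grown from nothing via random structural mutations."""
    def __init__(self):
        self.neurons = []
        self.edges = {}  # (src, dst) -> weight

    def mutate(self):
        if not self.neurons or random.random() < 0.3:
            self.neurons.append(len(self.neurons))       # add a neuron
        elif len(self.neurons) >= 2:
            src, dst = random.sample(self.neurons, 2)
            self.edges[(src, dst)] = random.gauss(0, 1)  # add/replace an edge

net = GrowingNet()
for _ in range(20):  # the agent's "lifetime"; a real agent would keep only
    net.mutate()     # mutations that do not hurt task reward
print(len(net.neurons), "neurons,", len(net.edges), "edges")
```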
arXiv Detail & Related papers (2021-03-29T15:27:38Z) - A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
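A minimal sketch of the graph-convolution step such a predictor might use (standard GCN propagation, not the paper's exact assessor): an architecture is encoded as a graph of operations, node features are mixed through the normalized adjacency, and a readout maps the result to a predicted score.
```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, H, W):
    """One GCN layer: symmetrically normalized adjacency with self-loops,
    then a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

# Toy architecture graph: 4 operations (nodes), one-hot op-type features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.eye(4)
W = rng.normal(size=(4, 8))

h = gcn_layer(A, H, W).mean(axis=0)                       # mean-pool readout
score = 1.0 / (1.0 + np.exp(-(h @ rng.normal(size=8))))   # untrained head
print("predicted accuracy (untrained):", score)
```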
arXiv Detail & Related papers (2020-05-14T09:02:33Z) - Learning as Reinforcement: Applying Principles of Neuroscience for More General Reinforcement Learning Agents [1.0742675209112622]
We implement an architecture founded on principles of experimental neuroscience by combining computationally efficient abstractions of biological algorithms.
Our approach is inspired by research on spike-timing dependent plasticity, the transition between short and long term memory, and the role of various neurotransmitters in rewarding curiosity.
The Neurons-in-a-Box architecture can learn in a wholly generalizable manner, and demonstrates an efficient way to build and apply representations without explicitly optimizing over a set of criteria or actions.
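One of these abstractions can be stated compactly; below is a textbook spike-timing-dependent plasticity rule, a standard formulation rather than necessarily the exact one used in Neurons-in-a-Box.
```python
import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

for dt in (5.0, 20.0, -5.0, -20.0):
    print(f"dt = {dt:+.0f} ms -> dw = {stdp(dt):+.5f}")
```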
arXiv Detail & Related papers (2020-04-20T04:06:21Z)