Neuromechanical Autoencoders: Learning to Couple Elastic and Neural
Network Nonlinearity
- URL: http://arxiv.org/abs/2302.00032v1
- Date: Tue, 31 Jan 2023 19:04:28 GMT
- Title: Neuromechanical Autoencoders: Learning to Couple Elastic and Neural
Network Nonlinearity
- Authors: Deniz Oktay, Mehran Mirramezani, Eder Medina, Ryan P. Adams
- Abstract summary: We seek to develop machine learning analogs of mechanical
intelligence. We jointly learn the morphology of complex nonlinear elastic
solids along with a deep neural network to control it.
- Score: 15.47367187516723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent biological systems are characterized by their embodiment in a
complex environment and the intimate interplay between their nervous systems
and the nonlinear mechanical properties of their bodies. This coordination, in
which the dynamics of the motor system co-evolved to reduce the computational
burden on the brain, is referred to as ``mechanical intelligence'' or
``morphological computation''. In this work, we seek to develop machine
learning analogs of this process, in which we jointly learn the morphology of
complex nonlinear elastic solids along with a deep neural network to control
it. By using a specialized differentiable simulator of elastic mechanics
coupled to conventional deep learning architectures -- which we refer to as
neuromechanical autoencoders -- we are able to learn to perform morphological
computation via gradient descent. Key to our approach is the use of mechanical
metamaterials -- cellular solids, in particular -- as the morphological
substrate. Just as deep neural networks provide flexible and
massively-parametric function approximators for perceptual and control tasks,
cellular solid metamaterials are promising as a rich and learnable space for
approximating a variety of actuation tasks. In this work we take advantage of
these complementary computational concepts to co-design materials and neural
network controls to achieve nonintuitive mechanical behavior. We demonstrate in
simulation how it is possible to achieve translation, rotation, and shape
matching, as well as a ``digital MNIST'' task. We additionally manufacture and
evaluate one of the designs to verify its real-world behavior.
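As a rough illustration of the co-design idea described in the abstract, the sketch below jointly optimizes a set of morphology (material geometry) parameters and a small neural-network controller by gradient descent through a differentiable forward model, written in JAX. This is not the authors' code: the paper's forward model is a specialized differentiable simulator of cellular-solid elastic mechanics, whereas here `simulate`, the array shapes, and all parameter names are hypothetical stand-ins chosen only to show the joint-optimization pattern.

```python
# Hedged sketch (assumed names and shapes, not the paper's simulator):
# co-design of a "morphology" and a neural controller via gradient descent
# through a differentiable mechanics surrogate.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Small MLP controller: task specification -> actuation signal."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) * 0.1,
                       jnp.zeros(dout)))
    return params

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def simulate(morphology, actuation):
    """Toy stand-in for the differentiable elastic simulator: maps
    geometry parameters plus an actuation to a task-space displacement."""
    return jnp.tanh(morphology @ actuation)

def loss(morphology, ctrl_params, task, target):
    actuation = mlp(ctrl_params, task)            # controller output
    achieved = simulate(morphology, actuation)    # forward mechanics
    return jnp.mean((achieved - target) ** 2)     # task-space error

key = jax.random.PRNGKey(0)
key, mkey = jax.random.split(key)
morphology = jax.random.normal(mkey, (4, 8)) * 0.1  # learnable geometry params
ctrl_params = init_mlp(key, [2, 16, 8])             # learnable controller
task = jnp.array([0.3, -0.7])                       # e.g. a desired translation
target = jnp.array([0.3, -0.7, 0.0, 0.0])

grad_fn = jax.jit(jax.grad(loss, argnums=(0, 1)))   # grads w.r.t. both designs
lr = 1e-2
for step in range(500):
    g_m, g_c = grad_fn(morphology, ctrl_params, task, target)
    morphology = morphology - lr * g_m              # update geometry...
    ctrl_params = [(W - lr * gW, b - lr * gb)       # ...and controller jointly
                   for (W, b), (gW, gb) in zip(ctrl_params, g_c)]
```

The point mirrored here is the design choice named in the abstract: a single task-space loss backpropagates into both the material geometry and the controller, so the mechanics and the network can share the computational burden.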
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Spike-based local synaptic plasticity: A survey of computational models
and neuromorphic circuits [1.8464222520424338]
We review historical, bottom-up, and top-down approaches to modeling synaptic plasticity.
We identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules.
arXiv Detail & Related papers (2022-09-30T15:35:04Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Evolving spiking neuron cellular automata and networks to emulate in
vitro neuronal activity [0.0]
We produce spiking neural systems that emulate the patterns of behavior of biological neurons in vitro.
Our models were able to produce a level of network-wide synchrony.
The genomes of the top-performing models indicate the excitability and density of connections in the model play an important role in determining the complexity of the produced activity.
arXiv Detail & Related papers (2021-10-15T17:55:04Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic
Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is built around Spiking Neural Networks (SNNs), which emulate neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Neural population geometry: An approach for understanding biological and
artificial neural networks [3.4809730725241605]
We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks.
Neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks.
arXiv Detail & Related papers (2021-04-14T18:10:34Z) - Structural plasticity on an accelerated analog neuromorphic hardware
system [0.46180371154032884]
We present a strategy to achieve structural plasticity by constantly rewiring the pre- and postsynaptic partners.
We implemented this algorithm on the analog neuromorphic system BrainScaleS-2.
We evaluated our implementation in a simple supervised learning scenario, showing its ability to optimize the network topology.
arXiv Detail & Related papers (2019-12-27T10:15:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.