From Worms to Mice: Homeostasis Maybe All You Need
- URL: http://arxiv.org/abs/2412.20090v1
- Date: Sat, 28 Dec 2024 09:17:09 GMT
- Title: From Worms to Mice: Homeostasis Maybe All You Need
- Authors: Jesus Marco de Lucas
- Abstract summary: We propose that a simple neural XOR motif may provide the basis for a relevant mode of plasticity in neural circuits of living organisms.
This XOR motif simply signals the discrepancy between incoming signals and reference signals.
We examined the prevalence of the XOR motif in the published connectomes of various organisms with increasing complexity.
- Abstract: In this brief and speculative commentary, we explore ideas inspired by neural networks in machine learning, proposing that a simple neural XOR motif, involving both excitatory and inhibitory connections, may provide the basis for a relevant mode of plasticity in neural circuits of living organisms, with homeostasis as the sole guiding principle. This XOR motif simply signals the discrepancy between incoming signals and reference signals, thereby providing a basis for a loss function in learning neural circuits, and at the same time regulating homeostasis by halting the propagation of these incoming signals. The core motif uses a 4:1 ratio of excitatory to inhibitory neurons, and supports broader neural patterns such as the well-known 'winner takes all' (WTA) mechanism. We examined the prevalence of the XOR motif in the published connectomes of various organisms with increasing complexity, and found that it ranges from tens (in C. elegans) to millions (in several Drosophila neuropils) and more than tens of millions (in mouse V1 visual cortex). If validated, our hypothesis identifies two of the three key components in analogy to machine learning models: the architecture and the loss function. And we propose that a relevant type of biological neural plasticity is simply driven by a basic control or regulatory system, which has persisted and adapted despite the increasing complexity of organisms throughout evolution.
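As a loose illustration of the mechanism the abstract describes, the XOR motif's discrepancy signal can be sketched with simple threshold units: an output neuron excited by both the incoming and the reference signal, and an inhibitory interneuron that silences the output when the two agree. This is a minimal sketch under assumed weights and thresholds, not the paper's actual circuit (which uses a 4:1 excitatory-to-inhibitory ratio); the function names and parameters are illustrative.

```python
def step(x, threshold=1.0):
    """Heaviside-style threshold activation: fire (1) iff input reaches threshold."""
    return 1 if x >= threshold else 0

def xor_motif(incoming, reference):
    """Signal the discrepancy between an incoming signal and a reference signal.

    Fires only when exactly one of the two signals is active, i.e. XOR.
    Weights and thresholds here are illustrative assumptions.
    """
    # Inhibitory interneuron: active only when both signals are on.
    inhibitor = step(incoming + reference, threshold=2.0)
    # Output neuron: excited by either signal, but suppressed by the
    # inhibitory interneuron when both signals agree (both on).
    # When both are off, it receives no excitation and stays silent.
    return step(incoming + reference - 2.0 * inhibitor, threshold=1.0)

# The motif fires only on disagreement, so agreement (homeostasis)
# halts propagation of the incoming signal:
for a in (0, 1):
    for b in (0, 1):
        print(f"incoming={a} reference={b} -> discrepancy={xor_motif(a, b)}")
```

In this reading, the motif's output doubles as a per-neuron loss signal: zero when the incoming signal matches the reference, nonzero otherwise.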
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning.
We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms.
We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty, and reasoning.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Implementing engrams from a machine learning perspective: XOR as a basic motif [0.0]
We present our initial ideas based on a basic motif that implements an XOR switch.
We explore how to build a basic biological neuronal structure with learning capacity integrating this XOR motif.
arXiv Detail & Related papers (2024-06-14T11:36:49Z) - Contribute to balance, wire in accordance: Emergence of backpropagation from a simple, bio-plausible neuroplasticity rule [0.0]
We introduce a novel neuroplasticity rule that offers a potential mechanism for implementing BP in the brain.
We demonstrate mathematically that our learning rule precisely replicates BP in layered neural networks without any approximations.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-11-21T13:07:20Z) - Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z) - Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes [0.7255608805275865]
Brain-state-specific neural mechanisms play a crucial role in integrating past and contextual knowledge with the current, incoming flow of evidence.
This work aims to provide a two-compartment spiking neuron model that incorporates features essential for supporting brain-state-specific learning.
arXiv Detail & Related papers (2023-11-10T14:16:46Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2021-10-15T17:55:04Z) - Evolving spiking neuron cellular automata and networks to emulate in vitro neuronal activity [0.0]
We produce spiking neural systems that emulate the patterns of behavior of biological neurons in vitro.
Our models were able to produce a level of network-wide synchrony.
The genomes of the top-performing models indicate the excitability and density of connections in the model play an important role in determining the complexity of the produced activity.
arXiv Detail & Related papers (2020-10-28T23:11:37Z) - The distribution of inhibitory neurons in the C. elegans connectome facilitates self-optimization of coordinated neural activity [78.15296214629433]
The nervous system of the nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size.
A general challenge is to better understand the relationship between neural organization and neural activity at the system level.
We implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron.
arXiv Detail & Related papers (2020-10-28T23:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.