Lilith: Developmental Modular LLMs with Chemical Signaling
- URL: http://arxiv.org/abs/2507.04575v1
- Date: Sun, 06 Jul 2025 23:18:51 GMT
- Title: Lilith: Developmental Modular LLMs with Chemical Signaling
- Authors: Mohid Farooqi, Alejandro Comas-Leon
- Abstract summary: Current paradigms in Artificial Intelligence rely on layers of feedforward networks which model brain activity at the neuronal level. We propose LILITH, a novel architecture that combines developmental training of modular language models with brain-inspired token-based communication protocols.
- Score: 49.1574468325115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current paradigms in Artificial Intelligence rely on layers of feedforward networks which model brain activity at the neuronal level. We conjecture that expanding to the level of multiple brain regions with chemical signaling may be a productive step toward understanding the emergence of consciousness. We propose LILITH, a novel architecture that combines developmental training of modular language models with brain-inspired token-based communication protocols, mirroring chemical signaling in the brain. Our approach models distinct brain regions as specialized LLM modules including thinking, memory, sensory, and regulatory components that communicate through emergent token-based signaling protocols analogous to neurotransmitter networks. Unlike traditional pre-trained systems, LILITH would employ developmental training where untrained LLM architectures learn through simulated life experiences, developing communication pathways and cognitive abilities through environmental interaction and evolutionary optimization. This framework would enable direct empirical investigation of consciousness emergence using Integrated Information Theory metrics while providing unprecedented insight into inter-module signaling patterns during development. By optimizing for consciousness emergence rather than task performance, LILITH could provide insight into different emergent phenomena at multiple levels of neural correlates, contrasting neuronal-level processing with multi-region coordination dynamics. The goal of this paper is to put the idea forward while recognizing the substantial challenges in implementing such a system.
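To make the proposed signaling scheme concrete, the following is a minimal, runnable sketch of the inter-module token-passing loop. The four module names come from the abstract; the signaling vocabulary, routing table, and stub `Module.step` behavior are illustrative assumptions, since the paper describes the architecture conceptually rather than as an implementation.

```python
# Hypothetical sketch of LILITH-style inter-module token signaling.
import random

SIGNAL_VOCAB = [f"<sig_{i}>" for i in range(16)]  # assumed emergent signaling tokens

class Module:
    """Stand-in for one specialized LLM module (thinking, memory, sensory, regulatory)."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def step(self):
        # A real module would condition an LLM on self.inbox; here we emit a
        # random signaling token just to exercise the message-passing loop.
        self.inbox.clear()
        return random.choice(SIGNAL_VOCAB)

class LilithAgent:
    def __init__(self):
        names = ("sensory", "thinking", "memory", "regulatory")
        self.modules = {n: Module(n) for n in names}
        # Fixed routing for illustration; LILITH would evolve these pathways.
        self.routes = {
            "sensory": ["thinking"],
            "thinking": ["memory", "regulatory"],
            "memory": ["thinking"],
            "regulatory": ["sensory", "thinking"],
        }

    def tick(self, observation):
        self.modules["sensory"].inbox.append(observation)
        emitted = {name: m.step() for name, m in self.modules.items()}
        for src, token in emitted.items():      # broadcast along routes, analogous
            for dst in self.routes[src]:        # to neurotransmitter diffusion
                self.modules[dst].inbox.append(token)
        return emitted

agent = LilithAgent()
print(agent.tick("<obs_light>"))
```

In a full system, each module's step would condition an (initially untrained) LLM on its inbox, and both the routes and the signaling vocabulary would be shaped by environmental interaction and evolutionary optimization rather than fixed by hand.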
Related papers
- Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [58.58177409853298]
Current AI systems, such as large language models, remain disembodied, unable to physically engage with the world. At the core of this challenge lies the concept of Neural Brain, a central intelligence system designed to drive embodied agents with human-like adaptability. This paper introduces a unified framework for the Neural Brain of embodied agents, addressing two fundamental challenges.
arXiv Detail & Related papers (2025-05-12T15:05:34Z)
- Neural Manifolds and Cognitive Consistency: A New Approach to Memory Consolidation in Artificial Systems [0.0]
We introduce a novel mathematical framework that unifies neural population dynamics, hippocampal sharp wave-ripple (SpWR) generation, and cognitive consistency constraints inspired by Heider's theory. Our model leverages low-dimensional manifold representations to capture structured neural drift and incorporates a balance energy function to enforce coherent synaptic interactions. This work paves the way for scalable neuromorphic architectures that bridge neuroscience and artificial intelligence, offering more robust and adaptive learning mechanisms for future intelligent systems.
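For orientation, here is a toy version of a Heider-style balance energy over signed interactions. The paper's actual energy function and manifold machinery are not reproduced here; this sketch only encodes the classical balance-theory idea that a triad is consistent when the product of its three edge signs is positive.

```python
# Hedged sketch of a Heider-style balance energy over signed synaptic interactions.
import numpy as np

def balance_energy(W):
    """Energy that is lower when more triads are balanced.

    W: symmetric (n, n) matrix of signed interaction strengths.
    A triad (i, j, k) contributes -1 if balanced (sign product > 0), +1 otherwise.
    """
    S = np.sign(W)
    n = S.shape[0]
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                e -= S[i, j] * S[j, k] * S[k, i]
    return e

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 6))
W = (W + W.T) / 2                      # symmetrize the interaction matrix
print(balance_energy(W))
```

Minimizing such an energy pushes the signed interaction graph toward balanced, i.e. cognitively consistent, configurations.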
arXiv Detail & Related papers (2025-02-25T18:28:25Z)
- Improving the adaptive and continuous learning capabilities of artificial neural networks: Lessons from multi-neuromodulatory dynamics [43.35924697803789]
Biological organisms excel in acquiring, transferring, and retaining knowledge while adapting to dynamic environments. This study explores how neuromodulation, a fundamental feature of biological learning systems, can help address challenges such as catastrophic forgetting. By integrating multi-scale neuromodulation, we aim to bridge the gap between biological learning and artificial systems.
arXiv Detail & Related papers (2025-01-12T10:10:01Z)
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
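A minimal sketch of what such a coupling could look like is given below: each sub-group of artificial neurons is assigned to the functional brain network whose stimulus responses it correlates with best. The pooled-activation inputs and the argmax assignment are illustrative assumptions, not the paper's published procedure.

```python
# Hedged sketch: assign ANN sub-groups to FBNs by response correlation.
import numpy as np

def couple_subgroups_to_fbns(ann_acts, fbn_resps):
    """Map each ANN sub-group to its most correlated FBN.

    ann_acts:  (n_subgroups, n_stimuli) pooled sub-group activations.
    fbn_resps: (n_fbns, n_stimuli) FBN responses to the same stimuli.
    Returns an index array: sub-group -> best-matching FBN.
    """
    a = (ann_acts - ann_acts.mean(1, keepdims=True)) / ann_acts.std(1, keepdims=True)
    f = (fbn_resps - fbn_resps.mean(1, keepdims=True)) / fbn_resps.std(1, keepdims=True)
    corr = a @ f.T / a.shape[1]        # (n_subgroups, n_fbns) Pearson matrix
    return corr.argmax(axis=1)

rng = np.random.default_rng(1)
print(couple_subgroups_to_fbns(rng.standard_normal((8, 100)),
                               rng.standard_normal((5, 100))))
```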
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration with computational complexity.
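As a toy illustration of those two ingredients, the sketch below simulates a leaky integrate-and-fire population with heterogeneous per-neuron membrane time constants and a scalar neuromodulatory gain on the input drive; all constants are assumptions chosen for demonstration.

```python
# Hedged sketch: heterogeneous LIF neurons with a neuromodulatory input gain.
import numpy as np

rng = np.random.default_rng(2)
n, steps, dt = 100, 200, 1.0
tau = rng.uniform(5.0, 50.0, n)        # heterogeneous membrane time constants (ms)
v_th, v = 1.0, np.zeros(n)
modulator = 1.5                        # assumed neuromodulatory gain on input drive

spikes = np.zeros((steps, n), dtype=bool)
for t in range(steps):
    drive = modulator * rng.uniform(0.0, 0.12, n)
    v += dt / tau * (-v) + drive       # leak toward rest plus modulated input
    fired = v >= v_th
    spikes[t] = fired
    v[fired] = 0.0                     # reset membrane after a spike
print("mean firing rate:", spikes.mean())
```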
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- MBrain: A Multi-channel Self-Supervised Learning Framework for Brain Signals [7.682832730967219]
We study a self-supervised learning framework for brain signals that can be applied to pre-train either SEEG or EEG data.
Inspired by this, we propose MBrain to learn the implicit spatial and temporal correlations between different channels.
Our model outperforms several state-of-the-art time-series SSL and unsupervised models, and can be deployed in clinical practice.
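For intuition, here is a generic contrastive (InfoNCE) objective over paired window embeddings, in the spirit of pulling together correlated views of multi-channel signals; MBrain's actual pretext tasks and encoder differ, so treat this purely as a hedged baseline sketch.

```python
# Hedged sketch: InfoNCE over paired brain-signal window embeddings.
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """anchors, positives: (batch, dim) embeddings of paired signal windows,
    e.g. temporally adjacent windows of the same SEEG/EEG channel."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature             # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))        # matched pairs sit on the diagonal

rng = np.random.default_rng(3)
z = rng.standard_normal((32, 64))
print(info_nce(z, z + 0.05 * rng.standard_normal((32, 64))))
```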
arXiv Detail & Related papers (2023-06-15T09:14:26Z)
- Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks [6.760855795263126]
A Spiking Neural Network (SNN)-based Liquid State Machine (LSM) serves as a suitable architecture to study brain-inspired intelligence.
This paper presents a novel LSM learning model that integrates adaptive structural evolution and multi-scale biological learning rules.
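The sketch below shows the base LSM pattern the paper builds on: a fixed random recurrent "liquid" whose states feed a trained linear readout. It is rate-based rather than spiking for brevity, and it omits the adaptive structural evolution and multi-scale plasticity rules that are the paper's actual contribution.

```python
# Hedged sketch of the basic liquid-state-machine pattern (rate-based toy).
import numpy as np

rng = np.random.default_rng(4)
n_in, n_res, T = 3, 200, 500
W_in = rng.standard_normal((n_res, n_in)) * 0.5
W = rng.standard_normal((n_res, n_res)) / np.sqrt(n_res)   # fixed random liquid

u = rng.standard_normal((T, n_in))             # input stream
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])           # liquid dynamics
    states[t] = x

target = u[:, 0]                               # toy task: recover input channel 0
readout, *_ = np.linalg.lstsq(states, target, rcond=None)  # train readout only
print("train MSE:", np.mean((states @ readout - target) ** 2))
```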
arXiv Detail & Related papers (2023-03-31T07:36:39Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
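As a rough rate-based analogue, the sketch below applies a local contrastive rule: weights are nudged so that layer activity ("goodness") rises for positive samples and falls for negative ones, with no backpropagated error. The paper's spiking formulation differs, so this forward-forward-style toy is only an assumption-laden illustration.

```python
# Hedged sketch of a local contrastive plasticity rule (no backprop).
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((32, 16)) * 0.1
lr = 0.01

def local_update(W, x, positive):
    h = np.maximum(W @ x, 0.0)                 # layer activity (ReLU)
    goodness = float(np.sum(h ** 2))
    sign = 1.0 if positive else -1.0
    # d(goodness)/dW = 2 * h x^T on active units: purely local quantities
    W = W + sign * lr * 2.0 * np.outer(h, x)
    return W, goodness

x_pos = rng.standard_normal(16)
x_neg = rng.standard_normal(16)
for _ in range(10):
    W, g_pos = local_update(W, x_pos, positive=True)
    W, g_neg = local_update(W, x_neg, positive=False)
print("goodness: positive", round(g_pos, 3), "negative", round(g_neg, 3))
```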
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
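To show why no backprop is required, here is a generic predictive-coding-style circuit in the spirit of neural generative coding: a latent state settles by reducing a local prediction error, and the generative weights then learn from the same local quantities. The full ANGC agent couples such circuits to action selection and reward, which this hedged toy omits.

```python
# Hedged sketch of a backprop-free, predictive-coding-style update.
import numpy as np

rng = np.random.default_rng(6)
d_obs, d_lat = 8, 4
W = rng.standard_normal((d_obs, d_lat)) * 0.1   # generative weights
lr_z, lr_w = 0.1, 0.05

def settle_and_learn(W, x, n_steps=20):
    z = np.zeros(d_lat)
    for _ in range(n_steps):                     # inference: settle the latent state
        e = x - W @ z                            # local prediction error
        z += lr_z * (W.T @ e - 0.01 * z)         # error-driven state update
    W += lr_w * np.outer(e, z)                   # local, Hebbian-like weight update
    return W, float(e @ e)

x = rng.standard_normal(d_obs)
for step in range(100):
    W, err = settle_and_learn(W, x)
print("final squared prediction error:", err)
```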
arXiv Detail & Related papers (2021-07-10T19:02:27Z)