Reinforcement learning in densely recurrent biological networks
- URL: http://arxiv.org/abs/2508.09618v1
- Date: Wed, 13 Aug 2025 08:49:59 GMT
- Title: Reinforcement learning in densely recurrent biological networks
- Authors: Miles Walter Churchland, Jordi Garcia-Ojalvo
- Abstract summary: We introduce a hybrid, derivative-free framework that implements reinforcement learning by coupling global evolutionary exploration with local direct search exploitation. The method, termed ENOMAD, is benchmarked on a suite of food-foraging tasks. Two algorithmic variants of the method are introduced, which lead to either small distributed adjustments of many weights, or larger changes on a limited number of weights.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Training highly recurrent networks in continuous action spaces is a technical challenge: gradient-based methods suffer from exploding or vanishing gradients, while purely evolutionary searches converge slowly in high-dimensional weight spaces. We introduce a hybrid, derivative-free optimization framework that implements reinforcement learning by coupling global evolutionary exploration with local direct search exploitation. The method, termed ENOMAD (Evolutionary Nonlinear Optimization with Mesh Adaptive Direct search), is benchmarked on a suite of food-foraging tasks instantiated in the fully mapped neural connectome of the nematode \emph{Caenorhabditis elegans}. Crucially, ENOMAD leverages biologically derived weight priors, letting it refine--rather than rebuild--the organism's native circuitry. Two algorithmic variants of the method are introduced, which lead to either small distributed adjustments of many weights, or larger changes on a limited number of weights. Both variants significantly exceed the performance of the untrained connectome (in what can be interpreted as an example of transfer learning) and of existing training strategies. These findings demonstrate that integrating evolutionary search with nonlinear optimization provides an efficient, biologically grounded strategy for specializing natural recurrent networks towards a specified set of tasks.
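The abstract's core idea, coupling a global evolutionary search with a local direct-search refinement step, can be illustrated with a minimal self-contained sketch. Everything below (the toy fitness function, the single-pass `mesh_local_search` polling, the mutation and mesh-shrink constants) is a hypothetical stand-in for illustration only, not the paper's actual ENOMAD implementation or the full MADS algorithm:

```python
import random

def fitness(weights):
    # Toy objective standing in for the foraging reward:
    # maximize negative squared distance to a target weight vector.
    target = [0.5, -1.0, 2.0]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def mesh_local_search(weights, step):
    """One pass of a simple pattern search: poll +/- step on each
    coordinate and keep any improvement (a crude stand-in for
    mesh adaptive direct search polling)."""
    best, best_f = list(weights), fitness(weights)
    for i in range(len(weights)):
        for delta in (step, -step):
            trial = list(best)
            trial[i] += delta
            f = fitness(trial)
            if f > best_f:
                best, best_f = trial, f
    return best, best_f

def enomad_like(prior, generations=50, pop_size=20, sigma=0.3, seed=0):
    """Global exploration (Gaussian mutation around the current elite)
    coupled with local exploitation (pattern search on a shrinking mesh)."""
    rng = random.Random(seed)
    elite, elite_f = list(prior), fitness(prior)
    step = sigma
    for _ in range(generations):
        # Exploration: sample mutated candidates, keep improvements.
        for _ in range(pop_size):
            cand = [w + rng.gauss(0.0, sigma) for w in elite]
            f = fitness(cand)
            if f > elite_f:
                elite, elite_f = cand, f
        # Exploitation: refine the elite locally, then shrink the mesh.
        elite, elite_f = mesh_local_search(elite, step)
        step *= 0.9
    return elite, elite_f

prior = [0.0, 0.0, 0.0]  # stands in for biologically derived weight priors
w, f = enomad_like(prior)  # f approaches 0, the toy optimum
```

The division of labor mirrors the abstract: the mutation loop supplies broad exploration of weight space, while the pattern search exploits the current best solution without any gradients.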
Related papers
- Renormalization Group Guided Tensor Network Structure Search [58.0378300612202]
Tensor network structure search (TN-SS) aims to automatically discover optimal network topologies and rank robustness for efficient tensor decomposition in high-dimensional data representation.
We propose RGTN (Renormalization Group guided Network search), a physics-inspired framework that transforms TN-SS via multi-scale renormalization group flows.
arXiv Detail & Related papers (2025-12-31T06:31:43Z)
- Towards Guided Descent: Optimization Algorithms for Training Neural Networks At Scale [0.0]
This thesis investigates the evolution of optimization algorithms from classical first-order methods to modern principled higher-order techniques.
The analysis uncovers the limitations of these conventional approaches when confronted with anisotropy that is representative of real-world data.
Next, the interplay between these optimization algorithms and the broader neural network training toolkit emerges as equally essential to empirical success.
arXiv Detail & Related papers (2025-12-20T14:20:46Z) - Adaptive Class Emergence Training: Enhancing Neural Network Stability and Generalization through Progressive Target Evolution [0.0]
We propose a novel training methodology for neural networks in classification problems.
We evolve the target outputs from a null vector to one-hot encoded vectors throughout the training process.
This gradual transition allows the network to adapt more smoothly to the increasing complexity of the classification task.
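The gradual null-to-one-hot transition described above can be sketched in a few lines. The linear, epoch-based interpolation schedule here is an assumption for illustration; the paper's actual schedule may differ:

```python
def progressive_target(one_hot, epoch, total_epochs):
    """Scale the one-hot target by a factor that grows from 0 to 1
    over training, so early targets are near the null vector and
    late targets are the full one-hot encoding (linear schedule
    assumed here for illustration)."""
    alpha = min(1.0, epoch / total_epochs)
    return [alpha * y for y in one_hot]

# Targets evolve from the null vector toward the one-hot vector:
progressive_target([0, 1, 0], 0, 10)   # -> [0.0, 0.0, 0.0]
progressive_target([0, 1, 0], 5, 10)   # -> [0.0, 0.5, 0.0]
progressive_target([0, 1, 0], 10, 10)  # -> [0.0, 1.0, 0.0]
```

At each epoch the loss would be computed against `progressive_target(...)` instead of the fixed one-hot label, which is what lets the network ease into the full classification objective.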
arXiv Detail & Related papers (2024-09-04T03:25:48Z)
- Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that training only the scalar batchnorm parameters, beginning partway into training, matches the performance of training the entire network.
arXiv Detail & Related papers (2024-03-12T07:32:47Z)
- Lottery Tickets in Evolutionary Optimization: On Sparse Backpropagation-Free Trainability [0.0]
We study gradient descent (GD)-based sparse training and evolution strategies (ES).
We find that ES explore diverse and flat local optima and do not preserve linear mode connectivity across sparsity levels and independent runs.
arXiv Detail & Related papers (2023-05-31T15:58:54Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Direct Mutation and Crossover in Genetic Algorithms Applied to Reinforcement Learning Tasks [0.9137554315375919]
This paper will focus on applying neuroevolution using a simple genetic algorithm (GA) to find the weights of a neural network that produce optimally behaving agents.
We present two novel modifications that improve the data efficiency and speed of convergence when compared to the initial implementation.
arXiv Detail & Related papers (2022-01-13T07:19:28Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Epigenetic evolution of deep convolutional models [81.21462458089142]
We build upon a previously proposed neuroevolution framework to evolve deep convolutional models.
We propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer.
The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator.
arXiv Detail & Related papers (2021-04-12T12:45:16Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
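Why a flow constrained to an orthogonal group avoids vanishing and exploding gradients can be seen in a tiny sketch. The 2x2 rotation update below is an illustrative toy, not the paper's ODEtoODE construction:

```python
import math

def rotate_step(W, theta):
    """One discrete update of a flow on O(2): left-multiply by a
    rotation matrix, which preserves orthogonality exactly (a toy
    stand-in for evolving parameters along a matrix flow on O(d))."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s], [s, c]]
    return [[sum(R[i][k] * W[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

W = [[1.0, 0.0], [0.0, 1.0]]  # start at the identity
for _ in range(100):
    W = rotate_step(W, 0.3)

# W^T W stays the identity, so W's singular values remain 1:
# repeated multiplication by W can neither shrink nor blow up a signal.
WtW = [[sum(W[k][i] * W[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
```

The same norm-preservation argument, applied to the Jacobian of the time-evolving parameters, is the intuition behind the stability claim in the summary above.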
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- A Hybrid Method for Training Convolutional Neural Networks [3.172761915061083]
We propose a hybrid method that uses both backpropagation and evolutionary strategies to train Convolutional Neural Networks.
We show that the proposed hybrid method is capable of improving upon regular training in the task of image classification.
arXiv Detail & Related papers (2020-04-15T17:52:48Z)
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.