Bio-Inspired Neuron Synapse Optimization for Adaptive Learning and Smart Decision-Making
- URL: http://arxiv.org/abs/2511.00042v1
- Date: Tue, 28 Oct 2025 03:58:11 GMT
- Title: Bio-Inspired Neuron Synapse Optimization for Adaptive Learning and Smart Decision-Making
- Authors: Sreeja Singh, Tamal Ghosh
- Abstract summary: The paper introduces Neuron Synapse Optimization (NSO), a new metaheuristic algorithm inspired by neural interactions. The algorithm was benchmarked against popular metaheuristics and the recently published Hippopotamus Optimization Algorithm (HOA). Benchmark results reveal that NSO consistently outperforms other major algorithms in terms of convergence speed, robustness, and scalability.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: Optimization challenges in science, engineering, and real-world applications often involve complex, high-dimensional, and multimodal search spaces. Traditional optimization methods frequently struggle with local optima entrapment, slow convergence, and inefficiency in large-scale environments. This study aims to address these limitations by proposing a novel optimization algorithm inspired by neural mechanisms. Design/methodology/approach: The paper introduces Neuron Synapse Optimization (NSO), a new metaheuristic algorithm inspired by neural interactions. NSO features key innovations such as fitness-based synaptic weight updates to improve search influence, adaptive pruning to minimize computational overhead, and dual guidance from global and local best solutions to balance exploration and exploitation. The algorithm was benchmarked against popular metaheuristics and the recently published Hippopotamus Optimization Algorithm (HOA) using the CEC 2014 test suite, encompassing unimodal, multimodal, and composition function landscapes. Findings: Benchmark results reveal that NSO consistently outperforms HOA and other major algorithms in terms of convergence speed, robustness, and scalability. NSO demonstrates superior adaptability and efficiency, particularly in complex, high-dimensional search spaces. Originality: NSO introduces a unique blend of neural-inspired mechanisms with dynamic resource allocation, setting it apart from existing algorithms. Its innovative design enhances search performance while reducing computational cost. With promising applications in technology, healthcare, data science, and engineering, NSO paves the way for future research into dynamic and multi-objective optimization, machine learning hyperparameter tuning, and real-world engineering design problems.
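The abstract names three NSO mechanisms (fitness-based synaptic weight updates, adaptive pruning, and dual guidance from global and local best solutions) but gives no pseudocode. The sketch below is one hypothetical reading of those ideas on a toy sphere function, not the authors' implementation: the update rule, the pruning schedule, and every parameter here are assumptions.

```python
import numpy as np

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def nso_sketch(obj, dim=10, pop=30, iters=200, prune_every=50, seed=0):
    """Illustrative sketch of the NSO ideas described in the abstract."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))              # candidate "neurons"
    pbest = X.copy()                                # each candidate's local best
    pbest_f = np.array([obj(x) for x in X])
    i = int(np.argmin(pbest_f))
    g, g_f = pbest[i].copy(), pbest_f[i]            # global best
    for t in range(iters):
        f = np.array([obj(x) for x in X])
        # track local and global bests (the "dual guidance" signals)
        better = f < pbest_f
        pbest[better] = X[better]
        pbest_f[better] = f[better]
        i = int(np.argmin(pbest_f))
        if pbest_f[i] < g_f:
            g, g_f = pbest[i].copy(), pbest_f[i]
        # fitness-based "synaptic weight": 0 for the best, 1 for the worst,
        # so worse candidates explore with more noise
        w = (f - f.min()) / (f.max() - f.min() + 1e-12)
        r1 = rng.random((pop, 1))
        r2 = rng.random((pop, 1))
        X = (X + r1 * (g - X) + r2 * (pbest - X)
             + w[:, None] * rng.normal(0.0, 0.1, X.shape))
        # "adaptive pruning": periodically replace the weakest fifth
        if (t + 1) % prune_every == 0:
            worst = np.argsort(f)[-(pop // 5):]
            X[worst] = g + rng.normal(0.0, 0.5, (len(worst), dim))
        X = np.clip(X, -5, 5)
    return g, g_f

best_x, best_f = nso_sketch(sphere)
```

The interplay shown here (attraction to both bests for exploitation, fitness-scaled noise plus pruning for exploration) is the generic structure such a metaheuristic would take; the paper itself should be consulted for the actual equations.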
Related papers
- Optimal Control Theoretic Neural Optimizer: From Backpropagation to Dynamic Programming [29.911180907218053]
This paper takes an algorithmic perspective on the optimization of deep neural networks (DNNs). Our motivating observation is the striking algorithmic resemblance between the Backpropagation algorithm for computing gradients in DNNs and the optimality conditions for dynamical systems. The resulting framework, termed Optimal Control Theoretic Neural Optimizer (OCNOpt), enables rich algorithmic opportunities.
arXiv Detail & Related papers (2025-10-15T23:39:51Z) - SO-PIFRNN: Self-optimization physics-informed Fourier-features randomized neural network for solving partial differential equations [3.769992289689535]
This study proposes a self-optimization physics-informed Fourier-features randomized neural network (SO-PIFRNN) framework. The inner-level optimization determines the output layer weights of the neural network via the least squares method. The experimental results affirm that SO-PIFRNN exhibits superior approximation accuracy and frequency capture capability.
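The "output layer weights via least squares" step of such randomized networks is concrete enough to illustrate in isolation. The snippet below fits a toy 1-D target using fixed random Fourier features and a single least-squares solve for the output weights; the physics-informed PDE residual and the outer-level self-optimization of the actual SO-PIFRNN framework are omitted, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)[:, None]      # 1-D sample points
y = np.sin(4 * np.pi * x[:, 0])              # toy target function

m = 100                                       # number of random features
W = rng.normal(0.0, 10.0, (1, m))             # fixed random frequencies
b = rng.uniform(0.0, 2 * np.pi, m)            # fixed random phases
H = np.cos(x @ W + b)                         # random Fourier feature matrix

# only the output weights are trained, by ordinary least squares
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ beta
err = float(np.max(np.abs(y_hat - y)))
```

Because the hidden layer is fixed and random, "training" reduces to one linear solve, which is what makes the inner level of such frameworks cheap compared with gradient-based fitting.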
arXiv Detail & Related papers (2025-08-07T02:08:34Z) - Edge-Cloud Collaborative Computing on Distributed Intelligence and Model Optimization: A Survey [58.50944604905037]
Edge-cloud collaborative computing (ECCC) has emerged as a pivotal paradigm for addressing the computational demands of modern intelligent applications. Recent advancements in AI, particularly deep learning and large language models (LLMs), have dramatically enhanced the capabilities of these distributed systems. This survey provides a structured tutorial on fundamental architectures, enabling technologies, and emerging applications.
arXiv Detail & Related papers (2025-05-03T13:55:38Z) - A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation. Deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency. This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z) - Enhancing CNN Classification with Lamarckian Memetic Algorithms and Local Search [0.0]
We propose a novel approach integrating a two-stage training technique with population-based optimization algorithms incorporating local search capabilities.
Our experiments demonstrate that the proposed method outperforms state-of-the-art gradient-based techniques.
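As a rough illustration of the Lamarckian memetic idea (population-based search where each offspring is refined by local search and the refined individual re-enters the population), here is a minimal sketch on a toy continuous objective. The paper's two-stage CNN training is not reproduced; every function name and parameter below is an assumption for illustration only.

```python
import random

def fitness(x):
    """Toy objective to minimize: sphere function."""
    return sum(v * v for v in x)

def local_search(x, step=0.1, tries=20):
    """Simple hill climbing; the improved point itself is returned
    (Lamarckian: the refinement is inherited, not just its fitness)."""
    best = list(x)
    for _ in range(tries):
        cand = [v + random.uniform(-step, step) for v in best]
        if fitness(cand) < fitness(best):
            best = cand
    return best

def memetic(dim=5, pop_size=20, gens=50, seed=1):
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)        # one-point crossover
            child = [v + random.gauss(0, 0.3) for v in a[:cut] + b[cut:]]
            children.append(local_search(child))  # Lamarckian refinement
        pop = parents + children
    return min(pop, key=fitness)

best = memetic()
```

The key distinction from a plain genetic algorithm is the `local_search` call on each child: the locally improved individual is written back into the population rather than only influencing selection.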
arXiv Detail & Related papers (2024-10-26T17:31:15Z) - Recent Advances in Scalable Energy-Efficient and Trustworthy Spiking Neural Networks: from Algorithms to Technology [11.479629320025673]
Spiking neural networks (SNNs) have become an attractive alternative to deep neural networks for a broad range of signal processing applications.
We describe advances in algorithmic and optimization innovations to efficiently train and scale low-latency and energy-efficient SNNs.
We discuss the potential path forward for research in building deployable SNN systems.
arXiv Detail & Related papers (2023-12-02T19:47:00Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Approaching Globally Optimal Energy Efficiency in Interference Networks via Machine Learning [22.926877147296594]
This work presents a machine learning approach to optimize the energy efficiency (EE) in a multi-cell wireless network.
Results show that the method achieves an EE close to the optimum computed by branch-and-bound testing.
arXiv Detail & Related papers (2022-11-25T08:36:34Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis of the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2-5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.