Colony-Enhanced Recurrent Neural Architecture Search: Collaborative
Ant-Based Optimization
- URL: http://arxiv.org/abs/2401.17480v1
- Date: Tue, 30 Jan 2024 22:27:31 GMT
- Title: Colony-Enhanced Recurrent Neural Architecture Search: Collaborative
Ant-Based Optimization
- Authors: Abdelrahman Elsaid
- Abstract summary: This paper introduces Collaborative Ant-based Neural Topology Search (CANTS-N)
In this innovative approach, ant-inspired agents meticulously construct neural network structures, dynamically adapting within a changing environment.
CANTS-N has the potential to reshape the landscape of Neural Architecture Search (NAS) and Neural Evolution (NE)
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Crafting neural network architectures manually is a formidable challenge
often leading to suboptimal and inefficient structures. The pursuit of the
perfect neural configuration is a complex task, prompting the need for a
metaheuristic approach such as Neural Architecture Search (NAS). Drawing
inspiration from the ingenious mechanisms of nature, this paper introduces
Collaborative Ant-based Neural Topology Search (CANTS-N), pushing the
boundaries of NAS and Neural Evolution (NE). In this innovative approach,
ant-inspired agents meticulously construct neural network structures,
dynamically adapting within a changing environment, much like their natural
counterparts. Guided by Particle Swarm Optimization (PSO), CANTS-N's colonies
optimize architecture searches, achieving remarkable improvements in mean
squared error (MSE) over established methods, including BP-free CANTS, BP
CANTS, and ANTS. Scalable, adaptable, and forward-looking, CANTS-N has the
potential to reshape the landscape of NAS and NE. This paper provides detailed
insights into its methodology, results, and far-reaching implications.
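The abstract states that CANTS-N's colonies are guided by Particle Swarm Optimization. As a rough illustration of the underlying optimizer only (a minimal sketch, not the paper's actual implementation — the objective, parameter values, and function names here are illustrative assumptions), a basic PSO minimizer looks like:

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal PSO minimizer (illustrative sketch; hyperparameters are assumptions)."""
    lo, hi = bounds
    # Initialize particle positions randomly and velocities at zero.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Each particle remembers its personal best; the swarm tracks a global best.
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In CANTS-N, one would expect the "particles" to encode colony-level search parameters rather than a toy vector as here; this sketch only shows the velocity/position update that PSO contributes.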
Related papers
- SpikingJelly: An open-source machine learning infrastructure platform
for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - Brain-inspired Evolutionary Architectures for Spiking Neural Networks [6.607406750195899]
We explore efficient architectural optimization for Spiking Neural Networks (SNNs)
This paper evolves SNN architectures by incorporating brain-inspired local modular structure and global cross-module connectivity.
We introduce an efficient multi-objective evolutionary algorithm based on a few-shot performance predictor, endowing SNNs with high performance, efficiency and low energy consumption.
arXiv Detail & Related papers (2023-09-11T06:39:11Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Backpropagation-Free 4D Continuous Ant-Based Neural Topology Search [51.18089545051242]
This work expands CANTS by adding a fourth dimension to its search space representing potential neural synaptic weights.
The experiments of this study demonstrate that the BP-Free CANTS algorithm exhibits highly competitive performance compared to both CANTS and ANTS.
arXiv Detail & Related papers (2023-05-11T10:49:07Z) - Multi-scale Evolutionary Neural Architecture Search for Deep Spiking
Neural Networks [7.271032282434803]
We propose a Multi-Scale Evolutionary Neural Architecture Search (MSE-NAS) for Spiking Neural Networks (SNNs)
MSE-NAS evolves individual neuron operation, self-organized integration of multiple circuit motifs, and global connectivity across motifs through a brain-inspired indirect evaluation function, Representational Dissimilarity Matrices (RDMs)
The proposed algorithm achieves state-of-the-art (SOTA) performance with shorter simulation steps on static datasets and neuromorphic datasets.
arXiv Detail & Related papers (2023-04-21T05:36:37Z) - Biologically inspired structure learning with reverse knowledge
distillation for spiking neural networks [19.33517163587031]
Spiking neural networks (SNNs) have superb characteristics in sensory information recognition tasks due to their biological plausibility.
The performance of some current spiking-based models is limited by their structures: fully connected or overly deep structures introduce too much redundancy.
This paper proposes an evolutionary-based structure construction method for constructing more reasonable SNNs.
arXiv Detail & Related papers (2023-04-19T08:41:17Z) - A Self-adaptive Neuroevolution Approach to Constructing Deep Neural
Network Architectures Across Different Types [5.429458930060452]
We propose a self-adaptive neuroevolution (SANE) approach to automatically construct various lightweight Deep Neural Network (DNN) architectures for different tasks.
One of the key settings in SANE is the search space defined by cells and organs self-adapted to different DNN types.
SANE is able to self-adaptively adjust evolution exploration and exploitation to improve search efficiency.
arXiv Detail & Related papers (2022-11-27T07:40:25Z) - HiveNAS: Neural Architecture Search using Artificial Bee Colony
Optimization [0.0]
In this study, we evaluate the viability of Artificial Bee Colony optimization for Neural Architecture Search.
Our proposed framework, HiveNAS, outperforms existing state-of-the-art Swarm Intelligence-based NAS frameworks in a fraction of the time.
arXiv Detail & Related papers (2022-11-18T14:11:47Z) - On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z) - Continuous Ant-Based Neural Topology Search [62.200941836913586]
This work introduces a novel, nature-inspired neural architecture search (NAS) algorithm based on ant colony optimization.
The Continuous Ant-based Neural Topology Search (CANTS) is strongly inspired by how ants move in the real world.
arXiv Detail & Related papers (2020-11-21T17:49:44Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs)
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.