Cartesian Genetic Programming Approach for Designing Convolutional Neural Networks
- URL: http://arxiv.org/abs/2410.00129v2
- Date: Sun, 13 Oct 2024 23:43:33 GMT
- Title: Cartesian Genetic Programming Approach for Designing Convolutional Neural Networks
- Authors: Maciej Krzywda, Szymon Łukasik, Amir H. Gandomi
- Abstract summary: In designing artificial neural networks, a crucial aspect of any innovative approach is proposing a novel neural architecture.
In this work, we use a pure genetic programming approach to design CNNs, employing only one genetic operation, namely mutation.
In preliminary experiments, our methodology yields promising results.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The present study covers an approach to neural architecture search (NAS) using Cartesian genetic programming (CGP) for the design and optimization of Convolutional Neural Networks (CNNs). In designing artificial neural networks, a crucial aspect of any innovative approach is proposing a novel neural architecture. Currently used architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. In this work, we use a pure genetic programming approach to design CNNs, which employs only one genetic operation, i.e., mutation. In preliminary experiments, our methodology yields promising results.
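The search procedure the abstract describes, mutation-only evolution over a CGP genotype, can be illustrated compactly. Below is a minimal Python sketch of a (1+λ) evolutionary strategy, the loop conventionally paired with CGP; the layer function set, grid size, mutation rate, and placeholder fitness are all illustrative assumptions, since the paper's exact settings are not given here.

```python
import random

# Mutation-only (1 + lambda) CGP loop.  The layer-type function set, grid
# size, mutation rate, and fitness below are illustrative assumptions, not
# the paper's actual settings.
FUNCTIONS = ["conv3x3", "conv5x5", "maxpool", "avgpool", "sum", "identity"]
N_NODES = 10       # assumed genotype length (a 10 x 1 CGP grid)
LEVELS_BACK = 5    # how far back a node's inputs may reach
MUTATION_RATE = 0.1
LAMBDA = 4         # offspring per generation


def random_gene(node_index):
    """One CGP gene: [function id, input A, input B].  Index 0 is the
    network input; a node may only connect to strictly earlier indices."""
    lo = max(0, node_index - LEVELS_BACK)
    return [random.randrange(len(FUNCTIONS)),
            random.randint(lo, node_index - 1),
            random.randint(lo, node_index - 1)]


def random_genotype():
    return [random_gene(i) for i in range(1, N_NODES + 1)]


def mutate(genotype):
    """Point mutation -- the single genetic operator the paper relies on."""
    child = [gene[:] for gene in genotype]
    for k, gene in enumerate(child):
        node_index = k + 1
        lo = max(0, node_index - LEVELS_BACK)
        if random.random() < MUTATION_RATE:
            gene[0] = random.randrange(len(FUNCTIONS))
        for j in (1, 2):
            if random.random() < MUTATION_RATE:
                gene[j] = random.randint(lo, node_index - 1)
    return child


def fitness(genotype):
    """Placeholder: in the actual method this would decode the genotype into
    a CNN, train it, and return validation accuracy.  Here we simply reward
    convolutional nodes so the sketch runs end to end."""
    return sum(1 for g in genotype if FUNCTIONS[g[0]].startswith("conv"))


def evolve(generations=50):
    parent = random_genotype()
    best = fitness(parent)
    for _ in range(generations):
        for _ in range(LAMBDA):
            child = mutate(parent)
            score = fitness(child)
            if score >= best:   # accept neutral moves, as is customary in CGP
                parent, best = child, score
    return parent, best


if __name__ == "__main__":
    genotype, score = evolve()
    print("best fitness:", score)
    print("node functions:", [FUNCTIONS[g[0]] for g in genotype])
```

In an actual NAS run, the fitness call would decode the genotype into a CNN, train it briefly, and return validation accuracy; that evaluation, not the mutation loop, dominates the cost of the search.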
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - An automatic selection of optimal recurrent neural network architecture for processes dynamics modelling purposes [0.0]
The research includes four original proposals of algorithms dedicated to neural network architecture search.
The algorithms are based on well-known optimisation techniques such as evolutionary algorithms and gradient descent methods.
The research involved an extended validation study based on data generated from a mathematical model of the fast processes occurring in a pressurised water nuclear reactor.
arXiv Detail & Related papers (2023-09-25T11:06:35Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - CP-CNN: Core-Periphery Principle Guided Convolutional Neural Network [9.015666133509857]
We implement the core-periphery principle in the design of network wiring patterns and the sparsification of the convolution operation.
Our work contributes to the growing field of brain-inspired AI by incorporating insights from the human brain into the design of neural networks.
arXiv Detail & Related papers (2023-03-27T03:59:43Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm is able to automatically look for a better initialization with negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z) - Evolving Deep Neural Networks for Collaborative Filtering [3.302151868255641]
Collaborative Filtering (CF) is widely used in recommender systems to model user-item interactions.
We introduce the genetic algorithm into the process of designing Deep Neural Networks (DNNs).
arXiv Detail & Related papers (2021-11-15T13:57:31Z) - Neural Architecture Search based on Cartesian Genetic Programming Coding Method [6.519170476143571]
We propose an evolutionary approach to NAS based on CGP, called CGPNAS, to solve the sentence classification task.
The experimental results show that the searched architectures are comparable with the performance of human-designed architectures.
arXiv Detail & Related papers (2021-03-12T09:51:03Z) - Differentiable Neural Architecture Learning for Efficient Neural Network Design [31.23038136038325]
We introduce a novel architecture parameterisation based on a scaled sigmoid function.
We then propose a general Differentiable Neural Architecture Learning (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks.
arXiv Detail & Related papers (2021-03-03T02:03:08Z) - A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z) - Alpha Discovery Neural Network based on Prior Knowledge [55.65102700986668]
Genetic programming (GP) is the state of the art in automated financial feature construction.
This paper proposes Alpha Discovery Neural Network (ADNN), a tailored neural network structure which can automatically construct diversified financial technical indicators.
arXiv Detail & Related papers (2019-12-26T03:10:17Z)