Using Neural Networks and Diversifying Differential Evolution for
Dynamic Optimisation
- URL: http://arxiv.org/abs/2008.04002v1
- Date: Mon, 10 Aug 2020 10:07:43 GMT
- Title: Using Neural Networks and Diversifying Differential Evolution for
Dynamic Optimisation
- Authors: Maryam Hasani Shoreh, Renato Hermoza Aragonés, Frank Neumann
- Abstract summary: We investigate whether neural networks are competitive and whether integrating them can improve the results.
The results show that the significance of the improvement when integrating the neural network and diversity mechanisms depends on the type and the frequency of changes.
- Score: 11.228244128564512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic optimisation occurs in a variety of real-world problems. To tackle
these problems, evolutionary algorithms have been extensively used due to their
effectiveness and minimum design effort. However, for dynamic problems, extra
mechanisms are required on top of standard evolutionary algorithms. Among them,
diversity mechanisms have proven to be competitive in handling dynamism, and
recently, the use of neural networks has become popular for this purpose.
Considering the complexity of using neural networks in the process compared to
simple diversity mechanisms, we investigate whether they are competitive and
whether integrating them can improve the results. However, for a fair
comparison, we need to consider the same time budget for each algorithm. Thus,
instead of the usual number of fitness evaluations as the measure for the
available time between changes, we use wall-clock time. The results show that the
significance of the improvement when integrating the neural network and
diversity mechanisms depends on the type and the frequency of changes.
Moreover, we observe that for differential evolution, maintaining proper diversity
in the population when using neural networks plays a key role in the neural
network's ability to improve the results.
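As a concrete illustration of the setup described in the abstract, the following is a minimal sketch (not the authors' implementation) of differential evolution on a dynamic problem: the time available between changes is measured in wall-clock seconds rather than fitness evaluations, a random-immigrants rule serves as a simple diversity mechanism, and an optional predictor hook stands in for a neural network that proposes solutions after a change. The benchmark function, constants, and the predictor interface are illustrative assumptions.

```python
# Sketch: DE for dynamic optimisation with a wall-clock budget between changes,
# a random-immigrants diversity mechanism, and an optional NN-prediction hook.
import time
import numpy as np

rng = np.random.default_rng(0)

def moving_sphere(x, offset):
    """Dynamic sphere benchmark: the optimum sits at `offset` and moves over time."""
    return float(np.sum((x - offset) ** 2))

def de_generation(pop, fitness, f_obj, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation (minimisation), simplified for brevity."""
    n, d = pop.shape
    for i in range(n):
        a, b, c = pop[rng.choice(n, 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True            # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = f_obj(trial)
        if f_trial <= fitness[i]:
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness

def optimise_between_changes(pop, f_obj, budget_seconds, predictor=None, n_immigrants=5):
    """Run DE until the wall-clock budget for the current environment is spent."""
    if predictor is not None:
        # Hypothetical hook: a trained network proposes solutions for the new environment.
        pop[:n_immigrants] = predictor(pop)
    # Simple diversity mechanism: replace a few individuals with random immigrants.
    pop[-n_immigrants:] = rng.uniform(-5, 5, size=(n_immigrants, pop.shape[1]))
    fitness = np.array([f_obj(x) for x in pop])
    start = time.perf_counter()
    while time.perf_counter() - start < budget_seconds:
        pop, fitness = de_generation(pop, fitness, f_obj)
    return pop, fitness

# Example run: a 10-dimensional optimum that drifts after every 0.5 s of wall time.
dim, pop_size = 10, 30
population = rng.uniform(-5, 5, size=(pop_size, dim))
offset = np.zeros(dim)
for change in range(5):
    objective = lambda x, o=offset.copy(): moving_sphere(x, o)
    population, fit = optimise_between_changes(population, objective, budget_seconds=0.5)
    print(f"change {change}: best fitness {fit.min():.4f}")
    offset += rng.normal(0.0, 0.5, size=dim)     # the environment changes
```

With this setup, the number of DE generations completed per environment depends on the cost of the diversity mechanism and of querying the network, which is exactly what a wall-clock budget, unlike a fixed number of fitness evaluations, is meant to capture.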
Related papers
- Advancing Spatio-Temporal Processing in Spiking Neural Networks through Adaptation [6.233189707488025]
In this article, we analyze the dynamical, computational, and learning properties of adaptive LIF neurons and networks thereof.
We show that the superiority of networks of adaptive LIF neurons extends to the prediction and generation of complex time series.
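For reference, a common discrete-time formulation of an adaptive LIF neuron augments the leaky integrator with a spike-triggered threshold-adaptation variable; the sketch below uses generic constants and is not taken from the paper.

```python
# A common discrete-time adaptive LIF (ALIF) update: a leaky membrane potential
# plus a threshold that rises after each spike and decays back. Constants are illustrative.
import numpy as np

def alif_step(v, a, x, alpha=0.9, rho=0.98, beta=1.6, v_th=1.0):
    """One time step for a layer of adaptive LIF neurons."""
    v = alpha * v + x                    # leaky integration of the input current
    threshold = v_th + beta * a          # adaptive (spike-history dependent) threshold
    spikes = (v >= threshold).astype(float)
    v = v - spikes * threshold           # soft reset after a spike
    a = rho * a + (1.0 - rho) * spikes   # adaptation variable tracks recent spiking
    return v, a, spikes

# Drive 5 neurons with random input for 100 steps.
rng = np.random.default_rng(0)
v, a = np.zeros(5), np.zeros(5)
for t in range(100):
    v, a, s = alif_step(v, a, rng.uniform(0.0, 0.5, size=5))
```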
arXiv Detail & Related papers (2024-08-14T12:49:58Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
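The permutation symmetry referred to here is easy to verify directly: permuting the hidden neurons of a two-layer MLP (the rows of the first weight matrix and the columns of the second) leaves the computed function unchanged. A small self-contained check, assuming an arbitrary tanh MLP for illustration:

```python
# Verify that permuting hidden neurons leaves a 2-layer MLP's output unchanged.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

perm = rng.permutation(d_hidden)
x = rng.normal(size=d_in)
out_original = mlp(x, W1, b1, W2, b2)
out_permuted = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
assert np.allclose(out_original, out_permuted)   # same function, permuted weights
```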
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Multiobjective Evolutionary Pruning of Deep Neural Networks with Transfer Learning for improving their Performance and Robustness [15.29595828816055]
This work proposes MO-EvoPruneDeepTL, a multi-objective evolutionary pruning algorithm.
We use Transfer Learning to adapt the last layers of Deep Neural Networks, by replacing them with sparse layers evolved by a genetic algorithm.
Experiments show that our proposal achieves promising results in all the objectives, and direct relations are presented.
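One generic way to picture such an encoding (an assumption for illustration, not necessarily MO-EvoPruneDeepTL's representation) is to treat the final layer's connectivity as a binary mask evolved by a genetic algorithm, with the fitness function standing in for an evaluation of the pruned, transferred network:

```python
# Sketch: evolving a sparse final layer as a binary connectivity mask.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes, pop_size = 512, 10, 20

def random_mask(density=0.1):
    return (rng.random((n_features, n_classes)) < density).astype(np.int8)

def mutate(mask, rate=0.01):
    flips = rng.random(mask.shape) < rate
    return np.where(flips, 1 - mask, mask).astype(np.int8)

def crossover(a, b):
    take_from_a = rng.random(a.shape) < 0.5
    return np.where(take_from_a, a, b)

def fitness(mask):
    # Stub objective (lower is better): the real algorithm would score accuracy,
    # robustness and sparsity of the network using `weights * mask` as its last layer.
    return int(mask.sum())

population = [random_mask() for _ in range(pop_size)]
for generation in range(10):
    population.sort(key=fitness)
    parents = population[: pop_size // 2]
    children = []
    while len(parents) + len(children) < pop_size:
        i, j = rng.choice(len(parents), 2, replace=False)
        children.append(mutate(crossover(parents[i], parents[j])))
    population = parents + children
best = min(population, key=fitness)
```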
arXiv Detail & Related papers (2023-02-20T19:33:38Z)
- Sparse Mutation Decompositions: Fine Tuning Deep Neural Networks with Subspace Evolution [0.0]
A popular subclass of neuroevolutionary methods, called evolution strategies, relies on dense noise perturbations to mutate networks.
We introduce an approach to alleviating this problem by decomposing dense mutations into low-dimensional subspaces.
We conduct the first large scale exploration of neuroevolutionary fine tuning and ensembling on the notoriously difficult ImageNet dataset.
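One generic way to realise such low-dimensional mutations (an illustration, not necessarily the paper's exact decomposition) is to sample noise in a small subspace and project it into the full parameter space through a fixed random basis:

```python
# Generic low-dimensional mutation: sample noise in a k-dimensional subspace
# and map it into the full parameter space via a fixed random basis.
# The basis, k, and sigma are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_params, k = 10_000, 64                               # full parameter count vs. subspace size
basis = rng.normal(size=(n_params, k)) / np.sqrt(k)    # fixed random projection

def mutate(theta, sigma=0.01):
    """Perturb `theta` along the k-dimensional subspace instead of all n_params axes."""
    z = rng.normal(size=k)                 # low-dimensional noise
    return theta + sigma * (basis @ z)     # dense parameters, low-rank perturbation

theta = np.zeros(n_params)
offspring = [mutate(theta) for _ in range(8)]          # a small mutated population
```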
arXiv Detail & Related papers (2023-02-12T01:27:26Z)
- Neuronal diversity can improve machine learning for physics and beyond [0.0]
We construct neural networks from neurons that learn their own activation functions.
Sub-networks instantiate the neurons, which meta-learn especially efficient sets of nonlinear responses.
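A minimal way to picture this, assuming a tiny per-neuron sub-network rather than the paper's actual architecture, is to give each neuron its own small parametric nonlinearity instead of a fixed activation:

```python
# Each "neuron" applies a small learnable sub-network as its activation function
# instead of a fixed nonlinearity such as ReLU. Shapes and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

class LearnedActivation:
    """phi(z) = w2 . tanh(w1 * z + b1) + b2, with its own trainable parameters."""
    def __init__(self, hidden=4):
        self.w1 = rng.normal(size=hidden)
        self.b1 = rng.normal(size=hidden)
        self.w2 = rng.normal(size=hidden)
        self.b2 = 0.0

    def __call__(self, z):
        # z: scalar pre-activation for one neuron
        return float(self.w2 @ np.tanh(self.w1 * z + self.b1) + self.b2)

# A layer in which every neuron learns its own nonlinear response.
pre_activations = rng.normal(size=16)
activations = [LearnedActivation() for _ in pre_activations]
outputs = np.array([phi(z) for phi, z in zip(activations, pre_activations)])
```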
arXiv Detail & Related papers (2022-04-09T01:48:41Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
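A simple, commonly used way to quantify such diversity (an illustrative choice, not necessarily the paper's measure) is the mean pairwise cosine distance between the hidden neurons' incoming weight vectors:

```python
# Illustrative diversity measure: mean pairwise cosine distance between the
# incoming weight vectors of the hidden-layer neurons.
import numpy as np

def hidden_diversity(W):
    """W has shape (n_hidden, n_inputs); returns a value in [0, 2]."""
    normed = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = normed @ normed.T                       # pairwise cosine similarities
    n = W.shape[0]
    off_diag = cos[~np.eye(n, dtype=bool)]
    return float(np.mean(1.0 - off_diag))         # average cosine distance

rng = np.random.default_rng(0)
print(hidden_diversity(rng.normal(size=(32, 10))))                     # random weights: high diversity
print(hidden_diversity(np.tile(rng.normal(size=(1, 10)), (32, 1))))    # collapsed neurons: ~0
```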
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach for the regularization of neural networks by the local Rademacher complexity called LocalDrop.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks [0.0]
Recurrent Neural Networks (RNNs) are frequently used to model aspects of brain function and structure.
In this work, we trained small fully-connected RNNs to perform temporal and flow control tasks with time-varying stimuli.
arXiv Detail & Related papers (2019-06-03T21:56:48Z)