Deep Neural Crossover
- URL: http://arxiv.org/abs/2403.11159v2
- Date: Thu, 18 Jul 2024 20:11:44 GMT
- Title: Deep Neural Crossover
- Authors: Eliad Shem-Tov, Achiya Elyasaf
- Abstract summary: We present a novel multi-parent crossover operator in genetic algorithms (GAs) called ``Deep Neural Crossover'' (DNC).
Unlike conventional GA crossover operators that rely on a random selection of parental genes, DNC leverages the capabilities of deep reinforcement learning (DRL) and an encoder-decoder architecture to select the genes.
DNC is domain-independent and can be easily applied to other problem domains.
- Score: 1.9950682531209156
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel multi-parent crossover operator in genetic algorithms (GAs) called ``Deep Neural Crossover'' (DNC). Unlike conventional GA crossover operators that rely on a random selection of parental genes, DNC leverages the capabilities of deep reinforcement learning (DRL) and an encoder-decoder architecture to select the genes. Specifically, we use DRL to learn a policy for selecting promising genes. The policy is stochastic, to maintain the stochastic nature of GAs, representing a distribution for selecting genes with a higher probability of improving fitness. Our architecture features a recurrent neural network (RNN) to encode the parental genomes into latent memory states, and a decoder RNN that utilizes an attention-based pointing mechanism to generate a distribution over the next selected gene in the offspring. To improve the training time, we present a pre-training approach, wherein the architecture is initially trained on a single problem within a specific domain and then applied to solving other problems of the same domain. We compare DNC to known operators from the literature over two benchmark domains -- bin packing and graph coloring. We compare with both two- and three-parent crossover, outperforming all baselines. DNC is domain-independent and can be easily applied to other problem domains.
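The gene-selection step described in the abstract can be sketched roughly as follows. This is a toy NumPy illustration, not the authors' implementation: the embedding table, the weight matrices, the pointer-style scoring, and the decoder-state update rule are all simplifying assumptions standing in for the paper's RNN encoder-decoder with attention-based pointing. It only shows the core idea: at each offspring position, a learned distribution over the parents' candidate genes is sampled, keeping the operator stochastic.

```python
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def dnc_style_crossover(parents, embed, w_q, w_k, rng):
    """Simplified attention-based multi-parent gene selection.

    parents : (n_parents, genome_len) integer genomes.
    embed   : (n_alleles, d) gene embedding table (assumed).
    At each position, a pointer-style score is computed for the gene
    each parent contributes, and one parent's gene is sampled from
    the resulting distribution.
    """
    n_parents, genome_len = parents.shape
    offspring = np.empty(genome_len, dtype=parents.dtype)
    state = np.zeros(embed.shape[1])  # crude stand-in for a decoder RNN state
    for pos in range(genome_len):
        cand = embed[parents[:, pos]]                 # (n_parents, d) candidates
        scores = (cand @ w_k) @ (w_q @ state + 1.0)   # pointer-style scores
        probs = softmax(scores)                       # distribution over parents
        choice = rng.choice(n_parents, p=probs)       # stochastic selection
        offspring[pos] = parents[choice, pos]
        state = 0.9 * state + 0.1 * cand[choice]      # toy state update
    return offspring
```

In the paper the scoring network is trained with DRL so that genes likely to improve fitness get higher probability; here the weights are untrained, so the sketch only demonstrates the selection mechanics.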
Related papers
- Deep Learning-Based Operators for Evolutionary Algorithms [1.7751300245073598]
We present two novel domain-independent genetic operators that harness the capabilities of deep learning: a crossover operator for genetic algorithms and a mutation operator for genetic programming.
arXiv Detail & Related papers (2024-07-15T07:05:34Z)
- DCNN: Dual Cross-current Neural Networks Realized Using An Interactive Deep Learning Discriminator for Fine-grained Objects [48.65846477275723]
This study proposes novel dual-current neural networks (DCNN) to improve the accuracy of fine-grained image classification.
The main novel design features for constructing a weakly supervised learning backbone model DCNN include (a) extracting heterogeneous data, (b) keeping the feature map resolution unchanged, (c) expanding the receptive field, and (d) fusing global representations and local features.
arXiv Detail & Related papers (2024-05-07T07:51:28Z)
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z)
- Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z)
- Reinforcement Learning for Node Selection in Branch-and-Bound [52.2648997215667]
Current state-of-the-art selectors utilize either hand-crafted ensembles that automatically switch between naive sub-node selectors, or learned node selectors that rely on individual node data.
We propose a novel simulation technique that uses reinforcement learning (RL) while considering the entire tree state, rather than just isolated nodes.
arXiv Detail & Related papers (2023-09-29T19:55:56Z)
- DeepProphet2 -- A Deep Learning Gene Recommendation Engine [0.0]
The paper discusses the potential advantages of gene recommendation performed by artificial intelligence (AI).
A transformer-based model has been trained on a well-curated freely available paper corpus, PubMed.
A set of use cases illustrates the algorithm's potential applications in a real-world setting.
arXiv Detail & Related papers (2022-08-03T08:54:13Z)
- Generative Adversarial Method Based on Neural Tangent Kernels [13.664682865991255]
We propose a new generative algorithm called generative adversarial NTK (GA-NTK).
We conduct extensive experiments on real-world datasets, and the results show that GA-NTK can generate images comparable to those by GANs but is much easier to train under various conditions.
arXiv Detail & Related papers (2022-04-08T14:17:46Z)
- Zero-shot Domain Adaptation of Heterogeneous Graphs via Knowledge Transfer Networks [72.82524864001691]
Heterogeneous graph neural networks (HGNNs) have shown superior performance as powerful representation learning techniques.
There is no direct way to learn using labels rooted at different node types.
In this work, we propose a novel domain adaptation method, Knowledge Transfer Networks for HGNNs (HGNN-KTN).
arXiv Detail & Related papers (2022-03-03T21:00:23Z)
- Top-N: Equivariant set and graph generation without exchangeability [61.24699600833916]
We consider one-shot probabilistic decoders that map a vector-shaped prior to a distribution over sets or graphs.
These functions can be integrated into variational autoencoders (VAE), generative adversarial networks (GAN) or normalizing flows.
Top-N is a deterministic, non-exchangeable set creation mechanism which learns to select the most relevant points from a trainable reference set.
arXiv Detail & Related papers (2021-10-05T14:51:19Z)
- Safe Crossover of Neural Networks Through Neuron Alignment [10.191757341020216]
We propose a two-step safe crossover (SC) operator.
First, the neurons of the parents are functionally aligned by computing how well they correlate, and only then are the parents recombined.
We show that it effectively transmits information from parents to offspring and significantly improves upon naive crossover.
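The align-then-recombine idea in the two steps above can be sketched as follows. The greedy correlation matching and uniform row recombination are simplifying assumptions; the paper's exact alignment and recombination procedures may differ:

```python
import numpy as np


def align_and_crossover(w_a, w_b, acts_a, acts_b, rng):
    """Correlation-based neuron alignment before crossover.

    w_a, w_b       : (n, k) hidden-layer weight rows of the two parents.
    acts_a, acts_b : (samples, n) hidden activations on shared inputs.
    Each neuron of parent B is matched to its most correlated neuron
    in parent A, B's rows are permuted into alignment, and only then
    are the parents recombined (uniformly at random per neuron).
    """
    n = w_a.shape[0]
    # Cross-correlation between every pair of hidden-neuron activations.
    corr = np.corrcoef(acts_a.T, acts_b.T)[:n, n:]
    # Greedy one-to-one matching: highest-correlation pairs first.
    perm = -np.ones(n, dtype=int)
    used = set()
    for i, j in sorted(((i, j) for i in range(n) for j in range(n)),
                       key=lambda p: -corr[p[0], p[1]]):
        if perm[i] < 0 and j not in used:
            perm[i], used = j, used | {j}
    w_b_aligned = w_b[perm]
    mask = rng.random(n) < 0.5                    # uniform crossover per neuron
    return np.where(mask[:, None], w_a, w_b_aligned)
```

Without the alignment step, averaging or swapping weight rows mixes functionally unrelated neurons (networks are invariant to hidden-neuron permutation), which is why naive crossover tends to destroy what both parents learned.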
arXiv Detail & Related papers (2020-03-23T14:50:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.