Safe Crossover of Neural Networks Through Neuron Alignment
- URL: http://arxiv.org/abs/2003.10306v3
- Date: Mon, 4 May 2020 07:58:22 GMT
- Title: Safe Crossover of Neural Networks Through Neuron Alignment
- Authors: Thomas Uriot and Dario Izzo
- Abstract summary: We propose a two-step safe crossover (SC) operator.
First, the neurons of the parents are functionally aligned by computing how well they correlate, and only then are the parents recombined.
We show that it effectively transmits information from parents to offspring and significantly improves upon naive crossover.
- Score: 10.191757341020216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main and largely unexplored challenges in evolving the weights of
neural networks using genetic algorithms is to find a sensible crossover
operation between parent networks. Indeed, naive crossover leads to
functionally damaged offspring that do not retain information from the parents.
This is because neural networks are invariant to permutations of neurons,
giving rise to multiple ways of representing the same solution. This is often
referred to as the competing conventions problem. In this paper, we propose a
two-step safe crossover (SC) operator. First, the neurons of the parents are
functionally aligned by computing how well they correlate, and only then are
the parents recombined. We compare two ways of measuring relationships between
neurons: Pairwise Correlation (PwC) and Canonical Correlation Analysis (CCA).
We test our safe crossover operators (SC-PwC and SC-CCA) on MNIST and CIFAR-10
by performing arithmetic crossover on the weights of feed-forward neural
network pairs. We show that it effectively transmits information from parents
to offspring and significantly improves upon naive crossover. Our method is
computationally fast, can serve as a way to explore the fitness landscape more
efficiently and makes safe crossover a potentially promising operator in future
neuroevolution research and applications.
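The two-step procedure described in the abstract can be sketched in code. The following is an illustrative toy implementation only, not the authors' code: it assumes a single hidden tanh layer, measures neuron relationships with pairwise correlation (the PwC variant), aligns parent B's neurons to parent A's with a simple greedy matching, and then performs arithmetic crossover on the aligned weights. All function and variable names are hypothetical.

```python
import numpy as np

def greedy_match(corr):
    """Greedily pair each neuron of parent A (rows) with the most
    strongly correlated unused neuron of parent B (columns)."""
    n = corr.shape[0]
    c = np.abs(corr).astype(float).copy()
    perm = np.full(n, -1, dtype=int)
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(c), c.shape)
        perm[i] = j
        c[i, :] = -np.inf  # row i is matched
        c[:, j] = -np.inf  # column j is taken
    return perm

def safe_crossover(W1_a, W2_a, W1_b, W2_b, X, alpha=0.5):
    """Align parent B's hidden neurons to parent A's, then average.

    W1_*: input-to-hidden weights (in_dim, hidden)
    W2_*: hidden-to-output weights (hidden, out_dim)
    X:    batch of inputs used to measure hidden activations
    """
    # Step 1: functional alignment via activation correlations
    H_a = np.tanh(X @ W1_a)
    H_b = np.tanh(X @ W1_b)
    H_a_c = H_a - H_a.mean(axis=0)
    H_b_c = H_b - H_b.mean(axis=0)
    corr = (H_a_c.T @ H_b_c) / (
        np.linalg.norm(H_a_c, axis=0)[:, None]
        * np.linalg.norm(H_b_c, axis=0)[None, :]
        + 1e-12
    )
    perm = greedy_match(corr)
    # Permuting hidden neurons: columns of W1, rows of W2
    W1_b_aligned = W1_b[:, perm]
    W2_b_aligned = W2_b[perm, :]

    # Step 2: arithmetic crossover on the aligned weights
    W1_child = alpha * W1_a + (1 - alpha) * W1_b_aligned
    W2_child = alpha * W2_a + (1 - alpha) * W2_b_aligned
    return W1_child, W2_child
```

A quick sanity check of the competing-conventions idea: if parent B is just parent A with its hidden neurons permuted, the two networks compute the same function, and safe crossover recovers parent A's weights exactly, whereas naive averaging would not.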
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Neural Lineage [56.34149480207817]
We introduce a novel task known as neural lineage detection, aiming at discovering lineage relationships between parent and child models.
For practical convenience, we introduce a learning-free approach, which integrates an approximation of the finetuning process into the neural network representation similarity metrics.
For the pursuit of accuracy, we introduce a learning-based lineage detector comprising encoders and a transformer detector.
arXiv Detail & Related papers (2024-06-17T01:11:53Z)
- Deep Neural Crossover [1.9950682531209156]
We present a novel multi-parent crossover operator for genetic algorithms (GAs) called "Deep Neural Crossover" (DNC).
Unlike conventional GA crossover operators that rely on a random selection of parental genes, DNC leverages the capabilities of deep reinforcement learning (DRL) and an encoder-decoder architecture to select the genes.
DNC is domain-independent and can be easily applied to other problem domains.
arXiv Detail & Related papers (2024-03-17T09:50:20Z)
- Forward Direct Feedback Alignment for Online Gradient Estimates of Spiking Neural Networks [0.0]
Spiking neural networks can be simulated energy efficiently on neuromorphic hardware platforms.
We propose a novel neuromorphic algorithm, the Spiking Forward Direct Feedback Alignment (SFDFA) algorithm.
arXiv Detail & Related papers (2024-02-06T09:07:12Z)
- Modularity based linkage model for neuroevolution [4.9444321684311925]
Crossover between neural networks is considered disruptive due to the strong functional dependency between connection weights.
We propose a modularity-based linkage model at the weight level to preserve functionally dependent communities.
Our algorithm finds better, more functionally dependent linkage which leads to more successful crossover and better performance.
arXiv Detail & Related papers (2023-06-02T01:32:49Z)
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Learning Interpretable Models for Coupled Networks Under Domain Constraints [8.308385006727702]
We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
arXiv Detail & Related papers (2021-04-19T06:23:31Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Training spiking neural networks using reinforcement learning [0.0]
We propose biologically-plausible alternatives to backpropagation to facilitate the training of spiking neural networks.
We focus on investigating the candidacy of reinforcement learning rules in solving the spatial and temporal credit assignment problems.
We compare and contrast the two approaches by applying them to traditional RL domains such as gridworld, cartpole and mountain car.
arXiv Detail & Related papers (2020-05-12T17:40:36Z)
- SkipGNN: Predicting Molecular Interactions with Skip-Graph Networks [70.64925872964416]
We present SkipGNN, a graph neural network approach for the prediction of molecular interactions.
SkipGNN predicts molecular interactions by not only aggregating information from direct interactions but also from second-order interactions.
We show that SkipGNN achieves superior and robust performance, outperforming existing methods by up to 28.8% of area under the precision-recall curve (PR-AUC).
arXiv Detail & Related papers (2020-04-30T16:55:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.