Searching Search Spaces: Meta-evolving a Geometric Encoding for Neural Networks
- URL: http://arxiv.org/abs/2403.14019v1
- Date: Wed, 20 Mar 2024 22:40:53 GMT
- Title: Searching Search Spaces: Meta-evolving a Geometric Encoding for Neural Networks
- Authors: Tarek Kunze, Paul Templier, Dennis G. Wilson
- Abstract summary: We show that GENE with a learned function can outperform both direct encoding and the hand-crafted distances.
We study how the encoding impacts neural network properties.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In evolutionary policy search, neural networks are usually represented using a direct mapping: each gene encodes one network weight. Indirect encoding methods, where each gene can encode for multiple weights, shorten the genome to reduce the dimensions of the search space and better exploit permutations and symmetries. The Geometric Encoding for Neural network Evolution (GENE) introduced an indirect encoding where the weight of a connection is computed as the (pseudo-)distance between the two linked neurons, leading to a genome size that grows linearly with the number of neurons instead of quadratically as in direct encoding. However, GENE still relies on hand-crafted distance functions with no prior optimization. Here we show that better-performing distance functions can be found for GENE using Cartesian Genetic Programming (CGP) in a meta-evolution approach, hence optimizing the encoding to create a search space that is easier to exploit. We show that GENE with a learned function can outperform both direct encoding and the hand-crafted distances, generalizing to unseen problems, and we study how the encoding impacts neural network properties.
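To make the encoding concrete, the sketch below decodes a GENE-style genome into dense feedforward weights. The coordinate dimension, the L2 distance, and all sizes are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def decode_gene(coords, layer_sizes, dist=None):
    """Decode a GENE-style genome into dense feedforward weights.

    coords: (n_neurons, d) array, one d-dimensional gene vector per neuron.
    layer_sizes: e.g. [4, 8, 2] for a 4-8-2 network.
    dist: (pseudo-)distance mapping two coordinate vectors to a weight.
    """
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)  # hand-crafted example

    weights, start = [], 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        pre = coords[start:start + n_in]
        post = coords[start + n_in:start + n_in + n_out]
        weights.append(np.array([[dist(p, q) for q in post] for p in pre]))
        start += n_in
    return weights

layer_sizes, d = [4, 8, 2], 3                  # d coords per neuron (assumption)
genome = np.random.randn(sum(layer_sizes), d)  # 14 * 3 = 42 genes

# A direct encoding would need 4*8 + 8*2 = 48 weights and grows
# quadratically with layer width; the genome here grows linearly
# with the number of neurons.
print([W.shape for W in decode_gene(genome, layer_sizes)])  # [(4, 8), (8, 2)]
```

In the paper's meta-evolution, the fixed `dist` above is replaced by a program evolved with CGP, so the encoding itself is optimized to produce a search space that is easier to exploit.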
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Recurrent Distance Filtering for Graph Representation Learning [34.761926988427284]
Graph neural networks based on iterative one-hop message passing have been shown to struggle in harnessing the information from distant nodes effectively.
We propose a new architecture to address these challenges.
Our model aggregates other nodes by their shortest distances to the target and uses a linear RNN to encode the sequence of hop representations.
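A minimal sketch of this aggregation, with mean pooling per hop and randomly initialized recurrence weights standing in for the learned components:

```python
import numpy as np
from collections import deque

def hop_aggregate(adj, feats, target, max_hops=3):
    """Pool node features by shortest-path distance to `target`,
    then fold the hop sequence with a linear recurrence."""
    dist = {target: 0}
    queue = deque([target])
    while queue:                              # BFS shortest distances
        u = queue.popleft()
        for v in np.flatnonzero(adj[u]):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)

    d = feats.shape[1]
    rng = np.random.default_rng(0)            # stand-in for learned weights
    W_h = rng.standard_normal((d, d)) * 0.1
    W_x = rng.standard_normal((d, d)) * 0.1
    h = np.zeros(d)
    for k in range(max_hops + 1):
        members = [v for v, dv in dist.items() if dv == k]
        if members:
            x = feats[members].mean(axis=0)   # hop-k representation
            h = W_h @ h + W_x @ x             # linear RNN: no nonlinearity
    return h

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path graph 0-1-2
print(hop_aggregate(adj, np.eye(3), target=0))
```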
arXiv Detail & Related papers (2023-12-03T23:36:16Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
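The underlying symmetry is easy to verify numerically: permuting one hidden layer's neurons, together with the matching rows of the next layer's weights, leaves the network's function unchanged. A self-contained check (sizes arbitrary):

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    return np.tanh(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 8)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 2)), rng.standard_normal(2)
x = rng.standard_normal((5, 4))

perm = rng.permutation(8)                  # reorder the 8 hidden neurons
W1p, b1p, W2p = W1[:, perm], b1[perm], W2[perm, :]

# Different weights, identical function: the symmetry these
# neural functionals are built to respect.
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
```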
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
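A toy illustration of the compression arithmetic, using fixed nearest-codeword assignment in place of the paper's end-to-end learned quantization (codebook size and grid shape are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.standard_normal((128, 128, 16)).astype(np.float32)  # dense grid
K = 64                                                         # codebook size
codebook = rng.standard_normal((K, 16)).astype(np.float32)

# Nearest-codeword assignment; the paper instead learns codebook and
# indices end-to-end as a vector-quantized auto-decoder.
flat = grid.reshape(-1, 16)
idx = ((flat[:, None, :] - codebook[None]) ** 2).sum(-1).argmin(1)

dense_bytes = grid.nbytes                        # 128*128*16*4 = 1 MiB
vq_bytes = codebook.nbytes + idx.size * 6 // 8   # 6-bit indices for K=64
print(dense_bytes / vq_bytes)                    # 64.0: ~64x smaller
```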
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Simple Genetic Operators are Universal Approximators of Probability Distributions (and other Advantages of Expressive Encodings) [27.185579156106694]
This paper characterizes the inherent power of evolutionary algorithms.
It shows that expressive encodings can be a key to understanding and realizing the full power of evolution.
arXiv Detail & Related papers (2022-02-19T20:54:37Z)
- COIN++: Data Agnostic Neural Compression [55.27113889737545]
COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
arXiv Detail & Related papers (2022-01-30T20:12:04Z)
- Rewiring with Positional Encodings for Graph Neural Networks [37.394229290996364]
Several recent works use positional encodings to extend receptive fields of graph neural network layers equipped with attention mechanisms.
We use positional encodings to expand receptive fields to $r$-hop neighborhoods.
We obtain improvements on a variety of models and datasets and reach competitive performance using traditional GNNs or graph Transformers.
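A minimal sketch of such rewiring, assuming the hop distance itself serves as the positional encoding attached to each added edge (a simplification of the paper's scheme):

```python
import numpy as np

def rewire_r_hop(adj, r=2):
    """Connect every node to its full r-hop neighborhood.

    Returns the rewired 0/1 adjacency plus a hop-distance matrix,
    used here as the positional encoding for the new edges."""
    n = adj.shape[0]
    reach = np.eye(n, dtype=bool)                 # nodes already reached
    hop = np.where(np.eye(n, dtype=bool), 0, -1)  # -1: beyond r hops
    frontier = np.eye(n, dtype=bool)
    for k in range(1, r + 1):
        frontier = ((frontier.astype(int) @ adj) > 0) & ~reach
        hop[frontier] = k
        reach |= frontier
    return (hop > 0).astype(int), hop

# Path graph 0-1-2-3: with r=2, node 0 also links to node 2.
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
new_adj, hops = rewire_r_hop(adj, r=2)
print(new_adj[0], hops[0])   # [0 1 1 0] [ 0  1  2 -1]
```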
arXiv Detail & Related papers (2022-01-29T22:26:02Z)
- Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [63.46694853953092]
Swin-Unet is an Unet-like pure Transformer for medical image segmentation.
Tokenized image patches are fed into a Transformer-based U-shaped Encoder-Decoder architecture.
arXiv Detail & Related papers (2021-05-12T09:30:26Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve a 28-78% speed-up over the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Evolutionary NAS with Gene Expression Programming of Cellular Encoding [0.0]
We present Symbolic Linear Generative Encoding (SLGE), a new generative encoding scheme that embeds local graph transformations in the chromosomes of fixed-length linear strings.
Experiments show the effectiveness of SLGE in discovering architectures that improve the performance of CNN architectures.
arXiv Detail & Related papers (2020-05-27T01:19:32Z)
- GeneCAI: Genetic Evolution for Acquiring Compact AI [36.04715576228068]
Deep Neural Networks (DNNs) are evolving towards more complex architectures to achieve higher inference accuracy.
Model compression techniques can be leveraged to efficiently deploy such compute-intensive architectures on resource-limited mobile devices.
This paper introduces GeneCAI, a novel optimization method that automatically learns how to tune per-layer compression hyperparameters.
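As a hedged illustration of the idea, here is a plain genetic algorithm over per-layer compression ratios; the fitness function is a stand-in, since GeneCAI's actual objective couples measured accuracy with hardware cost:

```python
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS, POP, GENS = 6, 20, 30

def fitness(ratios):
    # Stand-in objective: reward overall compression, penalize
    # aggressive pruning of early layers (proxy for accuracy loss).
    penalty = np.sum(ratios * np.linspace(1.0, 0.2, N_LAYERS))
    return ratios.mean() - 0.5 * penalty

pop = rng.uniform(0.0, 0.9, size=(POP, N_LAYERS))    # per-layer prune ratios
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]    # truncation selection
    cut = rng.integers(1, N_LAYERS, size=POP // 2)   # one-point crossover
    kids = np.array([np.concatenate([a[:c], b[c:]])
                     for a, b, c in zip(parents, parents[::-1], cut)])
    kids += rng.normal(0, 0.05, kids.shape)          # Gaussian mutation
    pop = np.clip(np.vstack([parents, kids]), 0.0, 0.95)

print(pop[np.argmax([fitness(ind) for ind in pop])])  # best ratios found
```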
arXiv Detail & Related papers (2020-04-08T20:56:37Z)