Node Preservation and its Effect on Crossover in Cartesian Genetic Programming
- URL: http://arxiv.org/abs/2511.00634v1
- Date: Sat, 01 Nov 2025 17:26:56 GMT
- Title: Node Preservation and its Effect on Crossover in Cartesian Genetic Programming
- Authors: Mark Kocherovsky, Illya Bakurov, Wolfgang Banzhaf
- Abstract summary: We compare basic crossover methods, namely one-point and uniform, to variants in which nodes are "preserved." We find that node preservation in both mutation and crossover improves search on symbolic regression benchmark problems.
- Score: 2.295470702460389
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While crossover is a critical and often indispensable component in other forms of Genetic Programming, such as Linear- and Tree-based, it has consistently been claimed that it deteriorates search performance in CGP. As a result, a mutation-alone $(1+\lambda)$ evolutionary strategy has become the canonical approach for CGP. Although several operators have been developed that demonstrate an increased performance over the canonical method, a general solution to the problem is still lacking. In this paper, we compare basic crossover methods, namely one-point and uniform, to variants in which nodes are ``preserved,'' including the subgraph crossover developed by Roman Kalkreuth, the difference being that when ``node preservation'' is active, crossover is not allowed to break apart instructions. We also compare a node mutation operator to the traditional point mutation; the former simply replaces an entire node with a new one. We find that node preservation in both mutation and crossover improves search using symbolic regression benchmark problems, moving the field towards a general solution to CGP crossover.
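The distinction the abstract draws can be made concrete with a minimal sketch. The genotype encoding, function/parameter names, and the way nodes are sampled below are all illustrative assumptions, not the paper's implementation: a CGP genotype is modeled as a list of nodes, each a `(function_id, input_a, input_b)` triple. Gene-level uniform crossover inherits each gene independently (so a node's genes can come from different parents), while the node-preserving variant swaps whole nodes; node mutation likewise replaces an entire node rather than a single gene.

```python
import random

# Hypothetical CGP genotype: a list of nodes, each a
# (function_id, input_a, input_b) triple. Illustrative only.

def uniform_crossover(p1, p2, rng):
    """Gene-level uniform crossover: each gene is inherited
    independently, so a node's genes may come from different
    parents (nodes can be 'broken apart')."""
    child = []
    for n1, n2 in zip(p1, p2):
        child.append(tuple(g1 if rng.random() < 0.5 else g2
                           for g1, g2 in zip(n1, n2)))
    return child

def node_preserving_uniform_crossover(p1, p2, rng):
    """Node-preserving variant: each whole node comes from exactly
    one parent, so (function, inputs) triples stay intact."""
    return [n1 if rng.random() < 0.5 else n2
            for n1, n2 in zip(p1, p2)]

def node_mutation(genotype, n_functions, rng):
    """Node mutation: replace one entire node with a freshly
    generated one (vs. point mutation of a single gene).
    Connection indices here are simplified for illustration."""
    i = rng.randrange(len(genotype))
    child = list(genotype)
    child[i] = (rng.randrange(n_functions),
                rng.randrange(i + 1),
                rng.randrange(i + 1))
    return child

rng = random.Random(0)
p1 = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]
p2 = [(3, 1, 0), (4, 2, 2), (5, 3, 1)]
child = node_preserving_uniform_crossover(p1, p2, rng)
# the preservation property: every child node is one parent's node
assert all(c == a or c == b for c, a, b in zip(child, p1, p2))
```

The invariant checked by the final assertion is exactly what "node preservation" rules out in the gene-level operator: under `uniform_crossover`, a child node may mix one parent's function with the other parent's connections.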
Related papers
- Beyond Fixed Depth: Adaptive Graph Neural Networks for Node Classification Under Varying Homophily [10.0426843232642]
We develop a theoretical framework that links local structural and label characteristics to information propagation dynamics.
We propose a novel adaptive-depth GNN architecture that dynamically selects node-specific aggregation depths.
Our method seamlessly adapts to both homophilic and heterophilic patterns within a unified model.
arXiv Detail & Related papers (2025-11-10T01:37:51Z)
- Analysing the Influence of Reorder Strategies for Cartesian Genetic Programming [0.0]
We introduce three novel operators which reorder the genotype of a graph defined by CGP.
We show that using CGP with a reorder method reduces the number of iterations until a solution is found and/or improves the fitness value.
However, there is no consistently best performing reorder operator.
arXiv Detail & Related papers (2024-10-01T08:59:01Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- HHGT: Hierarchical Heterogeneous Graph Transformer for Heterogeneous Graph Representation Learning [19.667806439200792]
We develop an innovative structure named (k,t)-ring neighborhood, where nodes are initially organized by their distance, forming different non-overlapping k-ring neighborhoods for each distance.
Within each k-ring structure, nodes are further categorized into different groups according to their types, thus emphasizing the heterogeneity of both distances and types in HINs naturally.
Experiments on downstream tasks verify HHGT's superiority over 14 baselines, with notable improvements of up to 24.75% in NMI and 29.25% in ARI for the node clustering task.
arXiv Detail & Related papers (2024-07-18T04:58:27Z)
- Alleviating Structural Distribution Shift in Graph Anomaly Detection [70.1022676681496]
Graph anomaly detection (GAD) is a challenging binary classification problem.
Graph neural networks (GNNs) benefit the classification of normal nodes by aggregating homophilous neighbors.
We propose a framework to mitigate the effect of heterophilous neighbors and make them invariant.
arXiv Detail & Related papers (2024-01-25T13:07:34Z)
- Provable Training for Graph Contrastive Learning [58.8128675529977]
Graph Contrastive Learning (GCL) has emerged as a popular training approach for learning node embeddings from augmented graphs without labels.
We show that the training of GCL is indeed imbalanced across all nodes.
We propose the metric "node compactness", which is the lower bound of how a node follows the GCL principle.
arXiv Detail & Related papers (2023-09-25T08:23:53Z)
- Shortest Edit Path Crossover: A Theory-driven Solution to the Permutation Problem in Evolutionary Neural Architecture Search [20.948038514886377]
Population-based search has emerged as a possible alternative to Reinforcement Learning (RL) for black-box neural architecture search (NAS).
This paper presents the first theoretical analysis of the behaviors of mutation, crossover and RL in black-box NAS.
It proposes a new crossover operator based on the shortest edit path (SEP) in graph space.
arXiv Detail & Related papers (2022-10-25T13:39:55Z)
- RAW-GNN: RAndom Walk Aggregation based Graph Neural Network [48.139599737263445]
We introduce a novel aggregation mechanism and develop a RAndom Walk Aggregation-based Graph Neural Network (called RAW-GNN) method.
The new method utilizes breadth-first random walk search to capture homophily information and depth-first search to collect heterophily information.
It replaces the conventional neighborhoods with path-based neighborhoods and introduces a new path-based aggregator based on Recurrent Neural Networks.
arXiv Detail & Related papers (2022-06-28T12:19:01Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The scale operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
- Asymptotics of $\ell_2$ Regularized Network Embeddings [0.0]
A common approach to solving tasks, such as node classification or link prediction, begins by learning a Euclidean embedding of the nodes of the network.
For unsupervised random walk methods such as DeepWalk and node2vec, adding an $\ell_2$ penalty on the embedding vectors to the loss leads to improved downstream task performance.
arXiv Detail & Related papers (2022-01-05T16:33:14Z)
- Orthogonal Graph Neural Networks [53.466187667936026]
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
However, stacking more convolutional layers significantly decreases the performance of GNNs.
We propose a novel Ortho-GConv, which could generally augment the existing GNN backbones to stabilize the model training and improve the model's generalization performance.
arXiv Detail & Related papers (2021-09-23T12:39:01Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.