Initialisation and Grammar Design in Grammar-Guided Evolutionary
Computation
- URL: http://arxiv.org/abs/2204.07410v1
- Date: Fri, 15 Apr 2022 10:15:40 GMT
- Title: Initialisation and Grammar Design in Grammar-Guided Evolutionary
Computation
- Authors: Grant Dick and Peter A. Whigham
- Abstract summary: We show that context-free grammar genetic programming (CFG-GP) is less sensitive to initialisation and grammar design than random search and GE.
We also demonstrate that observed cases of poor performance by CFG-GP can be managed through simple adjustment of tuning parameters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grammars provide a convenient and powerful mechanism to define the space of
possible solutions for a range of problems. However, when used in grammatical
evolution (GE), great care must be taken in the design of a grammar to ensure
that the polymorphic nature of the genotype-to-phenotype mapping does not
impede search. Additionally, recent work has highlighted the importance of the
initialisation method on GE's performance. While recent work has shed light on
the matters of initialisation and grammar design with respect to GE, their
impact on other methods, such as random search and context-free grammar genetic
programming (CFG-GP), is largely unknown. This paper examines GE, random search
and CFG-GP under a range of benchmark problems using several different
initialisation routines and grammar designs. The results suggest that CFG-GP is
less sensitive to initialisation and grammar design than both GE and random
search; we also demonstrate that observed cases of poor performance by CFG-GP
can be managed through simple adjustment of tuning parameters. We conclude that
CFG-GP is a strong base from which to conduct grammar-guided evolutionary
search, and that future work should focus on understanding the parameter space
of CFG-GP for better application.
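
To make the genotype-to-phenotype mapping discussed in the abstract concrete, here is a minimal sketch in Python. The toy grammar, codon values, and function names are illustrative and are not taken from the paper. The modulo step is what makes GE's mapping polymorphic: which production a codon selects depends on how many alternatives the grammar lists for that non-terminal, so reordering or duplicating productions changes the phenotype that the same genome produces.

```python
# Minimal sketch of grammatical evolution's genotype-to-phenotype mapping.
# The grammar and genome below are illustrative, not from the paper.

TOY_GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<var>":  [["x"], ["1.0"]],
}

def ge_map(genome, grammar, start="<expr>", max_wraps=2):
    """Map a list of integer codons to a phenotype string.

    Each codon picks a production via `codon % len(rules)`; when the
    genome is exhausted, it is reused ("wrapping") up to max_wraps times.
    """
    derivation = [start]
    codon_idx, wraps = 0, 0
    while any(sym in grammar for sym in derivation):
        if codon_idx == len(genome):          # genome exhausted: wrap around
            wraps += 1
            if wraps > max_wraps:
                return None                   # mapping failed; invalid individual
            codon_idx = 0
        # expand the left-most non-terminal
        nt_pos = next(i for i, s in enumerate(derivation) if s in grammar)
        rules = grammar[derivation[nt_pos]]
        chosen = rules[genome[codon_idx] % len(rules)]
        derivation[nt_pos:nt_pos + 1] = chosen
        codon_idx += 1
    return " ".join(derivation)

print(ge_map([2, 5, 4, 9, 3, 7], TOY_GRAMMAR))  # -> x + 1.0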
Related papers
- Deep Graph Anomaly Detection: A Survey and New Perspectives [86.84201183954016]
Graph anomaly detection (GAD) aims to identify unusual graph instances (nodes, edges, subgraphs, or graphs).
Deep learning approaches, graph neural networks (GNNs) in particular, have emerged as a promising paradigm for GAD.
arXiv Detail & Related papers (2024-09-16T03:05:11Z)
- VQDNA: Unleashing the Power of Vector Quantization for Multi-Species Genomic Sequence Modeling [60.91599380893732]
VQDNA is a general-purpose framework that renovates genome tokenization from the perspective of genome vocabulary learning.
By leveraging vector-quantized codebooks as learnable vocabulary, VQDNA can adaptively tokenize genomes into pattern-aware embeddings.
arXiv Detail & Related papers (2024-05-13T20:15:03Z)
- Efficient and Scalable Fine-Tune of Language Models for Genome Understanding [49.606093223945734]
We present Lingo: Language prefix fIne-tuning for GenOmes.
Unlike DNA foundation models, Lingo strategically leverages natural language foundation models' contextual cues.
Lingo further accommodates numerous downstream fine-tuning tasks via an adaptive rank sampling method.
arXiv Detail & Related papers (2024-02-12T21:40:45Z)
- Local Search, Semantics, and Genetic Programming: a Global Analysis [7.486818142115522]
Geometric Semantic Genetic Programming (GSGP) is one of the most prominent Genetic Programming (GP) variants.
Here we explore multiple possibilities to limit the overfitting of GSM-LS (geometric semantic mutation with local search) and GSGP-reg.
Results show that it is possible to consistently outperform standard GSGP on both training and unseen data.
arXiv Detail & Related papers (2023-05-26T14:13:03Z)
- Physics of Language Models: Part 1, Learning Hierarchical Language Structures [51.68385617116854]
Transformer-based language models are effective but complex, and understanding their inner workings is a significant challenge.
We introduce a family of synthetic CFGs that produce hierarchical rules, capable of generating lengthy sentences.
We demonstrate that generative models like GPT can accurately learn this CFG language and generate sentences based on it.
arXiv Detail & Related papers (2023-05-23T04:28:16Z)
- Adaptive Fine-Grained Predicates Learning for Scene Graph Generation [122.4588401267544]
General Scene Graph Generation (SGG) models tend to predict head predicates, while re-balancing strategies prefer tail categories.
We propose Adaptive Fine-Grained Predicates Learning (FGPL-A), which aims to differentiate hard-to-distinguish predicates for SGG.
Our proposed model-agnostic strategy significantly boosts performance of benchmark models on VG-SGG and GQA-SGG datasets by up to 175% and 76% on Mean Recall@100, achieving new state-of-the-art performance.
arXiv Detail & Related papers (2022-07-11T03:37:57Z)
- Probabilistic Structured Grammatical Evolution [0.5156484100374059]
We propose Probabilistic Structured Grammatical Evolution (PSGE), which combines the representation variants and mapping mechanisms of Structured Grammatical Evolution (SGE) and Probabilistic Grammatical Evolution (PGE).
PSGE statistically outperformed Grammatical Evolution (GE) on all six benchmark problems studied.
arXiv Detail & Related papers (2022-05-21T22:29:10Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform online prediction and model updates with scalability, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Probabilistic Grammatical Evolution [0.6445605125467573]
We propose Probabilistic Grammatical Evolution (PGE) to address some of GE's main issues and improve its performance.
PGE uses a Probabilistic Context-Free Grammar (PCFG) whose production probabilities are adapted during the evolutionary process (a toy sketch of this probability-adaptation idea appears after this list).
We evaluate the performance of PGE on two regression problems and compare it with GE and Structured Grammatical Evolution (SGE).
arXiv Detail & Related papers (2021-03-15T13:54:26Z)
- Learning Parametrised Graph Shift Operators [16.89638650246974]
Network data is, implicitly or explicitly, always represented using a graph shift operator (GSO).
The parametrised graph shift operator (PGSO) is proposed as a replacement for the standard GSOs used in state-of-the-art GNN architectures.
The accuracy of state-of-the-art GNN architectures is improved by the inclusion of the PGSO in both node- and graph-classification tasks.
arXiv Detail & Related papers (2021-01-25T13:01:26Z)
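
The Probabilistic Grammatical Evolution entry above describes adapting PCFG production probabilities during evolution. Below is a minimal sketch of that idea, in which probabilities are nudged toward the productions used by the best individual; the grammar, learning rate, and update rule are illustrative assumptions, not PGE's exact algorithm.

```python
import random
from collections import Counter

# Illustrative PCFG: each non-terminal maps productions to probabilities.
PCFG = {
    "<expr>": {"<expr> <op> <expr>": 0.3, "<var>": 0.7},
    "<op>":   {"+": 0.34, "-": 0.33, "*": 0.33},
    "<var>":  {"x": 0.5, "1.0": 0.5},
}

def sample(pcfg, symbol="<expr>", depth=0, max_depth=6, used=None):
    """Expand `symbol` by sampling productions; record which rules were used."""
    if used is None:
        used = Counter()
    if symbol not in pcfg:                     # terminal symbol
        return symbol, used
    rules, probs = zip(*pcfg[symbol].items())
    if depth >= max_depth:                     # crude depth cap: shortest production
        rule = min(rules, key=len)
    else:
        rule = random.choices(rules, weights=probs)[0]
    used[(symbol, rule)] += 1
    expansion = [sample(pcfg, tok, depth + 1, max_depth, used)[0]
                 for tok in rule.split()]
    return " ".join(expansion), used

def adapt(pcfg, used, lr=0.05):
    """Shift probability mass toward the rules the chosen individual used
    (an assumed fitness-driven update, then renormalised per non-terminal)."""
    for symbol, rules in pcfg.items():
        for rule in rules:
            target = 1.0 if (symbol, rule) in used else 0.0
            rules[rule] += lr * (target - rules[rule])
        total = sum(rules.values())
        for rule in rules:
            rules[rule] /= total

phenotype, used = sample(PCFG)
adapt(PCFG, used)   # in PGE this update would be driven by the best individual
print(phenotype)
```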