Fast Immune System Inspired Hypermutation Operators for Combinatorial
Optimisation
- URL: http://arxiv.org/abs/2009.00990v1
- Date: Tue, 1 Sep 2020 16:22:57 GMT
- Title: Fast Immune System Inspired Hypermutation Operators for Combinatorial
Optimisation
- Authors: D. Corus, P. S. Oliveto, D. Yazdani
- Abstract summary: We propose modifications to the traditional hypermutations with mutation potential.
We show the superiority of the HMP operators to the traditional ones in an analysis of the complete standard Opt-IA AIS.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various studies have shown that immune system inspired hypermutation
operators can allow artificial immune systems (AIS) to be very efficient at
escaping local optima of multimodal optimisation problems. However, this
efficiency comes at the expense of considerably slower runtimes during the
exploitation phase compared to standard evolutionary algorithms. We propose
modifications to the traditional `hypermutations with mutation potential' (HMP)
that allow them to be efficient at exploitation while maintaining their
effective explorative characteristics. Rather than deterministically evaluating
fitness after each bit-flip of a hypermutation, we sample the fitness function
stochastically with a `parabolic' distribution which allows the `stop at first
constructive mutation' (FCM) variant of HMP to reduce the number of function
evaluations wasted when no improvement is found from linear to constant. The
stochastic distribution also allows the removal of the FCM mechanism altogether
as originally desired in the design of the HMP operators. We rigorously prove
the effectiveness of the proposed operators for all the benchmark functions
where the performance of HMP is rigorously understood in the literature, and
validate the gained insights by showing linear speed-ups for the identification
of high-quality approximate solutions to classical NP-hard problems from
combinatorial optimisation. We then show the superiority of the HMP operators
to the traditional ones in an analysis of the complete standard Opt-IA AIS,
where the stochastic evaluation scheme allows HMP and ageing operators to work
in harmony. Through a comparative performance study of other `fast mutation'
operators from the literature, we conclude that a power-law distribution for
the parabolic evaluation scheme is the best compromise in black box scenarios
where little problem knowledge is available.
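The stochastic evaluation scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact operator: the `eval_prob` schedule (here a hypothetical 1/i power-law decay), the OneMax fitness, and the return convention when no improvement is found are all assumptions made for the example; the paper's 'parabolic' distribution and parameterisation differ in detail.

```python
import random

def onemax(x):
    """Example fitness to maximise: number of one-bits."""
    return sum(x)

def hmp_stochastic_fcm(x, fitness, eval_prob):
    """One application of an HMP-style hypermutation with stochastic
    evaluation and 'stop at first constructive mutation' (FCM).

    x         : parent bit string (list of 0/1)
    fitness   : fitness function to maximise
    eval_prob : eval_prob(i) -> probability of evaluating the offspring
                after the i-th bit flip (the stochastic scheme that
                replaces deterministic evaluation after every flip)

    Returns the offspring and the number of fitness evaluations used.
    """
    n = len(x)
    parent_f = fitness(x)
    y = x[:]
    positions = random.sample(range(n), n)  # flip each bit at most once
    evals = 0
    for i, pos in enumerate(positions, start=1):
        y[pos] ^= 1  # flip the i-th chosen bit
        # Sample the evaluation stochastically instead of always evaluating:
        if random.random() < eval_prob(i):
            evals += 1
            if fitness(y) > parent_f:  # first constructive mutation: stop
                return y, evals
    # No improvement detected: return the fully mutated search point.
    return y, evals

# Hypothetical power-law schedule: always evaluate after the first flip,
# later flips with probability decaying as 1/i.
random.seed(0)
child, used = hmp_stochastic_fcm([0] * 20, onemax, lambda i: 1.0 / i)
```

On an all-zeros parent the very first flip is constructive and is always evaluated under this schedule, so the operator stops after a single evaluation, illustrating how the scheme avoids the linear number of wasted evaluations that deterministic per-flip evaluation would incur.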
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Testing the Efficacy of Hyperparameter Optimization Algorithms in Short-Term Load Forecasting [0.0]
We use the Panama Electricity dataset to evaluate HPO algorithms' performance on a surrogate forecasting algorithm, XGBoost, in terms of accuracy (i.e., MAPE, $R^2$) and runtime.
Results reveal significant runtime advantages for HPO algorithms over Random Search.
arXiv Detail & Related papers (2024-10-19T09:08:52Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Continuous optimization by quantum adaptive distribution search [0.7332146059733189]
We introduce the quantum adaptive distribution search (QuADS).
QuADS integrates Grover adaptive search (GAS) with the covariance matrix adaptation evolution strategy (CMA-ES).
Numerical experiments show that QuADS outperforms both GAS and CMA-ES.
arXiv Detail & Related papers (2023-11-29T04:48:09Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Capacity Optimality of OAMP in Coded Large Unitarily Invariant Systems [9.101719525164803]
We investigate a large unitarily invariant system (LUIS) involving a unitarily invariant sensing matrix, an arbitrary fixed signal distribution, and forward error control (FEC) coding.
We show that OAMP with the optimized codes has significant performance improvement over the un-optimized ones and the well-known Turbo linear MMSE algorithm.
arXiv Detail & Related papers (2022-06-23T13:11:20Z)
- Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution [13.037647287689438]
We study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots.
We propose a modified HPO method which efficiently searches for optimum global predictive performance.
Our method returns more reliable explanations of the underlying black-box without a loss of optimization performance.
arXiv Detail & Related papers (2022-06-11T07:12:04Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization [39.824086260578646]
This paper presents a novel trusted-maximizers entropy search (TES) acquisition function.
It measures how much an input contributes to the information gain on a query over a finite set of trusted maximizers.
arXiv Detail & Related papers (2021-07-30T07:25:07Z)
- Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
arXiv Detail & Related papers (2021-05-18T04:35:41Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.