Towards Explainable Metaheuristic: Mining Surrogate Fitness Models for
Importance of Variables
- URL: http://arxiv.org/abs/2206.14135v1
- Date: Tue, 31 May 2022 09:16:18 GMT
- Title: Towards Explainable Metaheuristic: Mining Surrogate Fitness Models for
Importance of Variables
- Authors: Manjinder Singh, Alexander E.I. Brownlee, David Cairns
- Abstract summary: Using four benchmark problems, we train a surrogate model on GA population data and investigate what the surrogate learns about the search space.
We show that the surrogate model picks out key characteristics of the problem as it is trained on population data from each generation.
- Score: 69.02115180674885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Metaheuristic search algorithms look for solutions that either maximise or
minimise a set of objectives, such as cost or performance. However, most
real-world optimisation problems are nonlinear, with complex constraints and
conflicting objectives. The process by which a Genetic Algorithm (GA) arrives at a
solution remains largely unexplained to the end-user, and a poorly understood
solution will dent the user's confidence in it. We propose that investigating
the variables that strongly influence solution quality, and their
relationships, would be a step toward explaining the near-optimal solution
presented by a metaheuristic. Using four benchmark problems, we train a
surrogate model on the population data generated by a GA and investigate
what the surrogate learns about the search space. We compare what the
surrogate has learned after being trained only on the population data from the
first generation with a surrogate model trained on the population data from all
generations. We show that the surrogate model picks out key characteristics of
the problem as it is trained on population data from each generation. By
mining the surrogate model we can build a picture of the learning process of a
GA, and thus an explanation of the solution it presents. The aim is to
build the end-user's trust and confidence in the solution presented by
the GA, and to encourage adoption of the model.
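To make the proposed mining step concrete, here is a minimal sketch of the idea: train a surrogate fitness model on GA population data, then rank variables by the importance the surrogate has learned. The weighted-OneMax benchmark, the random-forest surrogate, and all names are illustrative assumptions; the abstract does not specify the paper's surrogate class or benchmark set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical benchmark: a weighted OneMax, where some bits matter
# far more than others (the weights stand in for "important variables").
N_VARS = 20
WEIGHTS = rng.permutation(np.arange(1, N_VARS + 1))

def fitness(x):
    return float(np.dot(WEIGHTS, x))

# Stand-in for GA population data: in the paper this would be the
# evaluated individuals accumulated over one or many generations.
population = rng.integers(0, 2, size=(500, N_VARS))
fitnesses = np.array([fitness(ind) for ind in population])

# Train a surrogate fitness model on the population data. The abstract
# does not name the model class; a random forest is used here purely
# for illustration because it exposes per-variable importances.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(population, fitnesses)

# "Mine" the surrogate: rank variables by learned importance and
# compare against the ground-truth weight ranking.
ranking = np.argsort(surrogate.feature_importances_)[::-1]
print("surrogate ranking :", ranking[:5])
print("true ranking      :", np.argsort(WEIGHTS)[::-1][:5])
```

Retraining the surrogate on the population of each successive generation and watching this ranking stabilise is, in spirit, the kind of picture of the GA's learning process the abstract describes.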
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- Comparative study of regression vs pairwise models for surrogate-based heuristic optimisation [1.2535250082638645]
This paper addresses the formulation of surrogate problems both as regression models that approximate fitness (surface surrogate models) and, in a novel way, as classification models that compare pairs of solutions (pairwise surrogate models).
The performance of the overall search, when using online machine learning-based surrogate models, depends not only on the accuracy of the predictive model but also on the kind of bias towards positive or negative cases.
arXiv Detail & Related papers (2024-10-04T13:19:06Z)
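A minimal sketch of the two surrogate formulations named in the entry above, on synthetic data: a regression surrogate predicts fitness values, while a pairwise surrogate classifies which of two solutions is better. Encoding a pair as a feature difference is an assumed construction for illustration, not necessarily the paper's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=300)

# Surface (regression) surrogate: approximate the fitness value itself.
reg = LinearRegression().fit(X, y)

# Pairwise (classification) surrogate: predict which of two solutions
# is better, trained on feature differences of random pairs.
i, j = rng.integers(0, len(X), size=(2, 1000))
pair_X = X[i] - X[j]
pair_y = (y[i] > y[j]).astype(int)
clf = LogisticRegression(max_iter=1000).fit(pair_X, pair_y)

# Both can drive the same selection decision inside a search loop:
a, b = X[0], X[1]
print("regression says a better:", reg.predict([a])[0] > reg.predict([b])[0])
print("pairwise   says a better:", clf.predict([a - b])[0] == 1)
```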
- DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Diffusion generative models can consider a broader range of solutions and exhibit stronger generalization by learning parameters.
We propose a new framework, which leverages intrinsic distribution learning of diffusion generative models to learn high-quality solutions.
arXiv Detail & Related papers (2024-08-13T07:56:21Z)
- V-STaR: Training Verifiers for Self-Taught Reasoners [71.53113558733227]
V-STaR uses DPO to train a verifier that judges the correctness of model-generated solutions.
Running V-STaR for multiple iterations results in progressively better reasoners and verifiers.
arXiv Detail & Related papers (2024-02-09T15:02:56Z)
- Self-Labeling the Job Shop Scheduling Problem [15.723699332053558]
We show that generative models can be trained by sampling multiple solutions and using the best one according to the problem objective as a pseudo-label.
We prove the robustness of SLIM to various parameters and its generality by applying it to the Traveling Salesman Problem.
arXiv Detail & Related papers (2024-01-22T11:08:36Z)
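A hedged sketch of the best-of-k pseudo-labelling idea summarised in the entry above: sample several candidate solutions and keep the best one under the problem objective as the training label. The `sample_solution` and `objective` interfaces and the toy tour objective are illustrative stand-ins, not the paper's SLIM implementation.

```python
import random

def self_label(sample_solution, objective, k=16):
    """Sample k candidate solutions from the current generative model
    and return the best one under the problem objective, to be used
    as a pseudo-label for training (assuming minimisation)."""
    candidates = [sample_solution() for _ in range(k)]
    return min(candidates, key=objective)

# Toy usage: "solutions" are random permutations of 5 cities and the
# objective is a stand-in tour length.
cities = list(range(5))
tour_len = lambda tour: sum(abs(tour[t] - tour[t - 1]) for t in range(len(tour)))
pseudo_label = self_label(lambda: random.sample(cities, len(cities)), tour_len)
print("pseudo-label tour:", pseudo_label)
```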
- Autoinverse: Uncertainty Aware Inversion of Neural Networks [22.759930986110625]
We propose Autoinverse, a highly automated approach for inverting neural network surrogates.
Our main insight is to seek inverse solutions in the vicinity of reliable data which have been sampled from the forward process.
We verify our proposed method through addressing a set of real-world problems in control, fabrication, and design.
arXiv Detail & Related papers (2022-08-29T12:09:32Z)
- A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering [60.768146126094955]
Weakly supervised question answering usually has only the final answers as supervision signals.
There may exist many spurious solutions that coincidentally derive the correct answer, but training on such solutions can hurt model performance.
We propose to explicitly exploit such semantic correlations by maximizing the mutual information between question-answer pairs and predicted solutions.
arXiv Detail & Related papers (2021-06-14T05:47:41Z)
- Sequential Transfer in Reinforcement Learning with a Generative Model [48.40219742217783]
We show how to reduce the sample complexity for learning new tasks by transferring knowledge from previously-solved ones.
We derive PAC bounds on its sample complexity which clearly demonstrate the benefits of using this kind of prior knowledge.
We empirically verify our theoretical findings in simple simulated domains.
arXiv Detail & Related papers (2020-07-01T19:53:35Z)
- Surrogate Assisted Evolutionary Algorithm for Medium Scale Expensive Multi-Objective Optimisation Problems [4.338938227238059]
Building a surrogate model of an objective function has been shown to be effective in assisting evolutionary algorithms (EAs) to solve real-world complex optimisation problems.
We propose a Gaussian process surrogate model assisted EA for medium-scale expensive multi-objective optimisation problems with up to 50 decision variables.
The effectiveness of our proposed algorithm is validated on benchmark problems with 10, 20, and 50 variables, in comparison with three state-of-the-art surrogate-assisted EAs (SAEAs).
arXiv Detail & Related papers (2020-02-08T12:06:08Z)
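To illustrate the surrogate-assisted pattern in this last entry, a minimal single-objective sketch: fit a Gaussian process on evaluated solutions and let it pre-screen offspring so that only the most promising one receives a true (expensive) evaluation. The mutation scheme and mean-based infill are simplifying assumptions; the paper itself targets multi-objective problems with a more sophisticated model-management strategy.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)
expensive_f = lambda x: float(np.sum(x**2))  # stand-in expensive objective

# Archive of truly evaluated solutions (minimisation).
X = rng.uniform(-5, 5, size=(20, 8))
y = np.array([expensive_f(x) for x in X])

for gen in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

    # Generate offspring by mutating the current best solution.
    parent = X[np.argmin(y)]
    offspring = parent + rng.normal(scale=0.5, size=(30, 8))

    # Pre-screen with the surrogate: spend the expensive evaluation only
    # on the offspring the GP predicts to be best. Real SAEAs typically
    # use an infill criterion such as expected improvement; the predicted
    # mean is used here for brevity.
    best_child = offspring[np.argmin(gp.predict(offspring))]
    X = np.vstack([X, best_child])
    y = np.append(y, expensive_f(best_child))

print("best found:", y.min())
```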