Towards Explainable Metaheuristic: Mining Surrogate Fitness Models for
Importance of Variables
- URL: http://arxiv.org/abs/2206.14135v1
- Date: Tue, 31 May 2022 09:16:18 GMT
- Title: Towards Explainable Metaheuristic: Mining Surrogate Fitness Models for
Importance of Variables
- Authors: Manjinder Singh, Alexander E.I. Brownlee, David Cairns
- Abstract summary: We use four benchmark problems to train a surrogate model and investigate what the surrogate model learns about the search space.
We show that the surrogate model picks out key characteristics of the problem as it is trained on population data from each generation.
- Score: 69.02115180674885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Metaheuristic search algorithms look for solutions that either maximise or
minimise a set of objectives, such as cost or performance. However, most
real-world optimisation problems are nonlinear, with complex constraints and
conflicting objectives. The process by which a Genetic Algorithm (GA) arrives
at a solution remains largely unexplained to the end-user, and a poorly
understood solution will dent the user's confidence in it. We propose that
investigating the variables that strongly influence solution quality, and their
relationships, would be a step toward explaining the near-optimal solution
presented by a metaheuristic. Using four benchmark problems, we train a
surrogate model on the population data generated by a GA and investigate what
the surrogate learns about the search space. We contrast what the surrogate has
learned after being trained on population data from only the first generation
with a surrogate trained on the population data from all generations. We show
that the surrogate model picks out key characteristics of the problem as it is
trained on population data from successive generations. By mining the surrogate
model we can build a picture of the GA's learning process, and thus an
explanation of the solution it presents. The aim is to build the end-user's
trust and confidence in the solution presented by the GA, and to encourage
adoption of the model.
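The abstract's pipeline (train a surrogate fitness model on GA population data, then mine it for variable importance) can be sketched briefly. This summary does not state which surrogate type the paper uses, so the sketch below substitutes a scikit-learn random forest and its impurity-based feature importances; the toy fitness function and all variable names are illustrative assumptions, not from the paper.

```python
# Illustrative sketch only: a random forest stands in for the paper's
# unspecified surrogate; the fitness function is a made-up toy benchmark.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def toy_fitness(pop):
    # Hypothetical benchmark: variable 0 matters most, variable 1 less so,
    # the remaining variables contribute only noise.
    return 3.0 * pop[:, 0] + 1.0 * pop[:, 1] + 0.05 * rng.normal(size=len(pop))

# Stand-in "population data": binary genomes as a GA might have evaluated them.
population = rng.integers(0, 2, size=(500, 8)).astype(float)
scores = toy_fitness(population)

# Train the surrogate fitness model on (genome, fitness) pairs.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(population, scores)

# "Mining" step: read variable importances out of the trained surrogate.
importances = surrogate.feature_importances_
ranking = np.argsort(importances)[::-1]
print(ranking[:3])  # with this toy signal, variables 0 and 1 should lead
```

In the paper's setting the training rows would come from each GA generation rather than a single random sample, allowing the importances to be compared across generations as the search converges.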
Related papers
- Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
We propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks.
arXiv Detail & Related papers (2024-05-26T13:11:55Z)
- Large Language Model-Aided Evolutionary Search for Constrained Multiobjective Optimization [15.476478159958416]
We employ a large language model (LLM) to enhance evolutionary search for solving constrained multi-objective optimization problems.
Our aim is to speed up the convergence of the evolutionary population.
arXiv Detail & Related papers (2024-05-09T13:44:04Z)
- V-STaR: Training Verifiers for Self-Taught Reasoners [75.11811592995176]
We propose V-STaR that utilizes both the correct and incorrect solutions generated during the self-improvement process to train a verifier.
V-STaR delivers a 4% to 17% test accuracy improvement over existing self-improvement and verification approaches.
arXiv Detail & Related papers (2024-02-09T15:02:56Z)
- Autoinverse: Uncertainty Aware Inversion of Neural Networks [22.759930986110625]
We propose Autoinverse, a highly automated approach for inverting neural network surrogates.
Our main insight is to seek inverse solutions in the vicinity of reliable data which have been sampled from the forward process.
We verify our proposed method through addressing a set of real-world problems in control, fabrication, and design.
arXiv Detail & Related papers (2022-08-29T12:09:32Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering [60.768146126094955]
Weakly supervised question answering usually has only the final answers as supervision signals.
There may exist many spurious solutions that coincidentally derive the correct answer, but training on such solutions can hurt model performance.
We propose to explicitly exploit such semantic correlations by maximizing the mutual information between question-answer pairs and predicted solutions.
arXiv Detail & Related papers (2021-06-14T05:47:41Z)
- Sequential Transfer in Reinforcement Learning with a Generative Model [48.40219742217783]
We show how to reduce the sample complexity for learning new tasks by transferring knowledge from previously-solved ones.
We derive PAC bounds on its sample complexity which clearly demonstrate the benefits of using this kind of prior knowledge.
We empirically verify our theoretical findings in simple simulated domains.
arXiv Detail & Related papers (2020-07-01T19:53:35Z)
- Surrogate Assisted Evolutionary Algorithm for Medium Scale Expensive Multi-Objective Optimisation Problems [4.338938227238059]
Building a surrogate model of an objective function has shown to be effective to assist evolutionary algorithms (EAs) to solve real-world complex optimisation problems.
We propose a Gaussian process surrogate model assisted EA for medium-scale expensive multi-objective optimisation problems with up to 50 decision variables.
The effectiveness of our proposed algorithm is validated on benchmark problems with 10, 20, and 50 variables, in comparison with three state-of-the-art SAEAs.
arXiv Detail & Related papers (2020-02-08T12:06:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.