Exploring the Feature Space of TSP Instances Using Quality Diversity
- URL: http://arxiv.org/abs/2202.02077v2
- Date: Tue, 12 Apr 2022 08:06:49 GMT
- Title: Exploring the Feature Space of TSP Instances Using Quality Diversity
- Authors: Jakob Bossek, Frank Neumann
- Abstract summary: We present a new approach based on quality diversity (QD) that is able to explore the whole feature space.
QD algorithms make it possible to create high-quality solutions within a given feature space by splitting it into boxes and improving solution quality within each box.
- Score: 13.264683014487376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating instances with different properties is key to algorithm selection
methods that differentiate between the performance of different solvers for a
given combinatorial optimization problem. A wide range of methods using
evolutionary computation techniques has been introduced in recent years. With
this paper, we contribute to this area of research by providing a new approach
based on quality diversity (QD) that is able to explore the whole feature
space. QD algorithms make it possible to create high-quality solutions within a
given feature space by splitting it into boxes and improving solution quality
within each box. We use our QD approach to generate TSP instances, to
visualize and analyze the variety of instances that differentiate various TSP
solvers, and to compare them to instances generated by a $(\mu+1)$-EA for TSP
instance generation.
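
As a concrete illustration of the QD idea described in the abstract, below is a minimal MAP-Elites-style sketch of grid-based TSP instance generation in Python. This is not the authors' implementation: the mutation operator, the feature descriptor, and the fitness function (which in the paper measures how strongly an instance differentiates TSP solvers) are simplified placeholders.

```python
import itertools
import math
import random

GRID = 10        # boxes per feature dimension
N_CITIES = 50    # cities per generated instance
ITERATIONS = 2_000

def random_instance():
    """A TSP instance is a list of city coordinates in the unit square."""
    return [(random.random(), random.random()) for _ in range(N_CITIES)]

def mutate(instance):
    """Re-place a single randomly chosen city (placeholder mutation)."""
    child = list(instance)
    child[random.randrange(len(child))] = (random.random(), random.random())
    return child

def features(instance):
    """Placeholder 2-D descriptor: normalised mean and std of pairwise distances.
    The paper uses established TSP feature sets instead."""
    d = [math.dist(a, b) for a, b in itertools.combinations(instance, 2)]
    mean = sum(d) / len(d)
    std = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
    return (mean / math.sqrt(2), std / math.sqrt(2))

def fitness(instance):
    """Placeholder score; in the paper this would quantify how strongly the
    instance separates the performance of two TSP solvers."""
    return random.random()

def box(desc):
    """Map a feature descriptor to its grid cell (the QD 'box')."""
    return tuple(min(int(v * GRID), GRID - 1) for v in desc)

archive = {}  # box -> (fitness, instance): best instance found per feature box
for _ in range(ITERATIONS):
    parent = random.choice(list(archive.values()))[1] if archive else random_instance()
    child = mutate(parent)
    b, f = box(features(child)), fitness(child)
    if b not in archive or f > archive[b][0]:
        archive[b] = (f, child)

print(f"{len(archive)} of {GRID * GRID} feature boxes filled")
```

The archive keeps at most one instance per feature box, so coverage of the feature space and per-box quality improve together as the loop runs.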
Related papers
- Large Language Models as In-context AI Generators for Quality-Diversity [8.585387103144825]
In-context QD aims to generate interesting solutions using few-shot and many-shot prompting with quality-diverse examples from the QD archive as context.
In-context QD displays promising results compared to both QD baselines and similar strategies developed for single-objective optimization.
arXiv Detail & Related papers (2024-04-24T10:35:36Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z)
- Analysis of Quality Diversity Algorithms for the Knapsack Problem [14.12876643502492]
We apply the QD paradigm to simulate dynamic programming behaviours on the knapsack problem.
We show that they are able to compute an optimal solution within expected pseudo-polynomial time.
arXiv Detail & Related papers (2022-07-28T12:15:33Z)
- Evolutionary Diversity Optimisation for The Traveling Thief Problem [11.590506672325668]
We introduce a bi-level evolutionary algorithm to maximise the structural diversity of the set of solutions.
We empirically determine the best method to obtain diversity.
Our experimental results show a significant improvement of the QD approach in terms of structural diversity for most TTP benchmark instances.
arXiv Detail & Related papers (2022-04-06T10:13:55Z)
- Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods [73.35353358543507]
Stochastic Gradient Descent-Ascent (SGDA) is one of the most prominent algorithms for solving min-max optimization and variational inequality problems (VIP).
In this paper, we propose a unified convergence analysis that covers a large variety of descent-ascent methods.
We develop several new variants of SGDA, such as a new variance-reduced method (L-SVRGDA), new distributed methods with compression (QSGDA, DIANA-SGDA, VR-DIANA-SGDA), and a new method with coordinate randomization (SEGA-SGDA).
arXiv Detail & Related papers (2022-02-15T09:17:39Z)
- Expressivity of Parameterized and Data-driven Representations in Quality Diversity Search [111.06379262544911]
We compare the output diversity of a quality diversity evolutionary search performed in two different search spaces.
A learned model is better at interpolating between known data points than at extrapolating or expanding towards unseen examples.
arXiv Detail & Related papers (2021-05-10T10:27:43Z)
- Ensemble Feature Extraction for Multi-Container Quality-Diversity Algorithms [0.2741266294612775]
Quality-Diversity algorithms search for large collections of diverse and high-performing solutions.
We describe MC-AURORA, a Quality-Diversity approach that simultaneously optimises several collections of solutions.
We show that this approach produces solutions that are more diverse than those produced by single-representation approaches.
arXiv Detail & Related papers (2021-05-03T08:35:00Z)
- Isometric Multi-Shape Matching [50.86135294068138]
Finding correspondences between shapes is a fundamental problem in computer vision and graphics.
While isometries are often studied in shape correspondence problems, they have not been considered explicitly in the multi-matching setting.
We present a suitable optimisation algorithm for solving our formulation and provide a convergence and complexity analysis.
arXiv Detail & Related papers (2020-12-04T15:58:34Z)
- BOP-Elites, a Bayesian Optimisation algorithm for Quality-Diversity search [0.0]
We propose the Bayesian optimisation of Elites (BOP-Elites) algorithm.
By considering user-defined regions of the feature space as 'niches', our task is to find the optimal solution in each niche.
The resulting algorithm is very effective in identifying the parts of the search space that belong to a niche in feature space, and finding the optimal solution in each niche.
arXiv Detail & Related papers (2020-05-08T23:49:13Z)
- GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
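
For context on the last entry, here is a minimal sketch of the standard cross-entropy method that GACEM builds on, using a plain diagonal Gaussian sampler and an illustrative penalised objective. The masked auto-regressive model and multi-modal tracking proposed in that paper are not reproduced here, and the names (`cem_minimize`, `objective`) are hypothetical.

```python
import numpy as np

def cem_minimize(objective, dim, iterations=50, pop=200, elite_frac=0.2, seed=0):
    """Standard cross-entropy method with a diagonal Gaussian sampler.
    (GACEM replaces this Gaussian with a masked auto-regressive network
    so that several disjoint solution regions can be tracked at once.)"""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = int(pop * elite_frac)
    for _ in range(iterations):
        samples = rng.normal(mean, std, size=(pop, dim))      # sample candidates
        scores = np.apply_along_axis(objective, 1, samples)   # evaluate black box
        elites = samples[np.argsort(scores)[:n_elite]]        # keep best fraction
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6  # refit sampler
    return mean

# Toy black-box objective with a soft constraint penalty (illustrative only).
def objective(x):
    penalty = max(0.0, float(np.sum(np.abs(x))) - 3.0) * 10.0   # ||x||_1 <= 3
    return float(np.sum((x - 1.0) ** 2)) + penalty

print(cem_minimize(objective, dim=4))
```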