Superior Genetic Algorithms for the Target Set Selection Problem Based on Power-Law Parameter Choices and Simple Greedy Heuristics
- URL: http://arxiv.org/abs/2404.04018v1
- Date: Fri, 5 Apr 2024 11:02:02 GMT
- Title: Superior Genetic Algorithms for the Target Set Selection Problem Based on Power-Law Parameter Choices and Simple Greedy Heuristics
- Authors: Benjamin Doerr, Martin S. Krejca, Nguyen Vu
- Abstract summary: We show that a biased random-key genetic algorithm (BRKGA) with two simple modifications obtains significantly better results.
We then add a natural greedy heuristic, namely to repeatedly discard small-degree vertices that are not necessary for reaching the whole graph.
The resulting algorithm consistently outperforms all of the state-of-the-art algorithms.
- Score: 9.044970217182117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The target set selection problem (TSS) asks for a set of vertices such that an influence spreading process started in these vertices reaches the whole graph. The current state of the art for this NP-hard problem comprises three recently proposed randomized search heuristics, namely a biased random-key genetic algorithm (BRKGA) obtained from extensive parameter tuning, a max-min ant system (MMAS), and an MMAS using Q-learning with a graph convolutional network. We show that the BRKGA with two simple modifications and without the costly parameter tuning obtains significantly better results. Our first modification is to simply choose all parameters of the BRKGA in each iteration randomly from a power-law distribution. The resulting parameterless BRKGA is already competitive with the tuned BRKGA, as our experiments on the previously used benchmarks show. We then add a natural greedy heuristic, namely to repeatedly discard small-degree vertices that are not necessary for reaching the whole graph. The resulting algorithm consistently outperforms all of the state-of-the-art algorithms. Besides providing a superior algorithm for the TSS problem, this work shows that randomized parameter choices and elementary greedy heuristics can give better results than complex algorithms and costly parameter tuning.
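As a rough illustration of the two modifications described in the abstract, here is a minimal sketch (not the authors' code), assuming a majority-threshold spreading model; all function and parameter names are hypothetical. It samples an integer parameter from a power-law distribution and prunes a feasible target set by repeatedly discarding small-degree vertices whose removal still leaves the whole graph reachable.

```python
import random

import networkx as nx


def sample_power_law(upper, beta=1.5):
    """Draw an integer i in {1, ..., upper} with Pr[i] proportional to i^(-beta)."""
    values = list(range(1, upper + 1))
    weights = [i ** (-beta) for i in values]
    return random.choices(values, weights=weights, k=1)[0]


def spreads_to_all(graph, seeds, threshold):
    """Return True iff the spreading process started in `seeds` activates every
    vertex: an inactive vertex activates once at least threshold(v) of its
    neighbours are active."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in graph.nodes:
            if v in active:
                continue
            active_neighbours = sum(1 for u in graph.neighbors(v) if u in active)
            if active_neighbours >= threshold(v):
                active.add(v)
                changed = True
    return len(active) == graph.number_of_nodes()


def greedy_prune(graph, seeds, threshold):
    """Repeatedly discard small-degree seed vertices that are not necessary
    for reaching the whole graph."""
    seeds = set(seeds)
    for v in sorted(seeds, key=graph.degree):  # try small-degree vertices first
        if spreads_to_all(graph, seeds - {v}, threshold):
            seeds.discard(v)
    return seeds


if __name__ == "__main__":
    G = nx.erdos_renyi_graph(50, 0.15, seed=1)
    majority = lambda v: (G.degree(v) + 1) // 2  # majority-threshold model
    pruned = greedy_prune(G, set(G.nodes), majority)
    print(len(pruned), "seeds remain; sampled parameter:", sample_power_law(100))
```

In the paper's setting, the same power-law sampling would be applied afresh in each iteration to every BRKGA parameter (such as the population size and elite proportion), which is what makes the algorithm parameterless.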
Related papers
- Gaussian Boson Sampling to Accelerate NP-Complete Vertex-Minor Graph Classification [0.9935277311162707]
We propose a hybrid quantum-classical algorithm for the NP-complete problem of determining whether one graph is a vertex minor of another.
We find a graph embedding that allows trading between the one-shot classification accuracy and the amount of input squeezing.
We introduce a new classical algorithm based on graph spectra, which we show outperforms various well-known graph-similarity algorithms.
arXiv Detail & Related papers (2024-02-05T21:24:11Z)
- Combinatorial Stochastic-Greedy Bandit [79.1700188160944]
We propose a novel stochastic-greedy bandit (SGB) algorithm for combinatorial multi-armed bandit problems where no extra information other than the joint reward of the selected set of $n$ arms at each time $t \in [T]$ is observed.
SGB adopts an optimized-explore-then-commit approach and is specifically designed for scenarios with a large set of base arms.
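The entry above mentions an explore-then-commit design; as a point of reference only, the sketch below shows the plain explore-then-commit pattern for a standard multi-armed bandit, not the combinatorial SGB algorithm itself, with all names hypothetical.

```python
import random


def explore_then_commit(pull, n_arms, explore_rounds, total_rounds):
    """Plain explore-then-commit: pull arms round-robin during exploration,
    then commit to the empirically best arm. `pull(arm)` returns a reward."""
    sums = [0.0] * n_arms
    counts = [0] * n_arms
    for t in range(explore_rounds):
        arm = t % n_arms  # round-robin exploration
        sums[arm] += pull(arm)
        counts[arm] += 1
    best = max(range(n_arms), key=lambda a: sums[a] / max(counts[a], 1))
    total = sum(sums)
    for _ in range(total_rounds - explore_rounds):
        total += pull(best)  # commit phase
    return best, total


if __name__ == "__main__":
    means = [0.2, 0.5, 0.8]
    noisy_reward = lambda a: means[a] + random.gauss(0.0, 0.1)
    print(explore_then_commit(noisy_reward, 3, 300, 1000))
```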
arXiv Detail & Related papers (2023-12-13T11:08:25Z)
- The First Proven Performance Guarantees for the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) on a Combinatorial Optimization Problem [6.793248433673384]
The Non-dominated Sorting Genetic Algorithm-II (NSGA-II) is one of the most prominent algorithms to solve multi-objective optimization problems.
We give the first proven performance guarantees for a classic optimization problem, the NP-complete bi-objective minimum spanning tree problem.
arXiv Detail & Related papers (2023-05-22T19:59:19Z)
- Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations [50.591267188664666]
Propulate is an evolutionary optimization algorithm and software package for global optimization.
We provide an MPI-based implementation of our algorithm, which features variants of selection, mutation, crossover, and migration.
We find that Propulate is up to three orders of magnitude faster without sacrificing solution accuracy.
arXiv Detail & Related papers (2023-01-20T18:17:34Z)
- Iterative-Free Quantum Approximate Optimization Algorithm Using Neural Networks [20.051757447006043]
We propose a practical method that uses a simple, fully connected neural network to find better parameters tailored to a new given problem instance.
Our method is consistently the fastest to converge while also achieving the best final result.
arXiv Detail & Related papers (2022-08-21T14:05:11Z)
- LAWS: Look Around and Warm-Start Natural Gradient Descent for Quantum Neural Networks [11.844238544360149]
Variational quantum algorithms (VQAs) have recently received significant attention due to their promising performance on Noisy Intermediate-Scale Quantum (NISQ) computers.
VQAs run on parameterized quantum circuits (PQCs), and with randomly initialized parameters they are characterized by barren plateaus (BPs), where the gradient vanishes exponentially in the number of qubits.
In this paper, we first revisit the quantum natural gradient (QNG), one of the most popular algorithms used in VQAs, from the perspective of classical first-order optimization.
We then propose a Look Around and Warm-Start natural gradient descent algorithm (LAWS).
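For context, the quantum natural gradient update discussed here takes the standard textbook form below, where $F(\theta_t)$ is the quantum Fisher information (Fubini-Study) metric of the circuit state and $\eta$ the learning rate; this is background, not a result of the paper.

```latex
\theta_{t+1} \;=\; \theta_t \;-\; \eta\, F(\theta_t)^{-1}\, \nabla_\theta \mathcal{L}(\theta_t)
```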
arXiv Detail & Related papers (2022-05-05T14:16:40Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T\sqrt{T})$.
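For reference, the standard GP-based expected-improvement acquisition (for maximization) has the closed form below, where $\mu_t$ and $\sigma_t$ are the GP posterior mean and standard deviation, $f^+$ is the incumbent value, and $\Phi$, $\phi$ are the standard normal CDF and PDF; this is textbook background rather than the paper's specific variant.

```latex
\mathrm{EI}_t(x) = \bigl(\mu_t(x) - f^+\bigr)\,\Phi(z) + \sigma_t(x)\,\phi(z),
\qquad z = \frac{\mu_t(x) - f^+}{\sigma_t(x)}
```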
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
- Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
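The hybrid algorithms referred to here follow the QAOA template: a level-$p$ circuit alternates the problem Hamiltonian $C$ with a mixer $B$, as in the standard ansatz below (textbook background, not the paper's twisted variant).

```latex
|\psi_p(\gamma, \beta)\rangle
  = e^{-i\beta_p B} e^{-i\gamma_p C} \cdots e^{-i\beta_1 B} e^{-i\gamma_1 C}\, |{+}\rangle^{\otimes n}
```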
arXiv Detail & Related papers (2022-03-01T19:47:16Z)
- Lazy Parameter Tuning and Control: Choosing All Parameters Randomly From a Power-Law Distribution [8.34061303235504]
Most evolutionary algorithms have multiple parameters and their values drastically affect the performance.
We propose a lazy but effective solution, namely choosing all parameter values in each iteration randomly from a suitably scaled power-law distribution.
We prove a performance guarantee that is comparable to the best performance known for static parameters.
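Concretely, "choosing from a suitably scaled power-law distribution" means sampling each integer parameter value from a distribution of the following standard form, with exponent $\beta > 1$ and upper bound $u$ (notation assumed here, following common usage):

```latex
\Pr[X = i] \;=\; \frac{i^{-\beta}}{\sum_{j=1}^{u} j^{-\beta}},
\qquad i \in \{1, \dots, u\}
```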
arXiv Detail & Related papers (2021-04-14T09:17:18Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
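For reference, the Moreau envelope appearing in this rate is the standard construction below, whose gradient norm serves as a stationarity measure for weakly convex $f$; $\lambda > 0$ is a smoothing parameter (textbook definition, not specific to this paper).

```latex
f_\lambda(x) \;=\; \min_{y}\; \Bigl\{ f(y) + \tfrac{1}{2\lambda}\,\|x - y\|^2 \Bigr\}
```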
arXiv Detail & Related papers (2020-06-11T17:43:19Z)