A Review on Single-Problem Multi-Attempt Heuristic Optimization
- URL: http://arxiv.org/abs/2509.26321v1
- Date: Tue, 30 Sep 2025 14:28:28 GMT
- Title: A Review on Single-Problem Multi-Attempt Heuristic Optimization
- Authors: Judith Echevarrieta, Etor Arza, Aritz Pérez, Josu Ceberio
- Abstract summary: In certain real-world optimization scenarios, practitioners are not interested in solving multiple problems but rather in finding the best solution to a single, specific problem. When the computational budget is large relative to the cost of evaluating a candidate solution, multiple alternatives can be tried to solve the same given problem. The sequential selection of which alternative to try next is crucial for efficiently identifying the one that provides the best possible solution.
- Score: 6.778082328635129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In certain real-world optimization scenarios, practitioners are not interested in solving multiple problems but rather in finding the best solution to a single, specific problem. When the computational budget is large relative to the cost of evaluating a candidate solution, multiple heuristic alternatives can be tried to solve the same given problem, each possibly with a different algorithm, parameter configuration, initialization, or stopping criterion. The sequential selection of which alternative to try next is crucial for efficiently identifying the one that provides the best possible solution across multiple attempts. Despite the relevance of this problem in practice, it has not yet been the exclusive focus of any existing review. Several sequential alternative selection strategies have been proposed in different research topics, but they have not been comprehensively and systematically unified under a common perspective. This work presents a focused review of single-problem multi-attempt heuristic optimization. It brings together suitable strategies to this problem that have been studied separately through algorithm selection, parameter tuning, multi-start and resource allocation. These strategies are explained using a unified terminology within a common framework, which supports the development of a taxonomy for systematically organizing and classifying them.
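The sequential selection setting described in the abstract can be sketched as a bandit-style loop over heuristic alternatives. The following is an illustrative sketch of one such strategy (a UCB-style rule), not a method proposed in the paper; the `alternatives` interface and the toy heuristics are assumptions for illustration only.

```python
import math
import random

def select_and_run(alternatives, budget):
    """Sequentially allocate attempts among heuristic alternatives
    using a UCB-style rule, keeping the best solution value found.
    `alternatives` is a list of zero-argument callables, each call
    being one attempt of some heuristic (hypothetical interface)."""
    counts = [0] * len(alternatives)
    means = [0.0] * len(alternatives)
    best = float("-inf")
    for t in range(1, budget + 1):
        if t <= len(alternatives):
            # Try every alternative once before applying the UCB rule.
            i = t - 1
        else:
            # Pick the alternative with the highest upper confidence bound.
            i = max(range(len(alternatives)),
                    key=lambda j: means[j] + math.sqrt(2 * math.log(t) / counts[j]))
        value = alternatives[i]()
        counts[i] += 1
        means[i] += (value - means[i]) / counts[i]  # running mean of observed quality
        best = max(best, value)
    return best, counts

# Toy alternatives: noisy heuristics with different expected solution quality.
rng = random.Random(42)
alts = [lambda: rng.gauss(0.5, 0.1),
        lambda: rng.gauss(0.8, 0.1),
        lambda: rng.gauss(0.3, 0.1)]
best, counts = select_and_run(alts, budget=100)
```

Under these assumptions, attempts concentrate over time on the alternative whose observed solution quality is highest, which is the core trade-off (exploration vs. exploitation) that the reviewed selection strategies address.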
Related papers
- Evaluation of Multi- and Single-objective Learning Algorithms for Imbalanced Data [0.0]
Machine learning tasks aim to find models that work well not for a single criterion, but for a group of criteria, often opposing ones. One solution is to propose an aggregate learning criterion and reduce the multi-objective learning task to a single-criterion optimization problem. This article proposes a new, reliable way of evaluating algorithms, comparing multi-objective algorithms with methods that return single solutions.
arXiv Detail & Related papers (2025-11-15T12:54:17Z) - Greedy Restart Schedules: A Baseline for Dynamic Algorithm Selection on Numerical Black-box Optimization Problems [0.0]
We present a scheduling approach that iteratively selects the algorithm performing best on the distribution of unsolved training problems at the time of selection, resulting in a problem-independent solver schedule. We demonstrate our approach using well-known solvers from numerical black-box optimization on the BBOB testbed, bridging much of the gap between the single best and the virtual best solver from the original portfolio across various evaluation protocols.
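The greedy schedule construction described above can be sketched as follows. This is an illustrative reading of the approach, not the authors' implementation; the `solves` input format (algorithm name mapped to the set of training problems it solves within its budget) is an assumption.

```python
def greedy_schedule(solves):
    """Greedily build a problem-independent solver schedule:
    at each step, pick the algorithm that solves the most
    training problems still unsolved, then mark those solved.
    `solves[a]` is the set of problems algorithm `a` solves
    within its budget (hypothetical input format)."""
    unsolved = set().union(*solves.values())
    schedule = []
    while unsolved:
        # Algorithm with the largest marginal gain on unsolved problems.
        best = max(solves, key=lambda a: len(solves[a] & unsolved))
        gained = solves[best] & unsolved
        if not gained:  # remaining problems unsolvable by the portfolio
            break
        schedule.append(best)
        unsolved -= gained
    return schedule

# Toy portfolio over five training problems.
solves = {"cma-es": {1, 2, 3}, "de": {3, 4}, "nelder-mead": {5}}
sched = greedy_schedule(solves)
```

On this toy input, the schedule starts with the algorithm covering the most problems and then adds solvers only for what remains, mirroring the marginal-gain selection rule the paper describes.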
arXiv Detail & Related papers (2025-04-15T17:54:21Z) - Pareto Optimal Algorithmic Recourse in Multi-cost Function [0.44938884406455726]
Algorithmic recourse aims to identify minimal-cost actions to alter an individual's features, thereby obtaining a desired outcome. Most current recourse mechanisms use gradient-based methods that assume cost functions are differentiable, which is often not applicable in real-world scenarios. This work proposes an algorithmic recourse framework that handles non-differentiable and discrete multi-cost functions.
arXiv Detail & Related papers (2025-02-11T03:16:08Z) - Runtime Analysis of Evolutionary Algorithms for Multiparty Multiobjective Optimization [0.8520624117635325]
This paper presents the first theoretical analysis of the expected runtime of evolutionary algorithms on bi-party multi-objective optimization problems (BPMOPs). Our findings demonstrate that employing traditional multi-objective optimization algorithms to solve MPMOPs is both time-consuming and inefficient. We propose co-evolutionary multi-party multi-objective optimizers (CoEMPMO) for pseudo-Boolean optimization and shortest path problems within a multi-party multi-objective context.
arXiv Detail & Related papers (2025-01-09T13:16:08Z) - Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications. We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance. We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it scales efficiently with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z) - Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on red fluorescent proteins show that our proposed method achieves baseline performance with 1.69x fewer queries.
arXiv Detail & Related papers (2024-06-21T05:57:08Z) - Socio-cognitive Optimization of Time-delay Control Problems using
Evolutionary Metaheuristics [89.24951036534168]
Metaheuristics are general-purpose optimization algorithms intended for difficult problems that are unsolvable by classic approaches.
In this paper, we aim at constructing a novel socio-cognitive metaheuristic based on castes, and apply several versions of this algorithm to the optimization of a time-delay system model.
arXiv Detail & Related papers (2022-10-23T22:21:10Z) - Enhanced Opposition Differential Evolution Algorithm for Multimodal
Optimization [0.2538209532048866]
Most real-world problems are multimodal in nature, consisting of multiple optimal values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We have proposed an algorithm known as Enhanced Opposition Differential Evolution (EODE) algorithm to solve the MMOPs.
arXiv Detail & Related papers (2022-08-23T16:18:27Z) - ARES: An Efficient Algorithm with Recurrent Evaluation and Sampling-Driven Inference for Maximum Independent Set [48.57120672468062]
This paper introduces an efficient algorithm for the Maximum Independent Set (MIS) problem, incorporating two innovative techniques. The proposed algorithm outperforms state-of-the-art algorithms in terms of solution quality, computational efficiency, and stability.
arXiv Detail & Related papers (2022-08-16T14:39:38Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Diversity in Kemeny Rank Aggregation: A Parameterized Approach [3.6603644500568806]
A recent trend in artificial intelligence, called solution diversity, has focused on the development of notions of optimality.
In this work, we investigate the impact of this combination in the field of Kemeny Rank Aggregation.
Our main results work both when considering the traditional setting of aggregation over linearly ordered votes, and in the more general setting where votes are partially ordered.
arXiv Detail & Related papers (2021-05-19T21:50:03Z) - Pareto Multi-Task Learning [53.90732663046125]
Multi-task learning is a powerful method for solving multiple correlated tasks simultaneously.
It is often impossible to find one single solution to optimize all the tasks, since different tasks might conflict with each other.
Recently, a novel method was proposed to find a single Pareto optimal solution with a good trade-off among different tasks by casting multi-task learning as multi-objective optimization.
arXiv Detail & Related papers (2019-12-30T08:58:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.