Random is Faster than Systematic in Multi-Objective Local Search
- URL: http://arxiv.org/abs/2601.06318v1
- Date: Fri, 09 Jan 2026 21:27:30 GMT
- Title: Random is Faster than Systematic in Multi-Objective Local Search
- Authors: Zimin Liang, Miqing Li
- Abstract summary: In multi-objective local search algorithms, a common practice is to maintain an archive of all non-dominated solutions found so far. A central issue in this process is how to explore the neighbourhood of a selected solution. We show that the random sampling method is consistently faster than the systematic exploration method across a range of multi-objective problems.
- Score: 8.143459364469708
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Local search is a fundamental method in operations research and combinatorial optimisation. It has been widely applied to a variety of challenging problems, including multi-objective optimisation where multiple, often conflicting, objectives need to be simultaneously considered. In multi-objective local search algorithms, a common practice is to maintain an archive of all non-dominated solutions found so far, from which the algorithm iteratively samples a solution to explore its neighbourhood. A central issue in this process is how to explore the neighbourhood of a selected solution. In general, there are two main approaches: 1) systematic exploration and 2) random sampling. The former systematically explores the solution's neighbours until a stopping condition is met -- for example, when the neighbourhood is exhausted (i.e., the best improvement strategy) or once a better solution is found (i.e., first improvement). In contrast, the latter randomly selects and evaluates only one neighbour of the solution. One may expect systematic exploration to be more efficient, as it avoids revisiting the same neighbours multiple times. In this paper, however, we show that this may not be the case. We first empirically demonstrate that the random sampling method is consistently faster than the systematic exploration method across a range of multi-objective problems. We then give an intuitive explanation for this phenomenon using toy examples, showing that the superior performance of the random sampling method relies on the distribution of "good neighbours". Next, we show that the number of such neighbours follows a certain probability distribution during the search. Lastly, building on this distribution, we provide a theoretical insight for why random sampling is more efficient than systematic exploration, regardless of whether the best improvement or first improvement strategy is used.
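The exploration schemes contrasted in the abstract can be sketched on a toy single-objective bitstring problem (OneMax). This is only an illustration of first improvement, best improvement, and single-neighbour random sampling under one-bit-flip neighbourhoods; it is not the paper's multi-objective, archive-based algorithm, and all function names are ours.

```python
import random

def neighbours(x):
    """All one-bit-flip neighbours of a bitstring x (list of 0/1 ints)."""
    return [x[:i] + [1 - x[i]] + x[i + 1:] for i in range(len(x))]

def first_improvement(x, f):
    """Systematic scan that stops at the first improving neighbour."""
    evals = 0
    for y in neighbours(x):
        evals += 1
        if f(y) > f(x):
            return y, evals
    return x, evals                      # neighbourhood exhausted: local optimum

def best_improvement(x, f):
    """Systematic scan that evaluates the whole neighbourhood, keeps the best."""
    ns = neighbours(x)
    best = max(ns, key=f)
    return (best if f(best) > f(x) else x), len(ns)

def random_sampling(x, f, rng):
    """Evaluate a single uniformly chosen neighbour."""
    i = rng.randrange(len(x))
    y = x[:i] + [1 - x[i]] + x[i + 1:]
    return (y if f(y) > f(x) else x), 1

onemax = sum                             # toy objective: number of ones
rng = random.Random(0)
x = [0] * 8
x_fi, cost_fi = first_improvement(x, onemax)
x_bi, cost_bi = best_improvement(x, onemax)
x_rs, cost_rs = random_sampling(x, onemax, rng)
```

From the all-zeros string every neighbour is improving, so the single random sample gains as much as the full best-improvement scan at one eighth of the evaluation cost; this is the kind of "distribution of good neighbours" intuition the abstract appeals to.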
Related papers
- Variable Search Stepsize for Randomized Local Search in Multi-Objective Combinatorial Optimization [12.70422671595511]
We present a simple yet effective local search method, called variable stepsize randomized local search (VS-RLS). VS-RLS transitions gradually from a broad, exploratory search in the early phases to a more focused, fine-grained search as the search progresses. We demonstrate the effectiveness and generalizability of VS-RLS through extensive evaluations against local search and multi-objective evolutionary algorithms.
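The summary does not specify VS-RLS's actual schedule. Purely as an illustration, a "variable stepsize" move for bitstrings could anneal the number of simultaneous bit flips from broad to fine-grained; the linear schedule and the `k_start` parameter here are our assumptions, not the authors' method.

```python
import random

def annealed_step(x, t, t_max, rng, k_start=4):
    """Illustrative variable-stepsize move: flip k bits of x, where k shrinks
    linearly from k_start at the start of the search to 1 near the end.
    (The actual VS-RLS schedule may differ; this is only a sketch.)"""
    frac = 1 - t / t_max                  # 1.0 at the start, 0.0 at the end
    k = max(1, round(k_start * frac))     # step size for this iteration
    y = x[:]
    for i in rng.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y, k
```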
arXiv Detail & Related papers (2026-02-05T13:59:05Z)
- A Review on Single-Problem Multi-Attempt Heuristic Optimization [6.778082328635129]
In certain real-world optimization scenarios, practitioners are not interested in solving multiple problems but rather in finding the best solution to a single, specific problem. When the computational budget is large relative to the cost of evaluating a candidate solution, multiple alternatives can be tried to solve the same given problem. The sequential selection of which alternative to try next is crucial for efficiently identifying the one that provides the best possible solution.
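The sequential-selection problem described above resembles a bandit allocation over alternatives. As a hedged sketch (a UCB-style rule of our choosing, not necessarily any method surveyed in the review), one might allocate attempts like this, keeping the single best solution quality seen:

```python
import math

def multi_attempt(alternatives, budget):
    """Sketch of sequential alternative selection: each 'alternative' is a
    zero-argument callable returning a solution quality, and a UCB-style
    rule decides which one to try next.  (Illustrative only.)"""
    n = len(alternatives)
    counts, means = [0] * n, [0.0] * n
    best = float("-inf")
    for t in range(budget):
        if t < n:
            i = t                         # try each alternative once first
        else:
            i = max(range(n),
                    key=lambda j: means[j] + math.sqrt(2 * math.log(t) / counts[j]))
        q = alternatives[i]()
        counts[i] += 1
        means[i] += (q - means[i]) / counts[i]   # running mean per alternative
        best = max(best, q)
    return best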
arXiv Detail & Related papers (2025-09-30T14:28:28Z)
- Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on the red fluorescent proteins show that our proposed method achieves the baseline performance in 1.69x fewer queries.
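The greedy-style subset selection above can be sketched generically: repeatedly add the candidate with the largest marginal gain under some set-valued score. Here `set_value` is a placeholder for the paper's actual acquisition objective, which is not given in this summary.

```python
def greedy_batch(candidates, k, set_value):
    """Greedy-style batch selection: grow the batch one element at a time,
    always adding the candidate with the largest marginal gain in set_value.
    (Illustrative stand-in for the paper's acquisition function.)"""
    batch = []
    pool = list(candidates)
    for _ in range(k):
        gains = [(set_value(batch + [c]) - set_value(batch), c) for c in pool]
        best_gain, best_c = max(gains, key=lambda gc: gc[0])
        batch.append(best_c)
        pool.remove(best_c)
    return batch
```

With a simple coverage score, the greedy rule picks complementary rather than overlapping candidates, which is the usual motivation for greedy batch construction.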
arXiv Detail & Related papers (2024-06-21T05:57:08Z)
- Dual-Directed Algorithm Design for Efficient Pure Exploration [9.728332815218181]
We develop a new design principle for pure-exploration problems that extends the top-two approach beyond best-arm identification. We prove that, when combined with Information-Directed Selection, top-two Thompson sampling attains optimality for best-arm identification. We also produce optimal algorithms for thresholding bandits and $\varepsilon$-best-arm identification.
arXiv Detail & Related papers (2023-10-30T07:29:17Z)
- Discretize Relaxed Solution of Spectral Clustering via a Non-Heuristic Algorithm [77.53604156112144]
We develop a first-order term to bridge the original problem and discretization algorithm.
Since the non-heuristic method is aware of the original graph cut problem, the final discrete solution is more reliable.
arXiv Detail & Related papers (2023-10-19T13:57:38Z)
- Fast Re-Optimization of LeadingOnes with Frequent Changes [0.9281671380673306]
We show that the re-optimization approach suggested by Doerr et al. reaches a limit when the problem instances are prone to more frequent changes.
We propose a modification of their algorithm which interpolates between greedy search around the previous-best and the current-best solution.
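For bitstrings, an interpolation between the previous-best and current-best solutions could be sketched as a biased per-bit crossover; the parameter `lam` and the per-bit rule are our assumptions for illustration, not the authors' exact operator.

```python
import random

def interpolated_start(prev_best, curr_best, lam, rng):
    """Illustrative interpolation between two bitstrings: each bit is taken
    from curr_best with probability lam and from prev_best otherwise, so
    lam=0 recovers the previous-best and lam=1 the current-best."""
    return [c if rng.random() < lam else p
            for p, c in zip(prev_best, curr_best)]
```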
arXiv Detail & Related papers (2022-09-09T16:51:41Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly-regularized objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Frequent Itemset-driven Search for Finding Minimum Node Separators in Complex Networks [61.2383572324176]
We propose a frequent itemset-driven search approach, which integrates the concept of frequent itemset mining in data mining into the well-known memetic search framework.
It iteratively employs the frequent itemset recombination operator to generate promising offspring solutions based on itemsets that frequently occur in high-quality solutions.
In particular, it discovers 29 new upper bounds and matches 18 previous best-known bounds.
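A minimal sketch of the itemset-driven idea: treat each high-quality solution as a set of elements, and seed an offspring with the elements that occur in at least a `theta` fraction of them. Both `theta` and the seeding rule are illustrative choices on our part, not the paper's exact recombination operator.

```python
from collections import Counter

def frequent_itemset_seed(elite, theta):
    """Elements occurring in at least a theta fraction of the elite
    solutions form the core of a new offspring solution (sketch only;
    a real memetic search would then complete and repair this core)."""
    counts = Counter(x for sol in elite for x in sol)
    return {x for x, c in counts.items() if c / len(elite) >= theta}
```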
arXiv Detail & Related papers (2022-01-18T11:16:40Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
- A flexible outlier detector based on a topology given by graph communities [0.0]
Anomaly detection is essential for optimal performance of machine learning methods and statistical predictive models.
Topology is computed using the communities of a weighted graph codifying mutual nearest neighbors in the feature space.
Our approach overall outperforms both local and global strategies in multi- and single-view settings.
arXiv Detail & Related papers (2020-02-18T18:40:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.