Exploiting Local Optimality in Metaheuristic Search
- URL: http://arxiv.org/abs/2010.05394v3
- Date: Wed, 21 Oct 2020 13:45:42 GMT
- Title: Exploiting Local Optimality in Metaheuristic Search
- Authors: Fred Glover
- Abstract summary: We introduce strategies to identify and take advantage of useful features of solution history with an adaptive memory metaheuristic.
Our approach uses a new type of adaptive memory based on a construction called exponential extrapolation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A variety of strategies have been proposed for overcoming local optimality in
metaheuristic search. This paper examines characteristics of moves that can be
exploited to make good decisions about steps that lead away from a local
optimum and then lead toward a new local optimum. We introduce strategies to
identify and take advantage of useful features of solution history with an
adaptive memory metaheuristic, to provide rules for selecting moves that offer
promise for discovering improved local optima.
Our approach uses a new type of adaptive memory based on a construction
called exponential extrapolation. The memory operates by means of threshold
inequalities that ensure selected moves will not lead to a specified number of
most recently encountered local optima. Associated thresholds are embodied in
choice rule strategies that further exploit the exponential extrapolation
concept. Together these produce a threshold based Alternating Ascent (AA)
algorithm that opens a variety of research possibilities for exploration.
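
The abstract does not spell out the exponential extrapolation construction, but the idea it describes can be illustrated with a small sketch. The Python snippet below is an interpretation of that description, not the paper's exact method or its Alternating Ascent algorithm: each recorded local optimum t contributes weight 2**t to a per-variable integer memory, so the most recent optimum dominates all earlier ones, and a simple admissibility test keeps a move from recreating any of the lambda most recently encountered local optima. The names ExponentialExtrapolationMemory and move_is_admissible are hypothetical.

```python
# Illustrative sketch only (an interpretation of the abstract, not Glover's
# exact construction): local optimum number t is stored with weight 2**t in a
# per-variable integer memory, and candidate moves are screened against the
# lambda_ most recently recorded local optima before being accepted.

class ExponentialExtrapolationMemory:
    def __init__(self, n_vars: int, lambda_: int = 3):
        self.n_vars = n_vars        # length of the binary solution vector
        self.lambda_ = lambda_      # number of recent local optima to avoid
        self.memory = [0] * n_vars  # memory[j] packs x_j across all recorded optima
        self.num_optima = 0         # how many local optima have been recorded

    def record_local_optimum(self, x):
        """Add local optimum x (sequence of 0/1) with weight 2**t, t = its index."""
        for j, xj in enumerate(x):
            self.memory[j] += xj << self.num_optima
        self.num_optima += 1

    def _matches_optimum(self, x, t):
        """True if x coincides with the t-th recorded local optimum."""
        return all(((self.memory[j] >> t) & 1) == xj for j, xj in enumerate(x))

    def move_is_admissible(self, x, flip_j):
        """Flipping variable flip_j of x is admissible only if the resulting
        solution differs from each of the lambda_ most recent local optima."""
        y = list(x)
        y[flip_j] = 1 - y[flip_j]
        recent = range(max(0, self.num_optima - self.lambda_), self.num_optima)
        return not any(self._matches_optimum(y, t) for t in recent)


# Tiny usage example: after recording two local optima, a flip that would
# recreate the most recent one is blocked, while another flip stays admissible.
mem = ExponentialExtrapolationMemory(n_vars=4, lambda_=2)
mem.record_local_optimum([1, 0, 1, 0])
mem.record_local_optimum([1, 1, 1, 0])
print(mem.move_is_admissible([1, 0, 1, 0], flip_j=1))  # False: would recreate [1,1,1,0]
print(mem.move_is_admissible([1, 0, 1, 0], flip_j=3))  # True:  yields [1,0,1,1]
```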
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically find new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z) - Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO)
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search for well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z) - Learning Regions of Interest for Bayesian Optimization with Adaptive
Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z) - The Behavior and Convergence of Local Bayesian Optimization [20.568490114736818]
Local optimization strategies can deliver strong empirical performance on high-dimensional problems compared to traditional global strategies.
We first study the behavior of the local approach, and find that the statistics of individual local solutions of Gaussian process sample paths are surprisingly good compared to what we would expect to recover from global methods.
arXiv Detail & Related papers (2023-05-24T21:11:49Z) - Online Joint Assortment-Inventory Optimization under MNL Choices [14.530542487845732]
We study an online joint assortment-inventory optimization problem, in which we assume that the choice behavior of each customer follows the Multinomial Logit (MNL) choice model (a generic MNL sketch appears after this list).
We propose a novel algorithm that can effectively balance the exploration and exploitation in the online decision-making of assortment and inventory.
arXiv Detail & Related papers (2023-04-04T09:25:34Z) - Adversarial Option-Aware Hierarchical Imitation Learning [89.92994158193237]
We propose Option-GAIL, a novel method for learning skills over long horizons.
The key idea of Option-GAIL is to model the task hierarchy with options and to train the policy via generative adversarial optimization.
Experiments show that Option-GAIL outperforms other counterparts consistently across a variety of tasks.
arXiv Detail & Related papers (2021-06-10T06:42:05Z) - Approximation Guarantees of Local Search Algorithms via Localizability
of Set Functions [9.89901717499058]
Local search is a basic algorithm and is widely used for various optimization problems.
We provide approximation guarantees of standard local search algorithms under various constraints in terms of localizability.
The main application of our framework is sparse optimization, for which we show that restricted strong concavity and restricted smoothness of the objective function imply localizability.
arXiv Detail & Related papers (2020-06-02T05:37:52Z) - Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z) - Learning to be Global Optimizer [28.88646928299302]
We learn an optimization network, together with an algorithm with the capability of escaping local optima, for some benchmark functions.
We show that the learned algorithm significantly outperforms some well-known classical optimization algorithms.
arXiv Detail & Related papers (2020-03-10T03:46:25Z)
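
As a side note to the assortment-inventory entry above, which assumes the Multinomial Logit (MNL) choice model: the sketch below shows the standard, generic MNL purchase probabilities (attraction weights v_j and a no-purchase weight of 1), not that paper's online algorithm. The function name mnl_choice_probabilities is hypothetical.

```python
# Generic Multinomial Logit (MNL) choice probabilities, for reference only;
# this is the textbook model, not the assortment-inventory algorithm of the
# related paper above.  The no-purchase option has attraction weight 1.

def mnl_choice_probabilities(assortment, v):
    """Return {product: purchase probability} plus the no-purchase probability.

    assortment: iterable of product ids offered to the customer.
    v: dict mapping product id -> attraction weight (> 0).
    """
    denom = 1.0 + sum(v[i] for i in assortment)    # 1.0 is the no-purchase weight
    probs = {i: v[i] / denom for i in assortment}
    probs["no_purchase"] = 1.0 / denom
    return probs


# Example: offering products 1 and 3 with weights v1 = 2.0 and v3 = 1.0.
print(mnl_choice_probabilities([1, 3], {1: 2.0, 2: 0.5, 3: 1.0}))
# {1: 0.5, 3: 0.25, 'no_purchase': 0.25}
```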