Can Global Optimization Strategy Outperform Myopic Strategy for Bayesian
Parameter Estimation?
- URL: http://arxiv.org/abs/2007.00373v1
- Date: Wed, 1 Jul 2020 10:31:16 GMT
- Title: Can Global Optimization Strategy Outperform Myopic Strategy for Bayesian
Parameter Estimation?
- Authors: Juanping Zhu, Hairong Gu
- Abstract summary: This paper provides a discouraging answer based on experimental simulations comparing the performance improvement and burden between global and myopic strategies.
The added horizon in global strategies contributes negligibly to the improvement of the optimal global utility beyond the most immediate next steps.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian adaptive inference is widely used in psychophysics to estimate
psychometric parameters. Most applications use a myopic one-step-ahead strategy
that optimizes only the immediate utility. The widely held expectation is that
global optimization strategies, which explicitly optimize over some horizon, can
greatly improve on the performance of the myopic strategy. With few studies
comparing myopic and global strategies, this expectation has gone unchallenged,
and researchers are still investing heavily to achieve global optimization. Is
that really worthwhile? This paper provides a discouraging answer based on
experimental simulations comparing the performance improvement and computational
burden of global and myopic strategies in parameter estimation for multiple
models. The finding is that the added horizon in global strategies contributes
negligibly to the improvement of the optimal global utility beyond the most
immediate next step (the myopic strategy). A mathematical recursion is derived
to prove that the utility contribution of each added horizon step diminishes
rapidly as that step moves further into the future.
Related papers
- Nonmyopic Global Optimisation via Approximate Dynamic Programming [14.389086937116582]
We introduce novel nonmyopic acquisition strategies tailored to IDW- and RBF-based global optimisation.
Specifically, we develop dynamic programming-based paradigms, including rollout and multi-step scenario-based optimisation schemes.
arXiv Detail & Related papers (2024-12-06T09:25:00Z) - Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Neural Tangent Kernel-based derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z) - An Invariant Information Geometric Method for High-Dimensional Online
Optimization [9.538618632613714]
We introduce a fully invariance-oriented evolution strategies algorithm, SynCMA, derived from its corresponding framework.
We benchmark SynCMA against leading algorithms in Bayesian optimization and evolution strategies.
In all scenarios, SynCMA demonstrates great competence, if not dominance, over other algorithms in sample efficiency.
arXiv Detail & Related papers (2024-01-03T07:06:26Z) - Iterative Layerwise Training for Quantum Approximate Optimization
Algorithm [0.39945675027960637]
The capability of the quantum approximate optimization algorithm (QAOA) in solving optimization problems has been intensively studied in recent years.
We propose an iterative layerwise optimization strategy and explore the possibility of reducing the optimization cost of solving problems with QAOA.
arXiv Detail & Related papers (2023-09-24T05:12:48Z) - DeepHive: A multi-agent reinforcement learning approach for automated
discovery of swarm-based optimization policies [0.0]
The state of each agent within the swarm is defined as its current position and function value within a design space.
The proposed approach is tested on various benchmark optimization functions and compared to the performance of other global optimization strategies.
arXiv Detail & Related papers (2023-03-29T18:08:08Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Understanding the Effect of Stochasticity in Policy Optimization [86.7574122154668]
We show that the preferability of optimization methods depends critically on whether exact gradients are used.
Second, to explain these findings we introduce the concept of committal rate for policy optimization.
Third, we show that in the absence of external oracle information, there is an inherent trade-off between exploiting geometry to accelerate convergence versus achieving optimality almost surely.
arXiv Detail & Related papers (2021-10-29T06:35:44Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on only a few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Mixed Strategies for Robust Optimization of Unknown Objectives [93.8672371143881]
We consider robust optimization problems, where the goal is to optimize an unknown objective function against the worst-case realization of an uncertain parameter.
We design a novel sample-efficient algorithm GP-MRO, which sequentially learns about the unknown objective from noisy point evaluations.
GP-MRO seeks to discover a robust and randomized mixed strategy, that maximizes the worst-case expected objective value.
arXiv Detail & Related papers (2020-02-28T09:28:17Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.