Sample-and-Bound for Non-Convex Optimization
- URL: http://arxiv.org/abs/2401.04812v3
- Date: Tue, 20 Feb 2024 00:18:16 GMT
- Title: Sample-and-Bound for Non-Convex Optimization
- Authors: Yaoguang Zhai, Zhizhen Qin, Sicun Gao
- Abstract summary: We propose new sampling-based methods for non-convex optimization that adapt Monte Carlo Tree Search (MCTS) to improve efficiency.
We evaluate the proposed algorithms on high-dimensional non-convex optimization benchmarks against competitive baselines.
- Score: 18.30858789210194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard approaches for global optimization of non-convex functions, such as
branch-and-bound, maintain partition trees to systematically prune the domain.
The tree size grows exponentially in the number of dimensions. We propose new
sampling-based methods for non-convex optimization that adapt Monte Carlo Tree
Search (MCTS) to improve efficiency. Instead of the standard use of visitation
count in Upper Confidence Bounds, we utilize numerical overapproximations of
the objective as an uncertainty metric, and also take into account sampled
estimates of first-order and second-order information. The Monte Carlo tree in
our approach avoids the usual fixed combinatorial patterns in growing the tree,
and aggressively zooms into the promising regions, while still balancing
exploration and exploitation. We evaluate the proposed algorithms on
high-dimensional non-convex optimization benchmarks against competitive
baselines and analyze the effects of the hyperparameters.
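The selection-rule change is concrete enough to sketch. In standard UCB, a node's exploration bonus is driven by visitation counts; here it is instead driven by a numerical overapproximation (e.g., an interval bound) of the objective over the node's subregion, so the search zooms into boxes that may still hide a much better optimum. The following minimal Python sketch assumes minimization; the `Node` fields and the scoring formula are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: UCB-style node selection where the exploration bonus
# comes from an interval overapproximation of the objective on a node's
# box subregion, rather than from visitation counts. Illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    lo: List[float]                    # lower corner of the box subregion
    hi: List[float]                    # upper corner of the box subregion
    best_sample: float = float("inf")  # best objective value sampled in the box
    bound_lo: float = float("-inf")    # certified lower bound of f over the box
    children: List["Node"] = field(default_factory=list)

def selection_score(node: Node, c: float = 1.0) -> float:
    # Exploit: prefer boxes with good (small) sampled values.
    # Explore: prefer boxes whose certified lower bound leaves a large
    # gap below the best sample -- the box may still hide the optimum.
    uncertainty = node.best_sample - node.bound_lo
    return -node.best_sample + c * uncertainty

def select_child(node: Node) -> Node:
    # Descend toward the child whose subregion looks most promising.
    return max(node.children, key=selection_score)
```

Sampled first- and second-order estimates could be folded into the same score, e.g., shrinking the bonus for boxes whose local curvature suggests they are already well resolved.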
Related papers
- Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO)
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piecewise-constant acquisition function (see the sketch below).
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
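Challenge (ii) is easy to demonstrate: a tree-ensemble surrogate predicts a constant value between split thresholds, so any acquisition function built from it has zero gradient almost everywhere and defeats gradient-based optimizers. A small self-contained illustration, using scikit-learn's RandomForestRegressor as a stand-in surrogate (an assumption, not the paper's model):

```python
# Demonstrates that a tree-ensemble surrogate is piecewise constant:
# a dense 1-D sweep of its predictions is flat almost everywhere.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=200)

forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

grid = np.linspace(-2.0, 2.0, 1000).reshape(-1, 1)
pred = forest.predict(grid)

# Fraction of adjacent grid points with identical predictions (~1.0):
flat = np.mean(np.isclose(np.diff(pred), 0.0))
print(f"zero-gradient fraction of the sweep: {flat:.2f}")
```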
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering [33.000371053304676]
We present the first continuous relaxation of Dasgupta's discrete optimization problem with provable quality guarantees.
We show that even approximate solutions found with gradient descent have superior quality to agglomerative clusterings.
We also highlight the flexibility of HypHC using end-to-end training in a downstream classification task.
arXiv Detail & Related papers (2020-10-01T13:43:19Z)
- Optimal Decision Trees for Nonlinear Metrics [42.18286681448184]
We show a novel algorithm for producing optimal trees for nonlinear metrics.
To the best of our knowledge, this is the first method to compute provably optimal decision trees for nonlinear metrics.
Our approach leads to a trade-off when compared to optimizing linear metrics.
arXiv Detail & Related papers (2020-09-15T08:30:56Z)
- Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm; a sketch of this split criterion follows below.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
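The split criterion can be sketched in a few lines: instead of scoring a candidate split by variance reduction, score it by the downstream cost of the best single decision committed to in each induced leaf. The asymmetric, newsvendor-style cost below and the data layout are illustrative assumptions, not the paper's API.

```python
# Sketch of a decision-quality split criterion: a split is scored by the
# downstream cost of the best per-leaf decision, not by prediction error.
import numpy as np

def decision_cost(y: np.ndarray, z: float) -> float:
    # Cost of committing to decision z for outcomes y; an asymmetric
    # newsvendor-style loss is used purely for illustration.
    return float(np.mean(np.where(y > z, 2.0 * (y - z), z - y)))

def leaf_cost(y: np.ndarray, candidates: np.ndarray) -> float:
    # Cost of the single best decision for this leaf.
    return min(decision_cost(y, z) for z in candidates)

def split_score(x: np.ndarray, y: np.ndarray, threshold: float,
                candidates: np.ndarray) -> float:
    # Weighted downstream cost of the two leaves induced by the split
    # (lower is better); degenerate splits are rejected outright.
    left, right = y[x <= threshold], y[x > threshold]
    if len(left) == 0 or len(right) == 0:
        return float("inf")
    n = len(y)
    return (len(left) / n) * leaf_cost(left, candidates) \
        + (len(right) / n) * leaf_cost(right, candidates)
```

A tree grower would then pick, at each node, the (feature, threshold) pair minimizing split_score; the paper's approximate criteria make this scoring cheap enough to run at scale.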
arXiv Detail & Related papers (2020-08-17T16:56:06Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- Additive Tree-Structured Covariance Function for Conditional Parameter Spaces in Bayesian Optimization [34.89735938765757]
We generalize the additive assumption to tree-structured functions.
By incorporating the structure information of parameter spaces and the additive assumption in the BO loop, we develop a parallel algorithm to optimize the acquisition function.
arXiv Detail & Related papers (2020-06-21T11:21:55Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
- ENTMOOT: A Framework for Optimization over Ensemble Tree Models [57.98561336670884]
ENTMOOT is a framework for integrating tree models into larger optimization problems.
We show how ENTMOOT allows a simple integration of tree models into decision-making and black-box optimization.
arXiv Detail & Related papers (2020-03-10T14:34:07Z)
- Bayesian optimization for backpropagation in Monte-Carlo tree search [1.52292571922932]
We present two methods, Softmax MCTS and Monotone MCTS, which generalize previous attempts to improve upon the backpropagation strategy (see the sketch after this entry).
We conduct experiments on computer Go, where the returns are given by a deep value neural network, and show that our proposed framework outperforms previous methods.
arXiv Detail & Related papers (2020-01-25T14:33:38Z)
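To make the backpropagation idea concrete: standard MCTS backs up the mean of simulation returns, while a softmax-style backup interpolates between the mean and the max of child values. The sketch below shows that general contrast under illustrative names; it is not the paper's exact Softmax MCTS or Monotone MCTS formulation.

```python
# Contrast between a mean backup and a softmax-style backup in MCTS.
import math
from typing import List

def mean_backup(child_values: List[float]) -> float:
    # Standard backup: average of the children's value estimates.
    return sum(child_values) / len(child_values)

def softmax_backup(child_values: List[float], temperature: float = 1.0) -> float:
    # Temperature-weighted average: as temperature -> 0 this approaches
    # the max backup; large temperatures approach the plain mean.
    m = max(child_values)  # subtract the max for numerical stability
    weights = [math.exp((v - m) / temperature) for v in child_values]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, child_values)) / total

print(mean_backup([0.1, 0.2, 0.9]))          # 0.4
print(softmax_backup([0.1, 0.2, 0.9], 0.1))  # close to 0.9
```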