A Nested Weighted Tchebycheff Multi-Objective Bayesian Optimization
Approach for Flexibility of Unknown Utopia Estimation in Expensive Black-box
Design Problems
- URL: http://arxiv.org/abs/2110.11070v1
- Date: Sat, 16 Oct 2021 00:44:06 GMT
- Title: A Nested Weighted Tchebycheff Multi-Objective Bayesian Optimization
Approach for Flexibility of Unknown Utopia Estimation in Expensive Black-box
Design Problems
- Authors: Arpan Biswas, Claudio Fuentes, Christopher Hoyle
- Abstract summary: In existing work, a weighted Tchebycheff MOBO approach has been demonstrated that attempts to estimate the unknown utopia point when formulating the acquisition function.
We propose a nested weighted Tchebycheff MOBO framework in which we build a regression model selection procedure from an ensemble of models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a nested weighted Tchebycheff multi-objective Bayesian
optimization (MOBO) framework in which we build a regression model selection
procedure from an ensemble of models, toward better estimation of the
uncertain parameters (the unknown utopia point) of the weighted Tchebycheff
formulation of the expensive black-box multi-objective function. In existing
work, a weighted Tchebycheff MOBO approach has been demonstrated that attempts
to estimate the unknown utopia point when formulating the acquisition
function, through calibration with an a priori selected regression model.
However, the existing MOBO model lacks the flexibility to select an
appropriate regression model for the guided sample data and can therefore
under-fit or over-fit as the MOBO iterations progress, reducing overall MOBO
performance. Because a best model cannot, in general, be guaranteed a priori,
we consider a portfolio of predictive models from different families, fitted
to the current training data guided by the weighted Tchebycheff (WTB) MOBO;
the best model is selected using a user-defined prediction
root-mean-square-error (RMSE) criterion. The proposed approach is applied to
a multi-modal benchmark problem and to the design of a thin tube under
constant temperature-pressure loading, minimizing the risk of creep-fatigue
failure and the design cost. Finally, the performance of the nested weighted
Tchebycheff MOBO model is compared with that of other MOBO frameworks with
respect to accuracy of parameter estimation, Pareto-optimal solutions, and
function-evaluation cost. The method is general enough to accommodate
different families of predictive models in the portfolio for best-model
selection; the overall design architecture supports high-dimensional
(multiple-objective) complex black-box problems and can be extended to other
global-criterion multi-objective optimization methods that require prior
knowledge of the utopia point.
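The two core ingredients described above, portfolio-based regression model selection by prediction RMSE and the weighted Tchebycheff scalarization max_j w_j |f_j(x) - z*_j| built around an estimated utopia point z*, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the scikit-learn model portfolio, the use of cross-validation to compute the prediction RMSE, the toy objectives, and all function names are assumptions made only for this example.

```python
# Minimal sketch (assumptions noted above): pick the best regression model
# from a portfolio by cross-validated RMSE, use it to estimate the utopia
# point, and scalarize objectives with a weighted Tchebycheff metric.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score


def select_model_by_rmse(X, y):
    """Return the portfolio model with the lowest cross-validated RMSE, fitted on (X, y)."""
    portfolio = [GaussianProcessRegressor(),
                 RandomForestRegressor(n_estimators=100),
                 Ridge()]

    def rmse(model):
        scores = cross_val_score(model, X, y,
                                 scoring="neg_root_mean_squared_error", cv=3)
        return -scores.mean()

    best = min(portfolio, key=rmse)
    return best.fit(X, y)


def estimate_utopia(X, Y, X_candidates):
    """Estimate the utopia point z* as the per-objective predicted minimum over candidates."""
    z_star = []
    for j in range(Y.shape[1]):
        model = select_model_by_rmse(X, Y[:, j])
        z_star.append(model.predict(X_candidates).min())
    return np.array(z_star)


def weighted_tchebycheff(F, weights, z_star):
    """Scalarize objective vectors F (n x m) as max_j w_j |f_j - z*_j|."""
    return np.max(weights * np.abs(F - z_star), axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(30, 2))                  # sampled designs
    Y = np.column_stack([np.sum(X**2, axis=1),            # toy objective 1
                         np.sum((X - 1.0)**2, axis=1)])   # toy objective 2
    X_cand = rng.uniform(-2, 2, size=(500, 2))            # candidate designs
    z_star = estimate_utopia(X, Y, X_cand)
    w = np.array([0.5, 0.5])
    # In a full MOBO loop the scalarized values would feed an acquisition
    # function; here we only rank the already-sampled designs.
    scores = weighted_tchebycheff(Y, w, z_star)
    print("estimated utopia:", z_star, "best sampled design:", X[np.argmin(scores)])
```

In the full framework, the utopia estimate would be refreshed at every MOBO iteration from the newly guided samples, and the Tchebycheff scalarization would enter the acquisition function that selects the next expensive evaluation.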
Related papers
- MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel, low-compute algorithm, Model Merging with Amortized Pareto Front (MAP).
MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved.
We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z)
- Diffusion Model for Data-Driven Black-Box Optimization [54.25693582870226]
We focus on diffusion models, a powerful generative AI technology, and investigate their potential for black-box optimization.
We study two practical types of labels: 1) noisy measurements of a real-valued reward function and 2) human preference based on pairwise comparisons.
Our proposed method reformulates the design optimization problem into a conditional sampling problem, which allows us to leverage the power of diffusion models.
arXiv Detail & Related papers (2024-03-20T00:41:12Z)
- Action-State Dependent Dynamic Model Selection [6.5268245109828005]
A reinforcement learning algorithm is used to approximate and estimate, from the data, the optimal solution to a dynamic programming problem.
A typical example is switching between different portfolio models under rebalancing costs.
Using a set of macroeconomic variables and price data, an empirical application shows superior performance compared to choosing the best portfolio model with hindsight.
arXiv Detail & Related papers (2023-07-07T09:23:14Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Evaluating model-based planning and planner amortization for continuous control [79.49319308600228]
We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning.
We find that well-tuned model-free agents are strong baselines even for high DoF control problems.
We show that it is possible to distil a model-based planner into a policy that amortizes the planning without any loss of performance.
arXiv Detail & Related papers (2021-10-07T12:00:40Z)
- Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has been done to automate machine learning algorithms, highlighting the importance of model choice.
Addressing analytical tractability and computational feasibility in an idealistic fashion helps ensure efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z)
- Personalizing Performance Regression Models to Black-Box Optimization Problems [0.755972004983746]
In this work, we propose a personalized regression approach for numerical optimization problems.
We also investigate the impact of selecting not a single regression model per problem, but personalized ensembles.
We test our approach on predicting the performance of numerical optimizations on the BBOB benchmark collection.
arXiv Detail & Related papers (2021-04-22T11:47:47Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- On Statistical Efficiency in Learning [37.08000833961712]
We address the challenge of model selection to strike a balance between model fitting and model complexity.
We propose an online algorithm that sequentially expands the model complexity to enhance selection stability and reduce cost.
Experimental studies show that the proposed method has desirable predictive power and significantly less computational cost than some popular methods.
arXiv Detail & Related papers (2020-12-24T16:08:29Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.