Local Latin Hypercube Refinement for Multi-objective Design Uncertainty Optimization
- URL: http://arxiv.org/abs/2108.08890v1
- Date: Thu, 19 Aug 2021 19:46:38 GMT
- Title: Local Latin Hypercube Refinement for Multi-objective Design Uncertainty Optimization
- Authors: Can Bogoclu, Dirk Roos, Tamara Nestorović
- Abstract summary: We propose a sequential sampling strategy for the surrogate-based solution of robust design optimization problems.
The proposed local Latin hypercube refinement (LoLHR) strategy is model-agnostic and can be combined with any surrogate model.
LoLHR achieves on average better results than other surrogate-based strategies on the tested examples.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Optimizing the reliability and the robustness of a design is important but
often unaffordable due to high sample requirements. Surrogate models based on
statistical and machine learning methods are used to increase the sample
efficiency. However, for higher dimensional or multi-modal systems, surrogate
models may also require a large amount of samples to achieve good results. We
propose a sequential sampling strategy for the surrogate-based solution of
multi-objective reliability-based robust design optimization problems. The
proposed local Latin hypercube refinement (LoLHR) strategy is model-agnostic and can be
combined with any surrogate model because there is no free lunch but possibly a
budget one. The proposed method is compared to stationary sampling as well as
other proposed strategies from the literature. Gaussian process and support
vector regression are both used as surrogate models. Empirical evidence is
presented, showing that LoLHR achieves on average better results compared to
other surrogate-based strategies on the tested examples.
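Since the abstract only sketches the method, the general idea can be illustrated with a toy, single-objective sketch; this is not the authors' algorithm, and all names and parameters are illustrative (the sphere function stands in for an expensive model): a global Latin hypercube design, followed by repeated local Latin hypercube designs in a box that shrinks around the incumbent best point.

```python
import random

def latin_hypercube(n, bounds, rng):
    """Draw n stratified samples in the given box: each dimension is split into
    n equal strata, and a random permutation pairs strata across dimensions."""
    perms = [rng.sample(range(n), n) for _ in bounds]
    points = []
    for i in range(n):
        point = []
        for perm, (lo, hi) in zip(perms, bounds):
            u = (perm[i] + rng.random()) / n  # position inside the chosen stratum
            point.append(lo + u * (hi - lo))
        points.append(point)
    return points

def lolhr_sketch(f, bounds, n_init=20, n_local=10, n_iter=5, shrink=0.5, seed=0):
    """Global LHS design, then repeated local LHS designs in a box that
    shrinks around the incumbent best point."""
    rng = random.Random(seed)
    samples = [(p, f(p)) for p in latin_hypercube(n_init, bounds, rng)]
    box = list(bounds)
    for _ in range(n_iter):
        best_point, _ = min(samples, key=lambda s: s[1])
        # Centre a smaller box on the incumbent, clipped to the previous box.
        box = [(max(lo, c - shrink * (hi - lo) / 2),
                min(hi, c + shrink * (hi - lo) / 2))
               for (lo, hi), c in zip(box, best_point)]
        samples += [(p, f(p)) for p in latin_hypercube(n_local, box, rng)]
    return min(samples, key=lambda s: s[1])

sphere = lambda x: sum(v * v for v in x)  # cheap stand-in for an expensive model
best_x, best_y = lolhr_sketch(sphere, [(-5.0, 5.0), (-5.0, 5.0)])
```

What separates this from plain uniform sampling is the stratification: every dimension is cut into n equal slices and each slice is used exactly once, so even small batches cover the box evenly.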
Related papers
- Preference Optimization with Multi-Sample Comparisons [53.02717574375549]
We introduce a novel approach that extends post-training to include multi-sample comparisons.
Single-sample approaches fail to capture critical characteristics such as generative diversity and bias.
We demonstrate that multi-sample comparison is more effective in optimizing collective characteristics than single-sample comparison.
arXiv Detail & Related papers (2024-10-16T00:59:19Z)
- Model-Free Active Exploration in Reinforcement Learning [53.786439742572995]
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution.
Our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
arXiv Detail & Related papers (2024-06-30T19:00:49Z)
- Regression-aware Inference with LLMs [52.764328080398805]
We show that typical inference strategies can be sub-optimal for common regression and scoring evaluation metrics.
We propose alternate inference strategies that estimate the Bayes-optimal solution for regression and scoring metrics in closed-form from sampled responses.
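For intuition, the closed-form estimators referred to here are classical: under squared error, the Bayes-optimal point prediction given sampled responses is their mean; under absolute error, their median. A minimal stdlib sketch (the function name is illustrative, not the paper's API):

```python
import statistics

def bayes_optimal_estimate(samples, metric="squared_error"):
    """Closed-form minimizer of the expected loss over sampled responses."""
    if metric == "squared_error":    # E[(y - a)^2] is minimized by the mean
        return statistics.fmean(samples)
    if metric == "absolute_error":   # E[|y - a|] is minimized by the median
        return statistics.median(samples)
    raise ValueError(f"unknown metric: {metric}")

responses = [2.0, 3.0, 10.0]          # e.g. scores sampled from a model
print(bayes_optimal_estimate(responses))                    # 5.0
print(bayes_optimal_estimate(responses, "absolute_error"))  # 3.0
```

Note how the two metrics disagree on the same samples: the outlier 10.0 pulls the mean but not the median, which is exactly why the choice of inference strategy should match the evaluation metric.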
arXiv Detail & Related papers (2024-03-07T03:24:34Z)
- Optimal Budgeted Rejection Sampling for Generative Models [54.050498411883495]
Rejection sampling methods have been proposed to improve the performance of discriminator-based generative models.
We first propose an Optimal Budgeted Rejection Sampling scheme that is provably optimal.
Second, we propose an end-to-end method that incorporates the sampling scheme into the training procedure to further enhance the model's overall performance.
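The paper's provably optimal scheme is more involved, but the budgeted setting itself is easy to picture: classic rejection sampling that simply stops after a fixed number of proposals. A minimal sketch with an assumed toy target (density 2x on [0, 1], uniform proposal), not the paper's method:

```python
import random

def budgeted_rejection_sample(target_pdf, propose, proposal_pdf, bound, budget, rng):
    """Classic rejection sampling, stopped after a fixed proposal budget."""
    accepted = []
    for _ in range(budget):
        x = propose(rng)
        # Accept with probability target_pdf(x) / (bound * proposal_pdf(x)).
        if rng.random() * bound * proposal_pdf(x) <= target_pdf(x):
            accepted.append(x)
    return accepted

# Toy target: triangular density 2x on [0, 1]; uniform proposal, bound M = 2.
rng = random.Random(1)
xs = budgeted_rejection_sample(lambda x: 2.0 * x, lambda r: r.random(),
                               lambda x: 1.0, 2.0, 2000, rng)
# Roughly half the proposals are accepted (acceptance rate 1/M), and the
# sample mean approaches 2/3, the mean of the triangular density.
```

The budget caps the cost but leaves the yield random; the paper's contribution is choosing the acceptance rule optimally under that cap, rather than using the fixed threshold above.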
arXiv Detail & Related papers (2023-11-01T11:52:41Z)
- Gradient and Uncertainty Enhanced Sequential Sampling for Global Fit [0.0]
This paper proposes a new sampling strategy for global fit called Gradient and Uncertainty Enhanced Sequential Sampling (GUESS).
We show that GUESS achieved on average the highest sample efficiency compared to other surrogate-based strategies on the tested examples.
arXiv Detail & Related papers (2023-09-29T19:49:39Z)
- Robust Model-Based Optimization for Challenging Fitness Landscapes [96.63655543085258]
Protein design involves optimization on a fitness landscape.
Leading methods are challenged by sparsity of high-fitness samples in the training set.
We show that this problem of "separation" in the design space is a significant bottleneck in existing model-based optimization tools.
We propose a new approach that uses a novel VAE as its search model to overcome the problem.
arXiv Detail & Related papers (2023-05-23T03:47:32Z)
- Optimizing Hyperparameters with Conformal Quantile Regression [7.316604052864345]
We propose to leverage conformalized quantile regression which makes minimal assumptions about the observation noise.
This translates to quicker HPO convergence on empirical benchmarks.
arXiv Detail & Related papers (2023-05-05T15:33:39Z)
- General multi-fidelity surrogate models: Framework and active learning strategies for efficient rare event simulation [1.708673732699217]
Estimating the probability of failure for complex real-world systems is often prohibitively expensive.
This paper presents a robust multi-fidelity surrogate modeling strategy.
It is shown to be highly accurate while drastically reducing the number of high-fidelity model calls.
arXiv Detail & Related papers (2022-12-07T00:03:21Z)
- DeepAL for Regression Using $\epsilon$-weighted Hybrid Query Strategy [0.799536002595393]
We propose a novel sampling technique that combines active learning (AL) with deep learning (DL).
We call this method the $\epsilon$-weighted hybrid query strategy ($\epsilon$-HQS).
During the empirical evaluation, better accuracy of the surrogate was observed in comparison to other methods of sample selection.
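The exact $\epsilon$-HQS criterion is specific to the paper; the generic shape of an $\epsilon$-weighted hybrid query, though, is an $\epsilon$-greedy mix of random exploration and a surrogate-driven informativeness score. A sketch with a distance-to-labeled score standing in for the learned criterion (all names illustrative):

```python
import random

def hybrid_query(pool, labeled, epsilon, rng):
    """epsilon-greedy hybrid query: explore at random with probability epsilon,
    otherwise exploit an informativeness score."""
    if rng.random() < epsilon:
        return rng.choice(pool)  # exploration: uniform random query
    # Exploitation: pick the candidate farthest from any labeled point
    # (a simple stand-in for a learned informativeness score).
    return max(pool, key=lambda x: min(abs(x - l) for l in labeled))

rng = random.Random(0)
pool = [i / 10 for i in range(11)]  # candidate inputs on [0, 1]
labeled = [0.0, 1.0]
picked = hybrid_query(pool, labeled, epsilon=0.0, rng=rng)
# With epsilon = 0 the pure-exploitation choice is the midpoint 0.5.
```

Mixing in random queries with probability $\epsilon$ guards against a poor score function repeatedly querying the same uninformative region.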
arXiv Detail & Related papers (2022-06-24T14:38:05Z)
- Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic [67.00475077281212]
Model-based reinforcement learning algorithms are more sample efficient than their model-free counterparts.
We propose Conservative Model-Based Actor-Critic (CMBAC), a novel approach that achieves high sample efficiency without strong reliance on accurate learned models.
We show that CMBAC significantly outperforms state-of-the-art approaches in terms of sample efficiency on several challenging tasks.
arXiv Detail & Related papers (2021-12-16T15:33:11Z)
- Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flow are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternatively updates the normalizing flow parameters and the weights of a neural network surrogate model.
arXiv Detail & Related papers (2021-08-28T14:31:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.