Gradient and Uncertainty Enhanced Sequential Sampling for Global Fit
- URL: http://arxiv.org/abs/2310.00110v1
- Date: Fri, 29 Sep 2023 19:49:39 GMT
- Title: Gradient and Uncertainty Enhanced Sequential Sampling for Global Fit
- Authors: Sven Lämmle, Can Bogoclu, Kevin Cremanns, Dirk Roos
- Abstract summary: This paper proposes a new sampling strategy for global fit called Gradient and Uncertainty Enhanced Sequential Sampling (GUESS).
We show that GUESS achieved on average the highest sample efficiency compared to other surrogate-based strategies on the tested examples.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Surrogate models based on machine learning methods have become an important
part of modern engineering to replace costly computer simulations. The data
used for creating a surrogate model are essential for the model accuracy and
often restricted due to cost and time constraints. Adaptive sampling strategies
have been shown to reduce the number of samples needed to create an accurate
model. This paper proposes a new sampling strategy for global fit called
Gradient and Uncertainty Enhanced Sequential Sampling (GUESS). The acquisition
function uses two terms: the predictive posterior uncertainty of the surrogate
model for exploration of unseen regions and a weighted approximation of the
second and higher-order Taylor expansion values for exploitation. Although
various sampling strategies have been proposed so far, the selection of a
suitable method is not trivial. Therefore, we compared our proposed strategy to
9 adaptive sampling strategies for global surrogate modeling, based on 26
different 1- to 8-dimensional deterministic benchmark functions. Results show
that GUESS achieved on average the highest sample efficiency compared to other
surrogate-based strategies on the tested examples. An ablation study
considering the behavior of GUESS in higher dimensions and the importance of
surrogate choice is also presented.
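As a rough illustration of the two-term acquisition described in the abstract (posterior uncertainty for exploration, a Taylor-term value for exploitation), here is a minimal sketch. The finite-difference curvature proxy, the additive weighting, and all names are illustrative assumptions, not the authors' implementation:
```python
# Minimal sketch of a GUESS-style two-term acquisition (assumed form, not
# the paper's code): exploration = GP posterior standard deviation,
# exploitation = finite-difference proxy for the second-order Taylor term
# of the posterior mean.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def curvature_proxy(gp, x, h=1e-3):
    """Approximate sum_d |d^2 mu / dx_d^2| * h^2 via central differences;
    the raw central difference already equals |f''| * h^2."""
    x = np.atleast_2d(x).astype(float)
    mu = lambda z: gp.predict(z)[0]
    total = 0.0
    for d in range(x.shape[1]):
        e = np.zeros_like(x); e[0, d] = h
        total += abs(mu(x + e) - 2.0 * mu(x) + mu(x - e))
    return total

def guess_like(gp, candidates, weight=1.0):
    """Higher score = better next sample; `weight` is an assumed trade-off."""
    _, std = gp.predict(candidates, return_std=True)
    exploit = np.array([curvature_proxy(gp, c) for c in candidates])
    return std + weight * exploit

# Toy usage on a 1-D function: fit on 8 samples, score 256 random candidates.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(8, 1))
y = np.sin(6.0 * X[:, 0])
gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
cands = rng.uniform(0.0, 1.0, size=(256, 1))
x_next = cands[np.argmax(guess_like(gp, cands))]
```
In the paper the exploitation term uses second and higher-order Taylor expansion values; the sketch truncates at second order purely to keep the example short.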
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Model-Free Active Exploration in Reinforcement Learning [53.786439742572995]
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution.
Our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
arXiv Detail & Related papers (2024-06-30T19:00:49Z)
- Self-Supervised Dataset Distillation for Transfer Learning [77.4714995131992]
We propose a novel problem of distilling an unlabeled dataset into a set of small synthetic samples for efficient self-supervised learning (SSL).
We first prove that a gradient of synthetic samples with respect to an SSL objective in naive bilevel optimization is biased due to randomness originating from data augmentations or masking.
We empirically validate the effectiveness of our method on various applications involving transfer learning.
arXiv Detail & Related papers (2023-10-10T10:48:52Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique for tackling imbalanced learning by generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
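AutoSMOTE's reinforcement-learning machinery aside, the SMOTE-style interpolation it builds on can be sketched in a few lines. This is the classic generic building block, not AutoSMOTE itself; all names are illustrative:
```python
# Generic SMOTE-style over-sampling: synthesize minority samples by
# interpolating between a minority point and one of its minority-class
# nearest neighbors. Not the AutoSMOTE algorithm, just the base technique.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like(X_min, n_new, k=5, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)           # idx[:, 0] is the point itself
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))        # pick a minority sample
        j = rng.choice(idx[i, 1:])          # one of its k minority neighbors
        lam = rng.random()                  # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# Example: augment a toy minority class of 20 points with 30 synthetic ones.
X_min = np.random.default_rng(1).normal(size=(20, 2))
X_aug = np.vstack([X_min, smote_like(X_min, n_new=30)])
```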
- Spatially-Varying Bayesian Predictive Synthesis for Flexible and Interpretable Spatial Prediction [6.07227513262407]
We propose a novel methodology that captures spatially-varying model uncertainty, which we call spatial Bayesian predictive synthesis.
We show that our proposed spatial Bayesian predictive synthesis outperforms standard spatial models and advanced machine learning methods.
arXiv Detail & Related papers (2022-03-10T07:16:29Z)
- Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic [67.00475077281212]
Model-based reinforcement learning algorithms are more sample efficient than their model-free counterparts.
We propose a novel approach that achieves high sample efficiency without the strong reliance on accurate learned models.
We show that CMBAC significantly outperforms state-of-the-art approaches in terms of sample efficiency on several challenging tasks.
arXiv Detail & Related papers (2021-12-16T15:33:11Z)
- Improving Hyperparameter Optimization by Planning Ahead [3.8673630752805432]
We propose a novel transfer learning approach, defined within the context of model-based reinforcement learning.
We propose a new variant of model predictive control which employs a simple look-ahead strategy as a policy.
Our experiments on three meta-datasets comparing to state-of-the-art HPO algorithms show that the proposed method can outperform all baselines.
arXiv Detail & Related papers (2021-10-15T11:46:14Z)
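To make the "simple look-ahead strategy as a policy" concrete, here is a drastically simplified one-step look-ahead for HPO: fit a response model on the history of (configuration, validation loss) pairs, then evaluate the candidate the model predicts best. This is a stand-in sketch under assumed mechanics, not the paper's model predictive control variant:
```python
# One-step look-ahead HPO sketch (assumed mechanics, not the paper's MPC):
# model config -> validation loss from history, greedily pick the best
# predicted candidate, evaluate it, and grow the history.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def lookahead_step(history_X, history_y, candidates):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(history_X, history_y)
    return candidates[np.argmin(model.predict(candidates))]

# Toy loop: minimize a quadratic stand-in for validation loss over [0, 1]^2.
rng = np.random.default_rng(0)
val_loss = lambda c: float(((c - 0.3) ** 2).sum())
X = rng.uniform(0.0, 1.0, size=(5, 2))
y = np.array([val_loss(c) for c in X])
for _ in range(10):
    cand = rng.uniform(0.0, 1.0, size=(64, 2))
    nxt = lookahead_step(X, y, cand)
    X = np.vstack([X, nxt])
    y = np.append(y, val_loss(nxt))
```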
- Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flows have a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the weights of a neural network surrogate model.
arXiv Detail & Related papers (2021-08-28T14:31:45Z)
- Local Latin Hypercube Refinement for Multi-objective Design Uncertainty Optimization [0.5156484100374058]
We propose a sequential sampling strategy for the surrogate-based solution of robust design optimization problems.
The proposed local Latin hypercube refinement (LoLHR) strategy is model-agnostic and can be combined with any surrogate model.
LoLHR achieves better results on average than other surrogate-based strategies on the tested examples.
arXiv Detail & Related papers (2021-08-19T19:46:38Z)
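As a toy, single-objective illustration of the local-refinement idea in LoLHR (the box-shrink schedule and all details here are assumptions, not the paper's method), one can draw a Latin hypercube design inside a box around the current best point:
```python
# Toy local Latin hypercube refinement: draw an LHS design in a box around
# the current best point, clipped to the unit cube. Illustrative assumptions
# only; LoLHR's actual refinement and multi-objective handling differ.
import numpy as np
from scipy.stats import qmc

def local_lhs(center, half_width, n, d, seed=0):
    sample = qmc.LatinHypercube(d=d, seed=seed).random(n)   # in [0, 1]^d
    lo = np.clip(center - half_width, 0.0, 1.0)
    hi = np.clip(center + half_width, 0.0, 1.0)
    return qmc.scale(sample, lo, hi)

# Example: refine around a current best at (0.6, 0.4) with a box of +/- 0.1.
X_new = local_lhs(center=np.array([0.6, 0.4]), half_width=0.1, n=8, d=2)
```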
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy through interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
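To make the "probabilistic model of the objective function and its gradient" concrete, here is a minimal GP-based ascent step: model the return surface around the current policy parameters with a Gaussian process, then step along a gradient of its posterior mean. A generic sketch under assumed mechanics, not the paper's algorithm:
```python
# Generic sketch: fit a GP to noisy returns probed around the current policy
# parameters, then ascend a finite-difference gradient of the GP posterior
# mean. All details are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ascent_step(objective, theta, n_probes=16, radius=0.1, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = theta + radius * rng.standard_normal((n_probes, theta.size))
    y = np.array([objective(x) for x in X])            # probed returns
    gp = GaussianProcessRegressor(kernel=RBF(radius)).fit(X, y)
    mu = lambda x: gp.predict(x.reshape(1, -1))[0]     # posterior mean
    h = 1e-4
    grad = np.array([(mu(theta + h * np.eye(theta.size)[d]) -
                      mu(theta - h * np.eye(theta.size)[d])) / (2.0 * h)
                     for d in range(theta.size)])
    return theta + lr * grad                           # ascend the smoothed model

# Toy usage: maximize a concave "return" over 2 policy parameters.
ret = lambda th: -float(np.sum((th - 1.0) ** 2))
theta = np.zeros(2)
for _ in range(20):
    theta = gp_ascent_step(ret, theta)
```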