On the development of a practical Bayesian optimisation algorithm for
expensive experiments and simulations with changing environmental conditions
- URL: http://arxiv.org/abs/2402.03006v1
- Date: Mon, 5 Feb 2024 13:46:04 GMT
- Title: On the development of a practical Bayesian optimisation algorithm for
expensive experiments and simulations with changing environmental conditions
- Authors: Mike Diessner, Kevin J. Wilson, Richard D. Whalley
- Abstract summary: This article extends Bayesian optimisation to the optimisation of systems in changing environments.
The proposed algorithm is applied to a wind farm simulator with eight controllable and one environmental parameter.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Experiments in engineering are typically conducted in controlled environments
where parameters can be set to any desired value. This assumes that the same
applies in a real-world setting -- an assumption that is often incorrect as
many experiments are influenced by uncontrollable environmental conditions such
as temperature, humidity and wind speed. When optimising such experiments, the
focus should lie on finding optimal values conditional on these
uncontrollable variables. This article extends Bayesian optimisation to the
optimisation of systems in changing environments that include controllable and
uncontrollable parameters. The extension fits a global surrogate model over all
controllable and environmental variables but optimises only the controllable
parameters conditional on measurements of the uncontrollable variables. The
method is validated on two synthetic test functions and the effects of the
noise level, the number of environmental parameters, the parameter
fluctuation, the variability of the uncontrollable parameters, and the
effective domain size are investigated. ENVBO, the proposed algorithm resulting
from this investigation, is applied to a wind farm simulator with eight
controllable and one environmental parameter. ENVBO finds solutions over the
full domain of the environmental variable that outperform the results of
optimisation algorithms focusing on a single fixed environmental value in all
but one case, while using only a fraction of their evaluation budget. This makes the
proposed approach very sample-efficient and cost-effective. An off-the-shelf
open-source version of ENVBO is available via the NUBO Python package.
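To make the conditional-optimisation idea concrete, the sketch below shows a minimal, generic Bayesian optimisation loop in that spirit: a single Gaussian-process surrogate is fitted over the joint (controllable, environmental) space, but the acquisition function is maximised over the controllable input only, with the environmental input fixed at its most recent measurement. This is not the ENVBO or NUBO implementation; the toy objective black_box, the unit bounds, the expected-improvement acquisition, and the scikit-learn/SciPy components are illustrative assumptions.
```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def black_box(x_ctrl, x_env):
    # Toy stand-in for an expensive experiment that depends on one controllable
    # and one environmental input (purely illustrative).
    return -(x_ctrl - 0.3 * x_env) ** 2 + 0.1 * np.sin(5.0 * x_ctrl)

# Initial design over the joint (controllable, environmental) space.
X = rng.uniform(0.0, 1.0, size=(5, 2))   # columns: [controllable, environmental]
y = np.array([black_box(x[0], x[1]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

def expected_improvement(x_ctrl, x_env, best):
    # EI at a controllable candidate, with the environmental input fixed at its
    # measured value; "best" is the best observation so far (a simplification).
    mu, sd = gp.predict(np.array([[x_ctrl, x_env]]), return_std=True)
    mu, sd = mu.item(), sd.item() + 1e-9
    z = (mu - best) / sd
    return (mu - best) * norm.cdf(z) + sd * norm.pdf(z)

for _ in range(20):
    gp.fit(X, y)                          # one global surrogate over all inputs
    x_env = rng.uniform(0.0, 1.0)         # measure the current, uncontrollable environment
    best = y.max()
    # Maximise the acquisition over the controllable dimension only.
    res = minimize(lambda v: -expected_improvement(v[0], x_env, best),
                   x0=[rng.uniform(0.0, 1.0)], bounds=[(0.0, 1.0)])
    x_ctrl = float(res.x[0])
    X = np.vstack([X, [x_ctrl, x_env]])
    y = np.append(y, black_box(x_ctrl, x_env))

print("Best observed value:", y.max())
```
The sketch only conveys the split between modelling the full input space and optimising the controllable slice; the surrogate, acquisition function, and treatment of environmental measurements used by ENVBO are those described in the paper and the NUBO package.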
Related papers
- Continual Adaptation: Environment-Conditional Parameter Generation for Object Detection in Dynamic Scenarios [54.58186816693791]
Environments constantly change over time and space, posing significant challenges for object detectors trained under a closed-set assumption. We propose a new mechanism that converts the fine-tuning process into specific-parameter generation. In particular, we first design a dual-path LoRA-based domain-aware adapter that disentangles features into domain-invariant and domain-specific components.
arXiv Detail & Related papers (2025-06-30T17:14:12Z) - Certifiably Robust Policies for Uncertain Parametric Environments [57.2416302384766]
We propose a framework based on parametric Markov decision processes (MDPs) with unknown distributions over parameters.
We learn and analyse interval MDPs (IMDPs) for a set of unknown sample environments induced by parameters.
We show that our approach produces tight bounds on a policy's performance with high confidence.
arXiv Detail & Related papers (2024-08-06T10:48:15Z) - ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections [59.839926875976225]
We propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections.
In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters.
arXiv Detail & Related papers (2024-05-30T17:26:02Z) - A Unified Gaussian Process for Branching and Nested Hyperparameter
Optimization [19.351804144005744]
In deep learning, tuning parameters with conditional dependencies are common in practice.
The new GP model accounts for the dependence structure among input variables through a new kernel function.
High prediction accuracy and better optimization efficiency are observed in a series of synthetic simulations and real data applications of neural networks.
arXiv Detail & Related papers (2024-01-19T21:11:32Z) - Winning Prize Comes from Losing Tickets: Improve Invariant Learning by
Exploring Variant Parameters for Out-of-Distribution Generalization [76.27711056914168]
Out-of-Distribution (OOD) Generalization aims to learn robust models that generalize well to various environments without fitting to distribution-specific features.
Recent studies based on the Lottery Ticket Hypothesis (LTH) address this problem by minimizing the learning target to find some of the parameters that are critical to the task.
We propose Exploring Variant parameters for Invariant Learning (EVIL), which also leverages distribution knowledge to find the parameters that are sensitive to distribution shift.
arXiv Detail & Related papers (2023-10-25T06:10:57Z) - On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem.
arXiv Detail & Related papers (2022-11-28T17:41:48Z) - Environment Optimization for Multi-Agent Navigation [11.473177123332281]
The goal of this paper is to consider the environment as a decision variable in a system-level optimization problem.
We show, through formal proofs, under which conditions the environment can change while guaranteeing completeness.
In order to accommodate a broad range of implementation scenarios, we include both online and offline optimization, and both discrete and continuous environment representations.
arXiv Detail & Related papers (2022-09-22T19:22:16Z) - On Controller Tuning with Time-Varying Bayesian Optimization [74.57758188038375]
We use time-varying Bayesian optimization (TVBO) to tune controllers online in changing environments, using appropriate prior knowledge on the control objective and its changes.
We propose a novel TVBO strategy using Uncertainty-Injection (UI), which incorporates the assumption of incremental and lasting changes.
Our model outperforms the state-of-the-art method in TVBO, exhibiting reduced regret and fewer unstable parameter configurations.
arXiv Detail & Related papers (2022-07-22T14:54:13Z) - Tuning Particle Accelerators with Safety Constraints using Bayesian
Optimization [73.94660141019764]
Tuning machine parameters of particle accelerators is a repetitive and time-consuming task.
We propose and evaluate a step size-limited variant of safe Bayesian optimization.
arXiv Detail & Related papers (2022-03-26T02:21:03Z) - Bayesian Optimization for Distributionally Robust Chance-constrained
Problem [23.73485391229763]
The chance-constrained (CC) problem, i.e. maximizing the expected value subject to a required level of constraint-satisfaction probability, is a practically important problem in the presence of environmental variables.
We show that the proposed method can find an arbitrarily accurate solution with high probability in a finite number of trials, and confirm its usefulness through numerical experiments.
arXiv Detail & Related papers (2022-01-31T10:43:58Z) - Bayesian Quadrature Optimization for Probability Threshold Robustness
Measure [23.39754660544729]
In many product development problems, the performance of the product is governed by two types of parameters: design parameters and environmental parameters.
We formulate this practical problem as active learning (AL) problems and propose efficient algorithms with theoretically guaranteed performance.
arXiv Detail & Related papers (2020-06-22T03:17:10Z) - Online Parameter Estimation for Safety-Critical Systems with Gaussian
Processes [6.122161391301866]
We present a Bayesian optimization framework based on Gaussian processes (GPs) for online parameter estimation.
It uses an efficient search strategy over a response surface in the parameter space for finding the global optima with minimal function evaluations.
We demonstrate our technique on an actuated planar pendulum and safety-critical quadrotor in simulation with changing parameters.
arXiv Detail & Related papers (2020-02-18T20:38:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.