Multi-view Bayesian optimisation in reduced dimension for engineering design
- URL: http://arxiv.org/abs/2501.01552v1
- Date: Thu, 02 Jan 2025 22:03:00 GMT
- Title: Multi-view Bayesian optimisation in reduced dimension for engineering design
- Authors: Thomas A. Archbold, Ieva Kazlauskaite, Fehmi Cirak
- Abstract summary: We introduce a multi-view learning strategy that considers both the input design variables and output data representing the objective or constraint functions.
Adopting a fully probabilistic viewpoint, we use probabilistic partial least squares (PPLS) to learn an orthogonal mapping from the design variables to the latent variables.
We compare the proposed probabilistic partial least squares Bayesian optimisation (PPLS-BO) strategy to its deterministic counterpart, partial least squares Bayesian optimisation (PLS-BO), and classical Bayesian optimisation.
- Score: 0.9626666671366836
- Abstract: Bayesian optimisation is an adaptive sampling strategy for constructing a Gaussian process surrogate to emulate a black-box computational model with the aim of efficiently searching for the global minimum. However, Gaussian processes have limited applicability for engineering problems with many design variables. Their scalability can be significantly improved by identifying a low-dimensional vector of latent variables that serve as inputs to the Gaussian process. In this paper, we introduce a multi-view learning strategy that considers both the input design variables and output data representing the objective or constraint functions, to identify a low-dimensional space of latent variables. Adopting a fully probabilistic viewpoint, we use probabilistic partial least squares (PPLS) to learn an orthogonal mapping from the design variables to the latent variables using training data consisting of inputs and outputs of the black-box computational model. The latent variables and posterior probability densities of the probabilistic partial least squares and Gaussian process models are determined sequentially and iteratively, with retraining occurring at each adaptive sampling iteration. We compare the proposed probabilistic partial least squares Bayesian optimisation (PPLS-BO) strategy to its deterministic counterpart, partial least squares Bayesian optimisation (PLS-BO), and classical Bayesian optimisation, demonstrating significant improvements in convergence to the global minimum.
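As a rough illustration of the loop described in the abstract, the sketch below pairs a deterministic PLS mapping (scikit-learn's PLSRegression, standing in for the paper's probabilistic PPLS) with a GP surrogate in the latent space, retraining both at every adaptive sampling iteration. The toy objective, candidate sampling, and lower-confidence-bound acquisition are illustrative choices, not the paper's implementation.

```python
# Minimal PLS-BO sketch: deterministic PLS stands in for the paper's PPLS;
# the toy objective and acquisition details are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                       # black-box computational model stand-in
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
d, r, n0 = 20, 2, 15                    # input dim, latent dim, initial samples
X = rng.uniform(0, 1, (n0, d))
y = objective(X)

for it in range(30):
    # 1) Multi-view step: learn the input-to-latent map from inputs AND outputs.
    pls = PLSRegression(n_components=r).fit(X, y)
    # 2) Fit the GP surrogate in the r-dimensional latent space.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(pls.transform(X), y)
    # 3) Minimise a lower-confidence-bound acquisition over random candidates.
    cand = rng.uniform(0, 1, (2048, d))
    mu, sd = gp.predict(pls.transform(cand), return_std=True)
    x_new = cand[np.argmin(mu - 2.0 * sd)]
    # 4) Evaluate the black box; both models are retrained next iteration.
    X = np.vstack([X, x_new])
    y = np.append(y, objective(x_new))

print("best value found:", y.min())
```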
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
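For background on what an invariant kernel can look like (the construction below is a standard one, not necessarily the paper's): averaging a base kernel over a finite symmetry group yields a kernel that is exactly invariant under the group.

```python
# Background sketch: a double group average of a base kernel is positive
# semi-definite and exactly invariant under the group, here sign flips.
import numpy as np

def rbf(x, y, ell=0.5):
    return np.exp(-np.sum((x - y) ** 2) / (2 * ell ** 2))

def invariant_kernel(x, y, group):
    # k(g(x), h(y)) == k(x, y) for all g, h in the group.
    return np.mean([rbf(g(x), h(y)) for g in group for h in group])

G = [lambda v: v, lambda v: -v]         # Z_2: identity and sign flip
x, y = np.array([0.2, -0.4]), np.array([-0.2, 0.4])
print(invariant_kernel(x, y, G), invariant_kernel(-x, y, G))  # identical values
```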
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level optimizations (BLO).
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization, where the inner loss function becomes a smooth distribution and the outer loss becomes an expected loss over the inner distribution.
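A toy sketch of that smoothing idea follows, under simplifying assumptions that are ours, not the paper's: a closed-form inner problem, a Gaussian inner distribution, and a zeroth-order outer update.

```python
# Toy sketch: replace the inner minimiser by a Gaussian distribution centred
# on it, so the outer objective becomes an expectation. All details here are
# illustrative simplifications, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1                             # width of the smooth inner distribution

def outer_loss(w):                      # e.g. a validation loss
    return (w - 0.8) ** 2

def smoothed_outer(lam, noise):
    # Inner problem min_w (w - 1)^2 + lam * w^2 has closed form w* = 1/(1+lam);
    # the expected outer loss is taken under N(w*, sigma^2).
    w_star = 1.0 / (1.0 + lam)
    return np.mean(outer_loss(w_star + sigma * noise))

lam = 0.5
for _ in range(300):
    noise = rng.standard_normal(256)    # common random numbers keep the
    g = (smoothed_outer(lam + 1e-3, noise)   # finite-difference gradient stable
         - smoothed_outer(lam - 1e-3, noise)) / 2e-3
    lam = max(lam - 0.05 * g, 0.0)

print("tuned hyperparameter:", lam)     # approaches 0.25, where w* = 0.8
```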
arXiv Detail & Related papers (2024-10-14T12:10:06Z)
- Variational Bayesian surrogate modelling with application to robust design optimisation [0.9626666671366836]
Surrogate models provide a quick-to-evaluate approximation to complex computational models.
We consider Bayesian inference for constructing statistical surrogates with input uncertainties and dimensionality reduction.
We demonstrate the approach on robust structural optimisation problems where cost functions depend on a weighted sum of the mean and standard deviation of model outputs.
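That robust cost is straightforward to estimate by Monte Carlo; a minimal sketch, with a toy model, weight, and noise level that are illustrative assumptions:

```python
# Monte Carlo sketch of the robust cost: a weighted sum of the mean and
# standard deviation of model outputs under input uncertainty.
import numpy as np

rng = np.random.default_rng(2)

def model(x):                           # computational model stand-in
    return np.sin(3 * x) + 0.5 * x ** 2

def robust_cost(x, w=0.7, noise=0.05, n=2000):
    xs = x + noise * rng.standard_normal(n)     # uncertain design variable
    out = model(xs)
    return w * out.mean() + (1 - w) * out.std()

grid = np.linspace(-2, 2, 401)
costs = [robust_cost(x) for x in grid]
print("robust minimiser:", grid[int(np.argmin(costs))])
```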
arXiv Detail & Related papers (2024-04-23T09:22:35Z)
- Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of classical GP-UCB, while the additional exploration step facilitates their convergence.
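A minimal sketch of that idea, shown below: classical GP-UCB with an occasional uniform random exploration step. The every-fifth-step schedule and the toy objective are illustrative assumptions, not the paper's exact algorithm.

```python
# GP-UCB plus occasional random exploration; schedule and objective are toys.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                               # noise-free black box (maximised)
    return -(x - 0.6) ** 2

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (3, 1))
y = f(X).ravel()
grid = np.linspace(0, 1, 501).reshape(-1, 1)

for t in range(1, 25):
    if t % 5 == 0:                      # extra random exploration step
        x_new = rng.uniform(0, 1, (1, 1))
    else:                               # standard GP-UCB step
        gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        x_new = grid[[np.argmax(mu + 2.0 * sd)]]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new).ravel())

print("best query:", X[np.argmax(y), 0])
```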
arXiv Detail & Related papers (2024-01-30T14:16:06Z)
- Simulation Based Bayesian Optimization [0.5524804393257919]
This paper introduces Simulation Based Bayesian Optimization (SBBO) as a novel approach to optimizing acquisition functions.
Gaussian processes (GPs) are commonly used as the surrogate model because they offer analytical access to posterior predictive distributions.
We demonstrate empirically the effectiveness of SBBO using various choices of surrogate models.
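A sketch of the sampling view of acquisition optimisation: draw candidate designs from a density proportional to the exponentiated acquisition with a simple Metropolis-Hastings chain and keep the best visited point. The acquisition surface and chain settings are our illustration, not necessarily SBBO's exact sampler.

```python
# Metropolis-Hastings over exp(acquisition) as a stand-in for gradient-based
# acquisition maximisation; all details are illustrative.
import numpy as np

rng = np.random.default_rng(4)

def acquisition(x):                     # toy acquisition with two modes
    return np.exp(-10 * (x - 0.7) ** 2) + 0.3 * np.exp(-50 * (x - 0.2) ** 2)

x = 0.5
best_x, best_a = x, acquisition(x)
for _ in range(5000):
    prop = x + 0.1 * rng.standard_normal()
    # MH accept/reject for the target exp(acquisition) restricted to [0, 1];
    # proposals outside the domain have zero target density and are rejected.
    if 0.0 <= prop <= 1.0 and np.log(rng.random()) < acquisition(prop) - acquisition(x):
        x = prop
        if acquisition(x) > best_a:
            best_x, best_a = x, acquisition(x)

print("proposed next design:", best_x)  # near the acquisition maximiser 0.7
```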
arXiv Detail & Related papers (2024-01-19T16:56:11Z)
- Generative Models for Anomaly Detection and Design-Space Dimensionality Reduction in Shape Optimization [0.0]
Our work presents a novel approach to shape optimization, with the twofold objective of improving the efficiency of global algorithms and promoting the generation of high-quality designs.
This is accomplished by reducing the number of original design variables and defining a new reduced-dimension subspace in which the geometric variance is maximized.
Numerical results show that the new framework improves the convergence of global optimization algorithms while generating only designs with high-quality geometrical features.
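As background for that variance-maximising reduction, a minimal sketch using plain PCA (the paper uses generative models); the random design matrix is an illustrative stand-in for a real shape parameterization.

```python
# PCA sketch: project design variables onto the few directions that carry
# most of the geometric variance; data here are illustrative.
import numpy as np

rng = np.random.default_rng(5)
designs = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 30))

centred = designs - designs.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
var_ratio = S ** 2 / np.sum(S ** 2)
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1  # keep 95% of variance
latent = centred @ Vt[:k].T             # reduced design space for optimisation

print(f"reduced {designs.shape[1]} design variables to {k} latent directions")
```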
arXiv Detail & Related papers (2023-08-08T04:57:58Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed, formulating the objective as the logistic loss between the real data and artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
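The logistic NCE loss mentioned above can be written down in a few lines: the model log-density is discriminated against a known noise density via the logit G(x) = log p_theta(x) - log q(x). The Gaussian model and noise below are illustrative choices.

```python
# Minimal numpy sketch of the logistic NCE objective.
import numpy as np

rng = np.random.default_rng(6)

def log_model(x, theta):
    # Gaussian model; for genuinely unnormalised models NCE treats the
    # log-normaliser as an extra free parameter.
    return -0.5 * (x - theta) ** 2 - 0.5 * np.log(2 * np.pi)

def log_noise(x):                       # noise distribution q = N(0, 1)
    return -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

def nce_loss(theta, x_data, x_noise):
    g_data = log_model(x_data, theta) - log_noise(x_data)
    g_noise = log_model(x_noise, theta) - log_noise(x_noise)
    # -E_data[log sigmoid(G)] - E_noise[log(1 - sigmoid(G))]
    return np.mean(np.logaddexp(0.0, -g_data)) + np.mean(np.logaddexp(0.0, g_noise))

x_data = 1.5 + rng.standard_normal(5000)      # observations from N(1.5, 1)
x_noise = rng.standard_normal(5000)           # artificial noise from N(0, 1)
thetas = np.linspace(0.0, 3.0, 61)
losses = [nce_loss(t, x_data, x_noise) for t in thetas]
print("NCE estimate:", thetas[int(np.argmin(losses))])   # close to 1.5
```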
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithm that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
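For background, the entropy-regularised OT objective the paper targets can be solved with the classical Sinkhorn iteration, sketched below; the paper itself develops an extragradient method instead. The cost matrix and uniform marginals are toy data.

```python
# Sinkhorn iteration for entropy-regularised optimal transport.
import numpy as np

rng = np.random.default_rng(7)
n, eta = 50, 0.05                       # problem size, entropy regularisation
C = np.abs(rng.random((n, 1)) - rng.random((1, n)))  # ground cost |x_i - y_j|
mu = np.full(n, 1.0 / n)
nu = np.full(n, 1.0 / n)

K = np.exp(-C / eta)                    # Gibbs kernel
u = np.ones(n)
for _ in range(500):                    # alternating marginal projections
    v = nu / (K.T @ u)
    u = mu / (K @ v)

P = u[:, None] * K * v[None, :]         # transport plan with marginals mu, nu
print("regularised OT cost:", float(np.sum(P * C)))
```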
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are required through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Adaptive Sampling of Pareto Frontiers with Binary Constraints Using Regression and Classification [0.0]
We present a novel adaptive optimization algorithm for black-box multi-objective optimization problems with binary constraints.
Our method is based on probabilistic regression and classification models, which act as a surrogate for the optimization goals.
We also present a novel ellipsoid truncation method to speed up the expected hypervolume calculation.
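For reference, the (non-expected) dominated hypervolume of a two-objective Pareto front has a simple exact sweep computation, sketched below with toy data; the paper accelerates the expected hypervolume, a related but different quantity.

```python
# Exact dominated hypervolume of a 2-D Pareto front (minimisation).
import numpy as np

def hypervolume_2d(front, ref):
    # Sort by the first objective; each non-dominated point then contributes a
    # rectangle between its own f2 and its left neighbour's f2.
    pts = front[np.argsort(front[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = np.array([[0.1, 0.9], [0.3, 0.5], [0.6, 0.2], [0.9, 0.1]])
print(hypervolume_2d(front, ref=np.array([1.0, 1.0])))   # 0.5
```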
arXiv Detail & Related papers (2020-08-27T09:15:02Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, we reduce the time to convergence by orders of magnitude with the proposed method.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)