A Robust Asymmetric Kernel Function for Bayesian Optimization, with
Application to Image Defect Detection in Manufacturing Systems
- URL: http://arxiv.org/abs/2109.10898v1
- Date: Wed, 22 Sep 2021 17:59:05 GMT
- Title: A Robust Asymmetric Kernel Function for Bayesian Optimization, with
Application to Image Defect Detection in Manufacturing Systems
- Authors: Areej AlBahar and Inyoung Kim and Xiaowei Yue
- Abstract summary: We propose a robust kernel function, the Asymmetric Elastic Net Radial Basis Function (AEN-RBF).
We show theoretically that AEN-RBF achieves a smaller mean squared prediction error under mild conditions.
We also show that the AEN-RBF kernel function is less sensitive to outliers.
- Score: 2.4278445972594525
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Response surface functions in complex engineering systems are often
highly nonlinear, lack closed-form expressions, and are expensive to evaluate. To tackle this
challenge, Bayesian optimization, which conducts sequential design via a
posterior distribution over the objective function, is a critical method used
to find the global optimum of black-box functions. Kernel functions play an
important role in shaping the posterior distribution of the estimated function.
Widely used kernel functions such as the radial basis function (RBF) are highly
susceptible to outliers; the presence of outliers can cause the resulting
Gaussian process surrogate model to behave erratically. In this paper, we propose
a robust kernel function, Asymmetric Elastic Net Radial Basis Function
(AEN-RBF). Its validity as a kernel function and computational complexity are
evaluated. We prove theoretically that, under mild conditions, AEN-RBF achieves
a smaller mean squared prediction error than the baseline RBF kernel. The
proposed AEN-RBF kernel also enables faster convergence to the global
optimum. We also show that the AEN-RBF kernel
function is less sensitive to outliers, and hence improves the robustness of
the corresponding Bayesian optimization with Gaussian processes. Through
extensive evaluations carried out on synthetic and real-world optimization
problems, we show that AEN-RBF outperforms existing benchmark kernel functions.
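To make the construction concrete, below is a minimal sketch of an elastic-net-style RBF kernel plugged into a basic Gaussian process regression step. The exact AEN-RBF form, its asymmetry parameters, and the proof of kernel validity are given in the paper; the kernel below (a PSD product of Laplace and Gaussian factors) and the parameter names `ell1`, `ell2`, `alpha` are illustrative assumptions only.

```python
import numpy as np

def elastic_net_rbf(x, y, ell1=1.0, ell2=1.0, alpha=0.5):
    # Exponential of a convex mix of L1 and squared-L2 distances. This
    # factorizes into a Laplace kernel times a Gaussian kernel, so it is
    # a valid (PSD) kernel. NOTE: the paper's AEN-RBF additionally treats
    # inputs asymmetrically; that exact construction is not reproduced here.
    d = np.asarray(x) - np.asarray(y)
    l1 = np.sum(np.abs(d)) / ell1
    l2 = np.sum(d ** 2) / ell2 ** 2
    return np.exp(-(alpha * l1 + (1.0 - alpha) * l2))

def gp_posterior_mean(X_train, y_train, X_test, kernel, noise=1e-6):
    # Standard GP regression mean: K_* (K + noise * I)^{-1} y.
    K = np.array([[kernel(a, b) for b in X_train] for a in X_train])
    K_star = np.array([[kernel(a, b) for b in X_train] for a in X_test])
    L = np.linalg.cholesky(K + noise * np.eye(len(X_train)))
    coef = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return K_star @ coef
```

One intuition for the robustness claim: the L1 factor decays like exp(-|d|) rather than exp(-d^2), so large deviations are down-weighted less aggressively and a single outlying observation dominates the surrogate less.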
Related papers
- Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm [11.024396385514864]
We consider kernel-based function approximation for RL in the infinite-horizon average-reward setting.
We propose an optimistic algorithm, similar to acquisition-function-based algorithms in the special case of bandits.
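The "optimistic" principle mentioned here is the same one behind upper-confidence-bound (UCB) acquisition functions in Bayesian optimization. A minimal GP-UCB sketch, assuming a posterior mean and standard deviation over a candidate grid (the names below are illustrative, not from the paper):

```python
import numpy as np

def gp_ucb_next(candidates, mean, std, beta=2.0):
    # Optimism in the face of uncertainty: score each candidate by its
    # upper confidence bound mean + beta * std and query the argmax.
    return candidates[np.argmax(mean + beta * std)]
```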
arXiv Detail & Related papers (2024-10-30T23:04:10Z)
- Global Optimization of Gaussian Process Acquisition Functions Using a Piecewise-Linear Kernel Approximation [2.3342885570554652]
We introduce a piecewise-linear approximation for Gaussian process kernels and a corresponding MIQP representation for acquisition functions.
We empirically demonstrate the framework on synthetic functions, constrained benchmarks, and hyperparameter tuning tasks.
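As a rough sketch of the piecewise-linear idea (not the paper's actual MIQP formulation): a stationary kernel profile k(r) is tabulated on a knot grid and replaced by linear interpolation, after which each segment can be encoded with one binary variable plus linear constraints in a mixed-integer program. The helper below is a hypothetical illustration:

```python
import numpy as np

def piecewise_linear_approx(f, lo, hi, n_knots):
    # Tabulate f on a uniform knot grid and return its linear interpolant.
    # In an MIQP encoding, each segment would get one binary variable
    # plus linear constraints (the solver side is not shown here).
    r = np.linspace(lo, hi, n_knots)
    k = f(r)
    return lambda x: np.interp(x, r, k)

# Example: approximate the RBF correlation exp(-r^2 / 2) on [0, 3].
k_pw = piecewise_linear_approx(lambda r: np.exp(-r ** 2 / 2), 0.0, 3.0, 8)
```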
arXiv Detail & Related papers (2024-10-22T10:56:52Z)
- An appointment with Reproducing Kernel Hilbert Space generated by Generalized Gaussian RBF as $L^2-$measure [3.9931474959554496]
Generalized Gaussian Radial Basis Function (RBF) kernels are among the most frequently employed kernels in artificial intelligence and machine learning routines.
This manuscript demonstrates the application of the Generalized Gaussian RBF kernel to these machine learning routines and compares its performance against other kernel functions.
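A common parametrization of the Generalized Gaussian RBF (assumed here; the manuscript's exact $L^2$-measure construction may differ) replaces the squared exponent of the Gaussian kernel with a tunable power:

```python
import numpy as np

def generalized_gaussian_rbf(x, y, sigma=1.0, beta=2.0):
    # k(x, y) = exp(-(||x - y|| / sigma)^beta): beta = 2 recovers the
    # standard Gaussian RBF and beta = 1 a Laplace-like kernel. The
    # kernel is positive semidefinite for 0 < beta <= 2.
    r = np.linalg.norm(np.asarray(x) - np.asarray(y))
    return np.exp(-(r / sigma) ** beta)
```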
arXiv Detail & Related papers (2023-12-17T12:02:10Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To make function sampling scalable, a random feature-based kernel approximation is leveraged for each GP model.
Convergence of the proposed EGP-TS to the global optimum is further established through an analysis based on the notion of Bayesian regret.
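The random feature-based approximation referenced above is typically implemented with random Fourier features (RFF), which reduce GP function sampling to Bayesian linear regression in a finite feature space; this is what makes Thompson-sampling-style BO scale. A minimal sketch for the Gaussian kernel (parameter names are illustrative):

```python
import numpy as np

def rff_features(X, n_features=256, lengthscale=1.0, seed=0):
    # z(x) = sqrt(2/D) * cos(x @ W + b) with W ~ N(0, I / lengthscale^2)
    # and b ~ Uniform[0, 2*pi) gives k(x, y) ~= z(x) @ z(y) for the
    # Gaussian kernel exp(-||x - y||^2 / (2 * lengthscale^2)).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / lengthscale, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```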
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL).
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z)
- Hybrid Random Features [60.116392415715275]
We propose a new class of random feature methods for linearizing softmax and Gaussian kernels, called hybrid random features (HRFs).
HRFs automatically adapt the quality of kernel estimation to provide most accurate approximation in the defined regions of interest.
arXiv Detail & Related papers (2021-10-08T20:22:59Z)
- Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z)
- From Majorization to Interpolation: Distributionally Robust Learning using Kernel Smoothing [1.2891210250935146]
We study the function approximation aspect of distributionally robust optimization (DRO) based on probability metrics.
This paper proposes robust learning algorithms based instead on smooth function approximation and convolution.
arXiv Detail & Related papers (2021-02-16T22:25:18Z)