Learning non-stationary and discontinuous functions using clustering,
classification and Gaussian process modelling
- URL: http://arxiv.org/abs/2211.16909v1
- Date: Wed, 30 Nov 2022 11:11:56 GMT
- Title: Learning non-stationary and discontinuous functions using clustering,
classification and Gaussian process modelling
- Authors: M. Moustapha and B. Sudret
- Abstract summary: We propose a three-stage approach for the approximation of non-smooth functions.
The idea is to split the space following the localized behaviors or regimes of the system and build local surrogates.
The approach is tested and validated on two analytical functions and a finite element model of a tensile membrane structure.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Surrogate models have proven to be an extremely efficient aid in solving
engineering problems that require repeated evaluations of an expensive
computational model. They are built by sparsely evaluating the costly original
model and have provided a way to solve otherwise intractable problems. A
crucial aspect in surrogate modelling is the assumption of smoothness and
regularity of the model to approximate. However, this assumption is not always
met in practice. For instance, in civil or mechanical engineering, some models
may present discontinuities or non-smoothness, e.g., in case of instability
patterns such as buckling or snap-through. Building a single surrogate model
capable of accounting for these fundamentally different behaviors or
discontinuities is not an easy task. In this paper, we propose a three-stage
approach for the approximation of non-smooth functions which combines
clustering, classification and regression. The idea is to split the space
following the localized behaviors or regimes of the system and build local
surrogates that are eventually assembled. A sequence of well-known machine
learning techniques is used: Dirichlet process mixture models (DPMM), support
vector machines and Gaussian process modelling. The approach is tested and
validated on two analytical functions and a finite element model of a tensile
membrane structure.
Related papers
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
arXiv Detail & Related papers (2024-03-07T19:02:13Z)
- Hierarchical-Hyperplane Kernels for Actively Learning Gaussian Process Models of Nonstationary Systems [5.1672267755831705]
We present a kernel family that incorporates a partitioning that is learnable via gradient-based methods.
We empirically demonstrate excellent performance on various active learning tasks.
arXiv Detail & Related papers (2023-03-17T14:50:51Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- Network Estimation by Mixing: Adaptivity and More [2.3478438171452014]
We propose a mixing strategy that leverages available arbitrary models to improve their individual performances.
The proposed method is computationally efficient and almost tuning-free.
We show that the proposed method performs as well as the oracle estimate when the true model is included among the individual candidates.
arXiv Detail & Related papers (2021-06-05T05:17:04Z)
- Mixtures of Gaussian Processes for regression under multiple prior distributions [0.0]
We extend the idea of Mixture models for Gaussian Process regression in order to work with multiple prior beliefs at once.
We consider the usage of our approach to additionally account for the problem of prior misspecification in functional regression problems.
arXiv Detail & Related papers (2021-04-19T10:19:14Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets simultaneously.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Non-parametric Models for Non-negative Functions [48.7576911714538]
We provide the first model for non-negative functions that retains the same good properties as linear models.
We prove that it admits a representer theorem and provide an efficient dual formulation for convex problems.
arXiv Detail & Related papers (2020-07-08T07:17:28Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.